Oct  1 11:37:33 np0005464891 kernel: Linux version 5.14.0-617.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Mon Sep 15 21:46:13 UTC 2025
Oct  1 11:37:33 np0005464891 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct  1 11:37:33 np0005464891 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-617.el9.x86_64 root=UUID=d6a81468-b74c-4055-b485-def635ab40f8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  1 11:37:33 np0005464891 kernel: BIOS-provided physical RAM map:
Oct  1 11:37:33 np0005464891 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct  1 11:37:33 np0005464891 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct  1 11:37:33 np0005464891 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct  1 11:37:33 np0005464891 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct  1 11:37:33 np0005464891 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct  1 11:37:33 np0005464891 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct  1 11:37:33 np0005464891 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct  1 11:37:33 np0005464891 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct  1 11:37:33 np0005464891 kernel: NX (Execute Disable) protection: active
Oct  1 11:37:33 np0005464891 kernel: APIC: Static calls initialized
Oct  1 11:37:33 np0005464891 kernel: SMBIOS 2.8 present.
Oct  1 11:37:33 np0005464891 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct  1 11:37:33 np0005464891 kernel: Hypervisor detected: KVM
Oct  1 11:37:33 np0005464891 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct  1 11:37:33 np0005464891 kernel: kvm-clock: using sched offset of 5020082611 cycles
Oct  1 11:37:33 np0005464891 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct  1 11:37:33 np0005464891 kernel: tsc: Detected 2799.998 MHz processor
Oct  1 11:37:33 np0005464891 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct  1 11:37:33 np0005464891 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct  1 11:37:33 np0005464891 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct  1 11:37:33 np0005464891 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct  1 11:37:33 np0005464891 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct  1 11:37:33 np0005464891 kernel: Using GB pages for direct mapping
Oct  1 11:37:33 np0005464891 kernel: RAMDISK: [mem 0x2d7d0000-0x32bdffff]
Oct  1 11:37:33 np0005464891 kernel: ACPI: Early table checksum verification disabled
Oct  1 11:37:33 np0005464891 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct  1 11:37:33 np0005464891 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  1 11:37:33 np0005464891 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  1 11:37:33 np0005464891 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  1 11:37:33 np0005464891 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct  1 11:37:33 np0005464891 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  1 11:37:33 np0005464891 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  1 11:37:33 np0005464891 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct  1 11:37:33 np0005464891 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct  1 11:37:33 np0005464891 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct  1 11:37:33 np0005464891 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct  1 11:37:33 np0005464891 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct  1 11:37:33 np0005464891 kernel: No NUMA configuration found
Oct  1 11:37:33 np0005464891 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct  1 11:37:33 np0005464891 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct  1 11:37:33 np0005464891 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct  1 11:37:33 np0005464891 kernel: Zone ranges:
Oct  1 11:37:33 np0005464891 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct  1 11:37:33 np0005464891 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct  1 11:37:33 np0005464891 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct  1 11:37:33 np0005464891 kernel:  Device   empty
Oct  1 11:37:33 np0005464891 kernel: Movable zone start for each node
Oct  1 11:37:33 np0005464891 kernel: Early memory node ranges
Oct  1 11:37:33 np0005464891 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct  1 11:37:33 np0005464891 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct  1 11:37:33 np0005464891 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct  1 11:37:33 np0005464891 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct  1 11:37:33 np0005464891 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct  1 11:37:33 np0005464891 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct  1 11:37:33 np0005464891 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct  1 11:37:33 np0005464891 kernel: ACPI: PM-Timer IO Port: 0x608
Oct  1 11:37:33 np0005464891 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct  1 11:37:33 np0005464891 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct  1 11:37:33 np0005464891 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct  1 11:37:33 np0005464891 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct  1 11:37:33 np0005464891 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct  1 11:37:33 np0005464891 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct  1 11:37:33 np0005464891 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct  1 11:37:33 np0005464891 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct  1 11:37:33 np0005464891 kernel: TSC deadline timer available
Oct  1 11:37:33 np0005464891 kernel: CPU topo: Max. logical packages:   8
Oct  1 11:37:33 np0005464891 kernel: CPU topo: Max. logical dies:       8
Oct  1 11:37:33 np0005464891 kernel: CPU topo: Max. dies per package:   1
Oct  1 11:37:33 np0005464891 kernel: CPU topo: Max. threads per core:   1
Oct  1 11:37:33 np0005464891 kernel: CPU topo: Num. cores per package:     1
Oct  1 11:37:33 np0005464891 kernel: CPU topo: Num. threads per package:   1
Oct  1 11:37:33 np0005464891 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct  1 11:37:33 np0005464891 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct  1 11:37:33 np0005464891 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct  1 11:37:33 np0005464891 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct  1 11:37:33 np0005464891 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct  1 11:37:33 np0005464891 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct  1 11:37:33 np0005464891 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct  1 11:37:33 np0005464891 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct  1 11:37:33 np0005464891 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct  1 11:37:33 np0005464891 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct  1 11:37:33 np0005464891 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct  1 11:37:33 np0005464891 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct  1 11:37:33 np0005464891 kernel: Booting paravirtualized kernel on KVM
Oct  1 11:37:33 np0005464891 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct  1 11:37:33 np0005464891 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct  1 11:37:33 np0005464891 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct  1 11:37:33 np0005464891 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct  1 11:37:33 np0005464891 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-617.el9.x86_64 root=UUID=d6a81468-b74c-4055-b485-def635ab40f8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  1 11:37:33 np0005464891 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-617.el9.x86_64", will be passed to user space.
Oct  1 11:37:33 np0005464891 kernel: random: crng init done
Oct  1 11:37:33 np0005464891 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct  1 11:37:33 np0005464891 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct  1 11:37:33 np0005464891 kernel: Fallback order for Node 0: 0 
Oct  1 11:37:33 np0005464891 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct  1 11:37:33 np0005464891 kernel: Policy zone: Normal
Oct  1 11:37:33 np0005464891 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct  1 11:37:33 np0005464891 kernel: software IO TLB: area num 8.
Oct  1 11:37:33 np0005464891 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct  1 11:37:33 np0005464891 kernel: ftrace: allocating 49329 entries in 193 pages
Oct  1 11:37:33 np0005464891 kernel: ftrace: allocated 193 pages with 3 groups
Oct  1 11:37:33 np0005464891 kernel: Dynamic Preempt: voluntary
Oct  1 11:37:33 np0005464891 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct  1 11:37:33 np0005464891 kernel: rcu: 	RCU event tracing is enabled.
Oct  1 11:37:33 np0005464891 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct  1 11:37:33 np0005464891 kernel: 	Trampoline variant of Tasks RCU enabled.
Oct  1 11:37:33 np0005464891 kernel: 	Rude variant of Tasks RCU enabled.
Oct  1 11:37:33 np0005464891 kernel: 	Tracing variant of Tasks RCU enabled.
Oct  1 11:37:33 np0005464891 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct  1 11:37:33 np0005464891 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct  1 11:37:33 np0005464891 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  1 11:37:33 np0005464891 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  1 11:37:33 np0005464891 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  1 11:37:33 np0005464891 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct  1 11:37:33 np0005464891 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct  1 11:37:33 np0005464891 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct  1 11:37:33 np0005464891 kernel: Console: colour VGA+ 80x25
Oct  1 11:37:33 np0005464891 kernel: printk: console [ttyS0] enabled
Oct  1 11:37:33 np0005464891 kernel: ACPI: Core revision 20230331
Oct  1 11:37:33 np0005464891 kernel: APIC: Switch to symmetric I/O mode setup
Oct  1 11:37:33 np0005464891 kernel: x2apic enabled
Oct  1 11:37:33 np0005464891 kernel: APIC: Switched APIC routing to: physical x2apic
Oct  1 11:37:33 np0005464891 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct  1 11:37:33 np0005464891 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Oct  1 11:37:33 np0005464891 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct  1 11:37:33 np0005464891 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct  1 11:37:33 np0005464891 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct  1 11:37:33 np0005464891 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct  1 11:37:33 np0005464891 kernel: Spectre V2 : Mitigation: Retpolines
Oct  1 11:37:33 np0005464891 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct  1 11:37:33 np0005464891 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct  1 11:37:33 np0005464891 kernel: RETBleed: Mitigation: untrained return thunk
Oct  1 11:37:33 np0005464891 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct  1 11:37:33 np0005464891 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct  1 11:37:33 np0005464891 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct  1 11:37:33 np0005464891 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct  1 11:37:33 np0005464891 kernel: x86/bugs: return thunk changed
Oct  1 11:37:33 np0005464891 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct  1 11:37:33 np0005464891 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct  1 11:37:33 np0005464891 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct  1 11:37:33 np0005464891 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct  1 11:37:33 np0005464891 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct  1 11:37:33 np0005464891 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct  1 11:37:33 np0005464891 kernel: Freeing SMP alternatives memory: 40K
Oct  1 11:37:33 np0005464891 kernel: pid_max: default: 32768 minimum: 301
Oct  1 11:37:33 np0005464891 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct  1 11:37:33 np0005464891 kernel: landlock: Up and running.
Oct  1 11:37:33 np0005464891 kernel: Yama: becoming mindful.
Oct  1 11:37:33 np0005464891 kernel: SELinux:  Initializing.
Oct  1 11:37:33 np0005464891 kernel: LSM support for eBPF active
Oct  1 11:37:33 np0005464891 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  1 11:37:33 np0005464891 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  1 11:37:33 np0005464891 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct  1 11:37:33 np0005464891 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct  1 11:37:33 np0005464891 kernel: ... version:                0
Oct  1 11:37:33 np0005464891 kernel: ... bit width:              48
Oct  1 11:37:33 np0005464891 kernel: ... generic registers:      6
Oct  1 11:37:33 np0005464891 kernel: ... value mask:             0000ffffffffffff
Oct  1 11:37:33 np0005464891 kernel: ... max period:             00007fffffffffff
Oct  1 11:37:33 np0005464891 kernel: ... fixed-purpose events:   0
Oct  1 11:37:33 np0005464891 kernel: ... event mask:             000000000000003f
Oct  1 11:37:33 np0005464891 kernel: signal: max sigframe size: 1776
Oct  1 11:37:33 np0005464891 kernel: rcu: Hierarchical SRCU implementation.
Oct  1 11:37:33 np0005464891 kernel: rcu: 	Max phase no-delay instances is 400.
Oct  1 11:37:33 np0005464891 kernel: smp: Bringing up secondary CPUs ...
Oct  1 11:37:33 np0005464891 kernel: smpboot: x86: Booting SMP configuration:
Oct  1 11:37:33 np0005464891 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct  1 11:37:33 np0005464891 kernel: smp: Brought up 1 node, 8 CPUs
Oct  1 11:37:33 np0005464891 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Oct  1 11:37:33 np0005464891 kernel: node 0 deferred pages initialised in 19ms
Oct  1 11:37:33 np0005464891 kernel: Memory: 7765572K/8388068K available (16384K kernel code, 5784K rwdata, 13988K rodata, 4072K init, 7304K bss, 616480K reserved, 0K cma-reserved)
Oct  1 11:37:33 np0005464891 kernel: devtmpfs: initialized
Oct  1 11:37:33 np0005464891 kernel: x86/mm: Memory block size: 128MB
Oct  1 11:37:33 np0005464891 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct  1 11:37:33 np0005464891 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct  1 11:37:33 np0005464891 kernel: pinctrl core: initialized pinctrl subsystem
Oct  1 11:37:33 np0005464891 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct  1 11:37:33 np0005464891 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct  1 11:37:33 np0005464891 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct  1 11:37:33 np0005464891 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct  1 11:37:33 np0005464891 kernel: audit: initializing netlink subsys (disabled)
Oct  1 11:37:33 np0005464891 kernel: audit: type=2000 audit(1759333051.566:1): state=initialized audit_enabled=0 res=1
Oct  1 11:37:33 np0005464891 kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct  1 11:37:33 np0005464891 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct  1 11:37:33 np0005464891 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct  1 11:37:33 np0005464891 kernel: cpuidle: using governor menu
Oct  1 11:37:33 np0005464891 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct  1 11:37:33 np0005464891 kernel: PCI: Using configuration type 1 for base access
Oct  1 11:37:33 np0005464891 kernel: PCI: Using configuration type 1 for extended access
Oct  1 11:37:33 np0005464891 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct  1 11:37:33 np0005464891 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct  1 11:37:33 np0005464891 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct  1 11:37:33 np0005464891 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct  1 11:37:33 np0005464891 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct  1 11:37:33 np0005464891 kernel: Demotion targets for Node 0: null
Oct  1 11:37:33 np0005464891 kernel: cryptd: max_cpu_qlen set to 1000
Oct  1 11:37:33 np0005464891 kernel: ACPI: Added _OSI(Module Device)
Oct  1 11:37:33 np0005464891 kernel: ACPI: Added _OSI(Processor Device)
Oct  1 11:37:33 np0005464891 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct  1 11:37:33 np0005464891 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct  1 11:37:33 np0005464891 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct  1 11:37:33 np0005464891 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct  1 11:37:33 np0005464891 kernel: ACPI: Interpreter enabled
Oct  1 11:37:33 np0005464891 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct  1 11:37:33 np0005464891 kernel: ACPI: Using IOAPIC for interrupt routing
Oct  1 11:37:33 np0005464891 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct  1 11:37:33 np0005464891 kernel: PCI: Using E820 reservations for host bridge windows
Oct  1 11:37:33 np0005464891 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct  1 11:37:33 np0005464891 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct  1 11:37:33 np0005464891 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [3] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [4] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [5] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [6] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [7] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [8] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [9] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [10] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [11] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [12] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [13] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [14] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [15] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [16] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [17] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [18] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [19] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [20] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [21] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [22] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [23] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [24] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [25] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [26] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [27] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [28] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [29] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [30] registered
Oct  1 11:37:33 np0005464891 kernel: acpiphp: Slot [31] registered
Oct  1 11:37:33 np0005464891 kernel: PCI host bridge to bus 0000:00
Oct  1 11:37:33 np0005464891 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct  1 11:37:33 np0005464891 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct  1 11:37:33 np0005464891 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct  1 11:37:33 np0005464891 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct  1 11:37:33 np0005464891 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct  1 11:37:33 np0005464891 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct  1 11:37:33 np0005464891 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct  1 11:37:33 np0005464891 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct  1 11:37:33 np0005464891 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct  1 11:37:33 np0005464891 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct  1 11:37:33 np0005464891 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct  1 11:37:33 np0005464891 kernel: iommu: Default domain type: Translated
Oct  1 11:37:33 np0005464891 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct  1 11:37:33 np0005464891 kernel: SCSI subsystem initialized
Oct  1 11:37:33 np0005464891 kernel: ACPI: bus type USB registered
Oct  1 11:37:33 np0005464891 kernel: usbcore: registered new interface driver usbfs
Oct  1 11:37:33 np0005464891 kernel: usbcore: registered new interface driver hub
Oct  1 11:37:33 np0005464891 kernel: usbcore: registered new device driver usb
Oct  1 11:37:33 np0005464891 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct  1 11:37:33 np0005464891 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct  1 11:37:33 np0005464891 kernel: PTP clock support registered
Oct  1 11:37:33 np0005464891 kernel: EDAC MC: Ver: 3.0.0
Oct  1 11:37:33 np0005464891 kernel: NetLabel: Initializing
Oct  1 11:37:33 np0005464891 kernel: NetLabel:  domain hash size = 128
Oct  1 11:37:33 np0005464891 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct  1 11:37:33 np0005464891 kernel: NetLabel:  unlabeled traffic allowed by default
Oct  1 11:37:33 np0005464891 kernel: PCI: Using ACPI for IRQ routing
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct  1 11:37:33 np0005464891 kernel: vgaarb: loaded
Oct  1 11:37:33 np0005464891 kernel: clocksource: Switched to clocksource kvm-clock
Oct  1 11:37:33 np0005464891 kernel: VFS: Disk quotas dquot_6.6.0
Oct  1 11:37:33 np0005464891 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct  1 11:37:33 np0005464891 kernel: pnp: PnP ACPI init
Oct  1 11:37:33 np0005464891 kernel: pnp: PnP ACPI: found 5 devices
Oct  1 11:37:33 np0005464891 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct  1 11:37:33 np0005464891 kernel: NET: Registered PF_INET protocol family
Oct  1 11:37:33 np0005464891 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct  1 11:37:33 np0005464891 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct  1 11:37:33 np0005464891 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct  1 11:37:33 np0005464891 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct  1 11:37:33 np0005464891 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct  1 11:37:33 np0005464891 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct  1 11:37:33 np0005464891 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct  1 11:37:33 np0005464891 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  1 11:37:33 np0005464891 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  1 11:37:33 np0005464891 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct  1 11:37:33 np0005464891 kernel: NET: Registered PF_XDP protocol family
Oct  1 11:37:33 np0005464891 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct  1 11:37:33 np0005464891 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct  1 11:37:33 np0005464891 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct  1 11:37:33 np0005464891 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct  1 11:37:33 np0005464891 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct  1 11:37:33 np0005464891 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct  1 11:37:33 np0005464891 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 78992 usecs
Oct  1 11:37:33 np0005464891 kernel: PCI: CLS 0 bytes, default 64
Oct  1 11:37:33 np0005464891 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct  1 11:37:33 np0005464891 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct  1 11:37:33 np0005464891 kernel: ACPI: bus type thunderbolt registered
Oct  1 11:37:33 np0005464891 kernel: Trying to unpack rootfs image as initramfs...
Oct  1 11:37:33 np0005464891 kernel: Initialise system trusted keyrings
Oct  1 11:37:33 np0005464891 kernel: Key type blacklist registered
Oct  1 11:37:33 np0005464891 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct  1 11:37:33 np0005464891 kernel: zbud: loaded
Oct  1 11:37:33 np0005464891 kernel: integrity: Platform Keyring initialized
Oct  1 11:37:33 np0005464891 kernel: integrity: Machine keyring initialized
Oct  1 11:37:33 np0005464891 kernel: Freeing initrd memory: 86080K
Oct  1 11:37:33 np0005464891 kernel: NET: Registered PF_ALG protocol family
Oct  1 11:37:33 np0005464891 kernel: xor: automatically using best checksumming function   avx       
Oct  1 11:37:33 np0005464891 kernel: Key type asymmetric registered
Oct  1 11:37:33 np0005464891 kernel: Asymmetric key parser 'x509' registered
Oct  1 11:37:33 np0005464891 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct  1 11:37:33 np0005464891 kernel: io scheduler mq-deadline registered
Oct  1 11:37:33 np0005464891 kernel: io scheduler kyber registered
Oct  1 11:37:33 np0005464891 kernel: io scheduler bfq registered
Oct  1 11:37:33 np0005464891 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct  1 11:37:33 np0005464891 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct  1 11:37:33 np0005464891 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct  1 11:37:33 np0005464891 kernel: ACPI: button: Power Button [PWRF]
Oct  1 11:37:33 np0005464891 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct  1 11:37:33 np0005464891 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct  1 11:37:33 np0005464891 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct  1 11:37:33 np0005464891 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct  1 11:37:33 np0005464891 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct  1 11:37:33 np0005464891 kernel: Non-volatile memory driver v1.3
Oct  1 11:37:33 np0005464891 kernel: rdac: device handler registered
Oct  1 11:37:33 np0005464891 kernel: hp_sw: device handler registered
Oct  1 11:37:33 np0005464891 kernel: emc: device handler registered
Oct  1 11:37:33 np0005464891 kernel: alua: device handler registered
Oct  1 11:37:33 np0005464891 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct  1 11:37:33 np0005464891 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct  1 11:37:33 np0005464891 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct  1 11:37:33 np0005464891 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct  1 11:37:33 np0005464891 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct  1 11:37:33 np0005464891 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct  1 11:37:33 np0005464891 kernel: usb usb1: Product: UHCI Host Controller
Oct  1 11:37:33 np0005464891 kernel: usb usb1: Manufacturer: Linux 5.14.0-617.el9.x86_64 uhci_hcd
Oct  1 11:37:33 np0005464891 kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct  1 11:37:33 np0005464891 kernel: hub 1-0:1.0: USB hub found
Oct  1 11:37:33 np0005464891 kernel: hub 1-0:1.0: 2 ports detected
Oct  1 11:37:33 np0005464891 kernel: usbcore: registered new interface driver usbserial_generic
Oct  1 11:37:33 np0005464891 kernel: usbserial: USB Serial support registered for generic
Oct  1 11:37:33 np0005464891 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct  1 11:37:33 np0005464891 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct  1 11:37:33 np0005464891 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct  1 11:37:33 np0005464891 kernel: mousedev: PS/2 mouse device common for all mice
Oct  1 11:37:33 np0005464891 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct  1 11:37:33 np0005464891 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct  1 11:37:33 np0005464891 kernel: rtc_cmos 00:04: registered as rtc0
Oct  1 11:37:33 np0005464891 kernel: rtc_cmos 00:04: setting system clock to 2025-10-01T15:37:32 UTC (1759333052)
Oct  1 11:37:33 np0005464891 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct  1 11:37:33 np0005464891 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct  1 11:37:33 np0005464891 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct  1 11:37:33 np0005464891 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct  1 11:37:33 np0005464891 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct  1 11:37:33 np0005464891 kernel: usbcore: registered new interface driver usbhid
Oct  1 11:37:33 np0005464891 kernel: usbhid: USB HID core driver
Oct  1 11:37:33 np0005464891 kernel: drop_monitor: Initializing network drop monitor service
Oct  1 11:37:33 np0005464891 kernel: Initializing XFRM netlink socket
Oct  1 11:37:33 np0005464891 kernel: NET: Registered PF_INET6 protocol family
Oct  1 11:37:33 np0005464891 kernel: Segment Routing with IPv6
Oct  1 11:37:33 np0005464891 kernel: NET: Registered PF_PACKET protocol family
Oct  1 11:37:33 np0005464891 kernel: mpls_gso: MPLS GSO support
Oct  1 11:37:33 np0005464891 kernel: IPI shorthand broadcast: enabled
Oct  1 11:37:33 np0005464891 kernel: AVX2 version of gcm_enc/dec engaged.
Oct  1 11:37:33 np0005464891 kernel: AES CTR mode by8 optimization enabled
Oct  1 11:37:33 np0005464891 kernel: sched_clock: Marking stable (1178007884, 157258156)->(1455560120, -120294080)
Oct  1 11:37:33 np0005464891 kernel: registered taskstats version 1
Oct  1 11:37:33 np0005464891 kernel: Loading compiled-in X.509 certificates
Oct  1 11:37:33 np0005464891 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bb2966091bafcba340f8183756023c985dcc8fe9'
Oct  1 11:37:33 np0005464891 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct  1 11:37:33 np0005464891 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct  1 11:37:33 np0005464891 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct  1 11:37:33 np0005464891 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct  1 11:37:33 np0005464891 kernel: Demotion targets for Node 0: null
Oct  1 11:37:33 np0005464891 kernel: page_owner is disabled
Oct  1 11:37:33 np0005464891 kernel: Key type .fscrypt registered
Oct  1 11:37:33 np0005464891 kernel: Key type fscrypt-provisioning registered
Oct  1 11:37:33 np0005464891 kernel: Key type big_key registered
Oct  1 11:37:33 np0005464891 kernel: Key type encrypted registered
Oct  1 11:37:33 np0005464891 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct  1 11:37:33 np0005464891 kernel: Loading compiled-in module X.509 certificates
Oct  1 11:37:33 np0005464891 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bb2966091bafcba340f8183756023c985dcc8fe9'
Oct  1 11:37:33 np0005464891 kernel: ima: Allocated hash algorithm: sha256
Oct  1 11:37:33 np0005464891 kernel: ima: No architecture policies found
Oct  1 11:37:33 np0005464891 kernel: evm: Initialising EVM extended attributes:
Oct  1 11:37:33 np0005464891 kernel: evm: security.selinux
Oct  1 11:37:33 np0005464891 kernel: evm: security.SMACK64 (disabled)
Oct  1 11:37:33 np0005464891 kernel: evm: security.SMACK64EXEC (disabled)
Oct  1 11:37:33 np0005464891 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct  1 11:37:33 np0005464891 kernel: evm: security.SMACK64MMAP (disabled)
Oct  1 11:37:33 np0005464891 kernel: evm: security.apparmor (disabled)
Oct  1 11:37:33 np0005464891 kernel: evm: security.ima
Oct  1 11:37:33 np0005464891 kernel: evm: security.capability
Oct  1 11:37:33 np0005464891 kernel: evm: HMAC attrs: 0x1
Oct  1 11:37:33 np0005464891 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct  1 11:37:33 np0005464891 kernel: Running certificate verification RSA selftest
Oct  1 11:37:33 np0005464891 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct  1 11:37:33 np0005464891 kernel: Running certificate verification ECDSA selftest
Oct  1 11:37:33 np0005464891 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct  1 11:37:33 np0005464891 kernel: clk: Disabling unused clocks
Oct  1 11:37:33 np0005464891 kernel: Freeing unused decrypted memory: 2028K
Oct  1 11:37:33 np0005464891 kernel: Freeing unused kernel image (initmem) memory: 4072K
Oct  1 11:37:33 np0005464891 kernel: Write protecting the kernel read-only data: 30720k
Oct  1 11:37:33 np0005464891 kernel: Freeing unused kernel image (rodata/data gap) memory: 348K
Oct  1 11:37:33 np0005464891 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct  1 11:37:33 np0005464891 kernel: Run /init as init process
Oct  1 11:37:33 np0005464891 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  1 11:37:33 np0005464891 systemd: Detected virtualization kvm.
Oct  1 11:37:33 np0005464891 systemd: Detected architecture x86-64.
Oct  1 11:37:33 np0005464891 systemd: Running in initrd.
Oct  1 11:37:33 np0005464891 systemd: No hostname configured, using default hostname.
Oct  1 11:37:33 np0005464891 systemd: Hostname set to <localhost>.
Oct  1 11:37:33 np0005464891 systemd: Initializing machine ID from VM UUID.
Oct  1 11:37:33 np0005464891 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct  1 11:37:33 np0005464891 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct  1 11:37:33 np0005464891 kernel: usb 1-1: Product: QEMU USB Tablet
Oct  1 11:37:33 np0005464891 kernel: usb 1-1: Manufacturer: QEMU
Oct  1 11:37:33 np0005464891 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct  1 11:37:33 np0005464891 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct  1 11:37:33 np0005464891 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct  1 11:37:33 np0005464891 systemd: Queued start job for default target Initrd Default Target.
Oct  1 11:37:33 np0005464891 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  1 11:37:33 np0005464891 systemd: Reached target Local Encrypted Volumes.
Oct  1 11:37:33 np0005464891 systemd: Reached target Initrd /usr File System.
Oct  1 11:37:33 np0005464891 systemd: Reached target Local File Systems.
Oct  1 11:37:33 np0005464891 systemd: Reached target Path Units.
Oct  1 11:37:33 np0005464891 systemd: Reached target Slice Units.
Oct  1 11:37:33 np0005464891 systemd: Reached target Swaps.
Oct  1 11:37:33 np0005464891 systemd: Reached target Timer Units.
Oct  1 11:37:33 np0005464891 systemd: Listening on D-Bus System Message Bus Socket.
Oct  1 11:37:33 np0005464891 systemd: Listening on Journal Socket (/dev/log).
Oct  1 11:37:33 np0005464891 systemd: Listening on Journal Socket.
Oct  1 11:37:33 np0005464891 systemd: Listening on udev Control Socket.
Oct  1 11:37:33 np0005464891 systemd: Listening on udev Kernel Socket.
Oct  1 11:37:33 np0005464891 systemd: Reached target Socket Units.
Oct  1 11:37:33 np0005464891 systemd: Starting Create List of Static Device Nodes...
Oct  1 11:37:33 np0005464891 systemd: Starting Journal Service...
Oct  1 11:37:33 np0005464891 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  1 11:37:33 np0005464891 systemd: Starting Apply Kernel Variables...
Oct  1 11:37:33 np0005464891 systemd: Starting Create System Users...
Oct  1 11:37:33 np0005464891 systemd: Starting Setup Virtual Console...
Oct  1 11:37:33 np0005464891 systemd: Finished Create List of Static Device Nodes.
Oct  1 11:37:33 np0005464891 systemd: Finished Apply Kernel Variables.
Oct  1 11:37:33 np0005464891 systemd: Finished Create System Users.
Oct  1 11:37:33 np0005464891 systemd-journald[313]: Journal started
Oct  1 11:37:33 np0005464891 systemd-journald[313]: Runtime Journal (/run/log/journal/9659e74716374bf98b69aeb4fd4304e0) is 8.0M, max 153.5M, 145.5M free.
Oct  1 11:37:33 np0005464891 systemd-sysusers[318]: Creating group 'users' with GID 100.
Oct  1 11:37:33 np0005464891 systemd-sysusers[318]: Creating group 'dbus' with GID 81.
Oct  1 11:37:33 np0005464891 systemd-sysusers[318]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct  1 11:37:33 np0005464891 systemd: Started Journal Service.
Oct  1 11:37:33 np0005464891 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  1 11:37:33 np0005464891 systemd[1]: Starting Create Volatile Files and Directories...
Oct  1 11:37:33 np0005464891 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  1 11:37:33 np0005464891 systemd[1]: Finished Create Volatile Files and Directories.
Oct  1 11:37:33 np0005464891 systemd[1]: Finished Setup Virtual Console.
Oct  1 11:37:33 np0005464891 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct  1 11:37:33 np0005464891 systemd[1]: Starting dracut cmdline hook...
Oct  1 11:37:33 np0005464891 dracut-cmdline[331]: dracut-9 dracut-057-102.git20250818.el9
Oct  1 11:37:33 np0005464891 dracut-cmdline[331]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-617.el9.x86_64 root=UUID=d6a81468-b74c-4055-b485-def635ab40f8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  1 11:37:33 np0005464891 systemd[1]: Finished dracut cmdline hook.
Oct  1 11:37:33 np0005464891 systemd[1]: Starting dracut pre-udev hook...
Oct  1 11:37:33 np0005464891 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct  1 11:37:33 np0005464891 kernel: device-mapper: uevent: version 1.0.3
Oct  1 11:37:33 np0005464891 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct  1 11:37:33 np0005464891 kernel: RPC: Registered named UNIX socket transport module.
Oct  1 11:37:33 np0005464891 kernel: RPC: Registered udp transport module.
Oct  1 11:37:33 np0005464891 kernel: RPC: Registered tcp transport module.
Oct  1 11:37:33 np0005464891 kernel: RPC: Registered tcp-with-tls transport module.
Oct  1 11:37:33 np0005464891 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct  1 11:37:33 np0005464891 rpc.statd[448]: Version 2.5.4 starting
Oct  1 11:37:33 np0005464891 rpc.statd[448]: Initializing NSM state
Oct  1 11:37:33 np0005464891 rpc.idmapd[453]: Setting log level to 0
Oct  1 11:37:33 np0005464891 systemd[1]: Finished dracut pre-udev hook.
Oct  1 11:37:33 np0005464891 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  1 11:37:33 np0005464891 systemd-udevd[466]: Using default interface naming scheme 'rhel-9.0'.
Oct  1 11:37:33 np0005464891 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  1 11:37:33 np0005464891 systemd[1]: Starting dracut pre-trigger hook...
Oct  1 11:37:33 np0005464891 systemd[1]: Finished dracut pre-trigger hook.
Oct  1 11:37:33 np0005464891 systemd[1]: Starting Coldplug All udev Devices...
Oct  1 11:37:33 np0005464891 systemd[1]: Created slice Slice /system/modprobe.
Oct  1 11:37:33 np0005464891 systemd[1]: Starting Load Kernel Module configfs...
Oct  1 11:37:33 np0005464891 systemd[1]: Finished Coldplug All udev Devices.
Oct  1 11:37:33 np0005464891 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  1 11:37:33 np0005464891 systemd[1]: Finished Load Kernel Module configfs.
Oct  1 11:37:33 np0005464891 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  1 11:37:33 np0005464891 systemd[1]: Reached target Network.
Oct  1 11:37:33 np0005464891 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  1 11:37:33 np0005464891 systemd[1]: Starting dracut initqueue hook...
Oct  1 11:37:33 np0005464891 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct  1 11:37:33 np0005464891 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct  1 11:37:33 np0005464891 kernel: vda: vda1
Oct  1 11:37:33 np0005464891 systemd-udevd[482]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 11:37:33 np0005464891 kernel: scsi host0: ata_piix
Oct  1 11:37:33 np0005464891 kernel: scsi host1: ata_piix
Oct  1 11:37:33 np0005464891 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct  1 11:37:33 np0005464891 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct  1 11:37:34 np0005464891 systemd[1]: Found device /dev/disk/by-uuid/d6a81468-b74c-4055-b485-def635ab40f8.
Oct  1 11:37:34 np0005464891 systemd[1]: Reached target Initrd Root Device.
Oct  1 11:37:34 np0005464891 kernel: ata1: found unknown device (class 0)
Oct  1 11:37:34 np0005464891 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct  1 11:37:34 np0005464891 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct  1 11:37:34 np0005464891 systemd[1]: Mounting Kernel Configuration File System...
Oct  1 11:37:34 np0005464891 systemd[1]: Mounted Kernel Configuration File System.
Oct  1 11:37:34 np0005464891 systemd[1]: Reached target System Initialization.
Oct  1 11:37:34 np0005464891 systemd[1]: Reached target Basic System.
Oct  1 11:37:34 np0005464891 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct  1 11:37:34 np0005464891 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct  1 11:37:34 np0005464891 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct  1 11:37:34 np0005464891 systemd[1]: Finished dracut initqueue hook.
Oct  1 11:37:34 np0005464891 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  1 11:37:34 np0005464891 systemd[1]: Reached target Remote Encrypted Volumes.
Oct  1 11:37:34 np0005464891 systemd[1]: Reached target Remote File Systems.
Oct  1 11:37:34 np0005464891 systemd[1]: Starting dracut pre-mount hook...
Oct  1 11:37:34 np0005464891 systemd[1]: Finished dracut pre-mount hook.
Oct  1 11:37:34 np0005464891 systemd[1]: Starting File System Check on /dev/disk/by-uuid/d6a81468-b74c-4055-b485-def635ab40f8...
Oct  1 11:37:34 np0005464891 systemd-fsck[562]: /usr/sbin/fsck.xfs: XFS file system.
Oct  1 11:37:34 np0005464891 systemd[1]: Finished File System Check on /dev/disk/by-uuid/d6a81468-b74c-4055-b485-def635ab40f8.
Oct  1 11:37:34 np0005464891 systemd[1]: Mounting /sysroot...
Oct  1 11:37:34 np0005464891 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct  1 11:37:34 np0005464891 kernel: XFS (vda1): Mounting V5 Filesystem d6a81468-b74c-4055-b485-def635ab40f8
Oct  1 11:37:34 np0005464891 kernel: XFS (vda1): Ending clean mount
Oct  1 11:37:34 np0005464891 systemd[1]: Mounted /sysroot.
Oct  1 11:37:34 np0005464891 systemd[1]: Reached target Initrd Root File System.
Oct  1 11:37:34 np0005464891 systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct  1 11:37:34 np0005464891 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct  1 11:37:34 np0005464891 systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct  1 11:37:34 np0005464891 systemd[1]: Reached target Initrd File Systems.
Oct  1 11:37:34 np0005464891 systemd[1]: Reached target Initrd Default Target.
Oct  1 11:37:34 np0005464891 systemd[1]: Starting dracut mount hook...
Oct  1 11:37:34 np0005464891 systemd[1]: Finished dracut mount hook.
Oct  1 11:37:34 np0005464891 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct  1 11:37:35 np0005464891 rpc.idmapd[453]: exiting on signal 15
Oct  1 11:37:35 np0005464891 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct  1 11:37:35 np0005464891 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped target Network.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped target Remote Encrypted Volumes.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped target Timer Units.
Oct  1 11:37:35 np0005464891 systemd[1]: dbus.socket: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: Closed D-Bus System Message Bus Socket.
Oct  1 11:37:35 np0005464891 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped target Initrd Default Target.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped target Basic System.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped target Initrd Root Device.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped target Initrd /usr File System.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped target Path Units.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped target Remote File Systems.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped target Preparation for Remote File Systems.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped target Slice Units.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped target Socket Units.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped target System Initialization.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped target Local File Systems.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped target Swaps.
Oct  1 11:37:35 np0005464891 systemd[1]: dracut-mount.service: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped dracut mount hook.
Oct  1 11:37:35 np0005464891 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped dracut pre-mount hook.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped target Local Encrypted Volumes.
Oct  1 11:37:35 np0005464891 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct  1 11:37:35 np0005464891 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped dracut initqueue hook.
Oct  1 11:37:35 np0005464891 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped Apply Kernel Variables.
Oct  1 11:37:35 np0005464891 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped Create Volatile Files and Directories.
Oct  1 11:37:35 np0005464891 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped Coldplug All udev Devices.
Oct  1 11:37:35 np0005464891 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped dracut pre-trigger hook.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct  1 11:37:35 np0005464891 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped Setup Virtual Console.
Oct  1 11:37:35 np0005464891 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct  1 11:37:35 np0005464891 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct  1 11:37:35 np0005464891 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: Closed udev Control Socket.
Oct  1 11:37:35 np0005464891 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: Closed udev Kernel Socket.
Oct  1 11:37:35 np0005464891 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped dracut pre-udev hook.
Oct  1 11:37:35 np0005464891 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped dracut cmdline hook.
Oct  1 11:37:35 np0005464891 systemd[1]: Starting Cleanup udev Database...
Oct  1 11:37:35 np0005464891 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct  1 11:37:35 np0005464891 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped Create List of Static Device Nodes.
Oct  1 11:37:35 np0005464891 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: Stopped Create System Users.
Oct  1 11:37:35 np0005464891 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct  1 11:37:35 np0005464891 systemd[1]: Finished Cleanup udev Database.
Oct  1 11:37:35 np0005464891 systemd[1]: Reached target Switch Root.
Oct  1 11:37:35 np0005464891 systemd[1]: Starting Switch Root...
Oct  1 11:37:35 np0005464891 systemd[1]: Switching root.
Oct  1 11:37:35 np0005464891 systemd-journald[313]: Journal stopped
Oct  1 11:37:36 np0005464891 systemd-journald: Received SIGTERM from PID 1 (systemd).
Oct  1 11:37:36 np0005464891 kernel: audit: type=1404 audit(1759333055.513:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct  1 11:37:36 np0005464891 kernel: SELinux:  policy capability network_peer_controls=1
Oct  1 11:37:36 np0005464891 kernel: SELinux:  policy capability open_perms=1
Oct  1 11:37:36 np0005464891 kernel: SELinux:  policy capability extended_socket_class=1
Oct  1 11:37:36 np0005464891 kernel: SELinux:  policy capability always_check_network=0
Oct  1 11:37:36 np0005464891 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  1 11:37:36 np0005464891 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  1 11:37:36 np0005464891 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  1 11:37:36 np0005464891 kernel: audit: type=1403 audit(1759333055.660:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct  1 11:37:36 np0005464891 systemd: Successfully loaded SELinux policy in 151.160ms.
Oct  1 11:37:36 np0005464891 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.879ms.
Oct  1 11:37:36 np0005464891 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  1 11:37:36 np0005464891 systemd: Detected virtualization kvm.
Oct  1 11:37:36 np0005464891 systemd: Detected architecture x86-64.
Oct  1 11:37:36 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 11:37:36 np0005464891 systemd: initrd-switch-root.service: Deactivated successfully.
Oct  1 11:37:36 np0005464891 systemd: Stopped Switch Root.
Oct  1 11:37:36 np0005464891 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct  1 11:37:36 np0005464891 systemd: Created slice Slice /system/getty.
Oct  1 11:37:36 np0005464891 systemd: Created slice Slice /system/serial-getty.
Oct  1 11:37:36 np0005464891 systemd: Created slice Slice /system/sshd-keygen.
Oct  1 11:37:36 np0005464891 systemd: Created slice User and Session Slice.
Oct  1 11:37:36 np0005464891 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  1 11:37:36 np0005464891 systemd: Started Forward Password Requests to Wall Directory Watch.
Oct  1 11:37:36 np0005464891 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct  1 11:37:36 np0005464891 systemd: Reached target Local Encrypted Volumes.
Oct  1 11:37:36 np0005464891 systemd: Stopped target Switch Root.
Oct  1 11:37:36 np0005464891 systemd: Stopped target Initrd File Systems.
Oct  1 11:37:36 np0005464891 systemd: Stopped target Initrd Root File System.
Oct  1 11:37:36 np0005464891 systemd: Reached target Local Integrity Protected Volumes.
Oct  1 11:37:36 np0005464891 systemd: Reached target Path Units.
Oct  1 11:37:36 np0005464891 systemd: Reached target rpc_pipefs.target.
Oct  1 11:37:36 np0005464891 systemd: Reached target Slice Units.
Oct  1 11:37:36 np0005464891 systemd: Reached target Swaps.
Oct  1 11:37:36 np0005464891 systemd: Reached target Local Verity Protected Volumes.
Oct  1 11:37:36 np0005464891 systemd: Listening on RPCbind Server Activation Socket.
Oct  1 11:37:36 np0005464891 systemd: Reached target RPC Port Mapper.
Oct  1 11:37:36 np0005464891 systemd: Listening on Process Core Dump Socket.
Oct  1 11:37:36 np0005464891 systemd: Listening on initctl Compatibility Named Pipe.
Oct  1 11:37:36 np0005464891 systemd: Listening on udev Control Socket.
Oct  1 11:37:36 np0005464891 systemd: Listening on udev Kernel Socket.
Oct  1 11:37:36 np0005464891 systemd: Mounting Huge Pages File System...
Oct  1 11:37:36 np0005464891 systemd: Mounting POSIX Message Queue File System...
Oct  1 11:37:36 np0005464891 systemd: Mounting Kernel Debug File System...
Oct  1 11:37:36 np0005464891 systemd: Mounting Kernel Trace File System...
Oct  1 11:37:36 np0005464891 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  1 11:37:36 np0005464891 systemd: Starting Create List of Static Device Nodes...
Oct  1 11:37:36 np0005464891 systemd: Starting Load Kernel Module configfs...
Oct  1 11:37:36 np0005464891 systemd: Starting Load Kernel Module drm...
Oct  1 11:37:36 np0005464891 systemd: Starting Load Kernel Module efi_pstore...
Oct  1 11:37:36 np0005464891 systemd: Starting Load Kernel Module fuse...
Oct  1 11:37:36 np0005464891 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct  1 11:37:36 np0005464891 systemd: systemd-fsck-root.service: Deactivated successfully.
Oct  1 11:37:36 np0005464891 systemd: Stopped File System Check on Root Device.
Oct  1 11:37:36 np0005464891 systemd: Stopped Journal Service.
Oct  1 11:37:36 np0005464891 systemd: Starting Journal Service...
Oct  1 11:37:36 np0005464891 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  1 11:37:36 np0005464891 systemd: Starting Generate network units from Kernel command line...
Oct  1 11:37:36 np0005464891 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  1 11:37:36 np0005464891 systemd: Starting Remount Root and Kernel File Systems...
Oct  1 11:37:36 np0005464891 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct  1 11:37:36 np0005464891 kernel: fuse: init (API version 7.37)
Oct  1 11:37:36 np0005464891 systemd: Starting Apply Kernel Variables...
Oct  1 11:37:36 np0005464891 systemd: Starting Coldplug All udev Devices...
Oct  1 11:37:36 np0005464891 systemd: Mounted Huge Pages File System.
Oct  1 11:37:36 np0005464891 systemd: Mounted POSIX Message Queue File System.
Oct  1 11:37:36 np0005464891 systemd: Mounted Kernel Debug File System.
Oct  1 11:37:36 np0005464891 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct  1 11:37:36 np0005464891 systemd: Mounted Kernel Trace File System.
Oct  1 11:37:36 np0005464891 systemd-journald[686]: Journal started
Oct  1 11:37:36 np0005464891 systemd-journald[686]: Runtime Journal (/run/log/journal/21983c68f36a73745cc172a394ebc51d) is 8.0M, max 153.5M, 145.5M free.
Oct  1 11:37:36 np0005464891 systemd[1]: Queued start job for default target Multi-User System.
Oct  1 11:37:36 np0005464891 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct  1 11:37:36 np0005464891 systemd: Started Journal Service.
Oct  1 11:37:36 np0005464891 systemd[1]: Finished Create List of Static Device Nodes.
Oct  1 11:37:36 np0005464891 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  1 11:37:36 np0005464891 systemd[1]: Finished Load Kernel Module configfs.
Oct  1 11:37:36 np0005464891 kernel: ACPI: bus type drm_connector registered
Oct  1 11:37:36 np0005464891 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct  1 11:37:36 np0005464891 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct  1 11:37:36 np0005464891 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct  1 11:37:36 np0005464891 systemd[1]: Finished Load Kernel Module drm.
Oct  1 11:37:36 np0005464891 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct  1 11:37:36 np0005464891 systemd[1]: Finished Load Kernel Module fuse.
Oct  1 11:37:36 np0005464891 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct  1 11:37:36 np0005464891 systemd[1]: Finished Generate network units from Kernel command line.
Oct  1 11:37:36 np0005464891 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct  1 11:37:36 np0005464891 systemd[1]: Finished Apply Kernel Variables.
Oct  1 11:37:36 np0005464891 systemd[1]: Mounting FUSE Control File System...
Oct  1 11:37:36 np0005464891 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  1 11:37:36 np0005464891 systemd[1]: Starting Rebuild Hardware Database...
Oct  1 11:37:36 np0005464891 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct  1 11:37:36 np0005464891 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct  1 11:37:36 np0005464891 systemd[1]: Starting Load/Save OS Random Seed...
Oct  1 11:37:36 np0005464891 systemd[1]: Starting Create System Users...
Oct  1 11:37:36 np0005464891 systemd[1]: Mounted FUSE Control File System.
Oct  1 11:37:36 np0005464891 systemd-journald[686]: Runtime Journal (/run/log/journal/21983c68f36a73745cc172a394ebc51d) is 8.0M, max 153.5M, 145.5M free.
Oct  1 11:37:36 np0005464891 systemd-journald[686]: Received client request to flush runtime journal.
Oct  1 11:37:36 np0005464891 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct  1 11:37:36 np0005464891 systemd[1]: Finished Load/Save OS Random Seed.
Oct  1 11:37:36 np0005464891 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  1 11:37:36 np0005464891 systemd[1]: Finished Create System Users.
Oct  1 11:37:36 np0005464891 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  1 11:37:36 np0005464891 systemd[1]: Finished Coldplug All udev Devices.
Oct  1 11:37:36 np0005464891 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  1 11:37:36 np0005464891 systemd[1]: Reached target Preparation for Local File Systems.
Oct  1 11:37:36 np0005464891 systemd[1]: Reached target Local File Systems.
Oct  1 11:37:36 np0005464891 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct  1 11:37:36 np0005464891 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct  1 11:37:36 np0005464891 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct  1 11:37:36 np0005464891 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct  1 11:37:36 np0005464891 systemd[1]: Starting Automatic Boot Loader Update...
Oct  1 11:37:36 np0005464891 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct  1 11:37:36 np0005464891 systemd[1]: Starting Create Volatile Files and Directories...
Oct  1 11:37:36 np0005464891 bootctl[703]: Couldn't find EFI system partition, skipping.
Oct  1 11:37:36 np0005464891 systemd[1]: Finished Automatic Boot Loader Update.
Oct  1 11:37:36 np0005464891 systemd[1]: Finished Create Volatile Files and Directories.
Oct  1 11:37:36 np0005464891 systemd[1]: Starting Security Auditing Service...
Oct  1 11:37:36 np0005464891 systemd[1]: Starting RPC Bind...
Oct  1 11:37:36 np0005464891 systemd[1]: Starting Rebuild Journal Catalog...
Oct  1 11:37:36 np0005464891 auditd[709]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct  1 11:37:36 np0005464891 auditd[709]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct  1 11:37:36 np0005464891 systemd[1]: Finished Rebuild Journal Catalog.
Oct  1 11:37:36 np0005464891 systemd[1]: Started RPC Bind.
Oct  1 11:37:36 np0005464891 augenrules[714]: /sbin/augenrules: No change
Oct  1 11:37:36 np0005464891 augenrules[729]: No rules
Oct  1 11:37:36 np0005464891 augenrules[729]: enabled 1
Oct  1 11:37:36 np0005464891 augenrules[729]: failure 1
Oct  1 11:37:36 np0005464891 augenrules[729]: pid 709
Oct  1 11:37:36 np0005464891 augenrules[729]: rate_limit 0
Oct  1 11:37:36 np0005464891 augenrules[729]: backlog_limit 8192
Oct  1 11:37:36 np0005464891 augenrules[729]: lost 0
Oct  1 11:37:36 np0005464891 augenrules[729]: backlog 3
Oct  1 11:37:36 np0005464891 augenrules[729]: backlog_wait_time 60000
Oct  1 11:37:36 np0005464891 augenrules[729]: backlog_wait_time_actual 0
Oct  1 11:37:36 np0005464891 augenrules[729]: enabled 1
Oct  1 11:37:36 np0005464891 augenrules[729]: failure 1
Oct  1 11:37:36 np0005464891 augenrules[729]: pid 709
Oct  1 11:37:36 np0005464891 augenrules[729]: rate_limit 0
Oct  1 11:37:36 np0005464891 augenrules[729]: backlog_limit 8192
Oct  1 11:37:36 np0005464891 augenrules[729]: lost 0
Oct  1 11:37:36 np0005464891 augenrules[729]: backlog 0
Oct  1 11:37:36 np0005464891 augenrules[729]: backlog_wait_time 60000
Oct  1 11:37:36 np0005464891 augenrules[729]: backlog_wait_time_actual 0
Oct  1 11:37:36 np0005464891 augenrules[729]: enabled 1
Oct  1 11:37:36 np0005464891 augenrules[729]: failure 1
Oct  1 11:37:36 np0005464891 augenrules[729]: pid 709
Oct  1 11:37:36 np0005464891 augenrules[729]: rate_limit 0
Oct  1 11:37:36 np0005464891 augenrules[729]: backlog_limit 8192
Oct  1 11:37:36 np0005464891 augenrules[729]: lost 0
Oct  1 11:37:36 np0005464891 augenrules[729]: backlog 3
Oct  1 11:37:36 np0005464891 augenrules[729]: backlog_wait_time 60000
Oct  1 11:37:36 np0005464891 augenrules[729]: backlog_wait_time_actual 0
Oct  1 11:37:36 np0005464891 systemd[1]: Started Security Auditing Service.
Oct  1 11:37:36 np0005464891 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct  1 11:37:36 np0005464891 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct  1 11:37:36 np0005464891 systemd[1]: Finished Rebuild Hardware Database.
Oct  1 11:37:36 np0005464891 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  1 11:37:36 np0005464891 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct  1 11:37:37 np0005464891 systemd[1]: Starting Update is Completed...
Oct  1 11:37:37 np0005464891 systemd[1]: Finished Update is Completed.
Oct  1 11:37:37 np0005464891 systemd-udevd[737]: Using default interface naming scheme 'rhel-9.0'.
Oct  1 11:37:37 np0005464891 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  1 11:37:37 np0005464891 systemd[1]: Reached target System Initialization.
Oct  1 11:37:37 np0005464891 systemd[1]: Started dnf makecache --timer.
Oct  1 11:37:37 np0005464891 systemd[1]: Started Daily rotation of log files.
Oct  1 11:37:37 np0005464891 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct  1 11:37:37 np0005464891 systemd[1]: Reached target Timer Units.
Oct  1 11:37:37 np0005464891 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct  1 11:37:37 np0005464891 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct  1 11:37:37 np0005464891 systemd[1]: Reached target Socket Units.
Oct  1 11:37:37 np0005464891 systemd[1]: Starting D-Bus System Message Bus...
Oct  1 11:37:37 np0005464891 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  1 11:37:37 np0005464891 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct  1 11:37:37 np0005464891 systemd[1]: Starting Load Kernel Module configfs...
Oct  1 11:37:37 np0005464891 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  1 11:37:37 np0005464891 systemd[1]: Finished Load Kernel Module configfs.
Oct  1 11:37:37 np0005464891 systemd-udevd[744]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 11:37:37 np0005464891 systemd[1]: Started D-Bus System Message Bus.
Oct  1 11:37:37 np0005464891 systemd[1]: Reached target Basic System.
Oct  1 11:37:37 np0005464891 dbus-broker-lau[764]: Ready
Oct  1 11:37:37 np0005464891 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct  1 11:37:37 np0005464891 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct  1 11:37:37 np0005464891 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct  1 11:37:37 np0005464891 systemd[1]: Starting NTP client/server...
Oct  1 11:37:37 np0005464891 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct  1 11:37:37 np0005464891 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct  1 11:37:37 np0005464891 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct  1 11:37:37 np0005464891 systemd[1]: Starting IPv4 firewall with iptables...
Oct  1 11:37:37 np0005464891 systemd[1]: Started irqbalance daemon.
Oct  1 11:37:37 np0005464891 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct  1 11:37:37 np0005464891 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  1 11:37:37 np0005464891 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  1 11:37:37 np0005464891 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  1 11:37:37 np0005464891 systemd[1]: Reached target sshd-keygen.target.
Oct  1 11:37:37 np0005464891 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct  1 11:37:37 np0005464891 systemd[1]: Reached target User and Group Name Lookups.
Oct  1 11:37:37 np0005464891 systemd[1]: Starting User Login Management...
Oct  1 11:37:37 np0005464891 chronyd[803]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  1 11:37:37 np0005464891 chronyd[803]: Loaded 0 symmetric keys
Oct  1 11:37:37 np0005464891 chronyd[803]: Using right/UTC timezone to obtain leap second data
Oct  1 11:37:37 np0005464891 chronyd[803]: Loaded seccomp filter (level 2)
Oct  1 11:37:37 np0005464891 systemd[1]: Started NTP client/server.
Oct  1 11:37:37 np0005464891 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct  1 11:37:37 np0005464891 systemd-logind[801]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  1 11:37:37 np0005464891 systemd-logind[801]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  1 11:37:37 np0005464891 systemd-logind[801]: New seat seat0.
Oct  1 11:37:37 np0005464891 systemd[1]: Started User Login Management.
Oct  1 11:37:37 np0005464891 kernel: kvm_amd: TSC scaling supported
Oct  1 11:37:37 np0005464891 kernel: kvm_amd: Nested Virtualization enabled
Oct  1 11:37:37 np0005464891 kernel: kvm_amd: Nested Paging enabled
Oct  1 11:37:37 np0005464891 kernel: kvm_amd: LBR virtualization supported
Oct  1 11:37:37 np0005464891 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct  1 11:37:37 np0005464891 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct  1 11:37:37 np0005464891 kernel: Console: switching to colour dummy device 80x25
Oct  1 11:37:37 np0005464891 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct  1 11:37:37 np0005464891 kernel: [drm] features: -context_init
Oct  1 11:37:37 np0005464891 kernel: [drm] number of scanouts: 1
Oct  1 11:37:37 np0005464891 kernel: [drm] number of cap sets: 0
Oct  1 11:37:37 np0005464891 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct  1 11:37:37 np0005464891 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct  1 11:37:37 np0005464891 kernel: Console: switching to colour frame buffer device 128x48
Oct  1 11:37:37 np0005464891 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct  1 11:37:37 np0005464891 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct  1 11:37:37 np0005464891 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct  1 11:37:37 np0005464891 iptables.init[792]: iptables: Applying firewall rules: [  OK  ]
Oct  1 11:37:37 np0005464891 systemd[1]: Finished IPv4 firewall with iptables.
Oct  1 11:37:38 np0005464891 cloud-init[847]: Cloud-init v. 24.4-7.el9 running 'init-local' at Wed, 01 Oct 2025 15:37:38 +0000. Up 6.65 seconds.
Oct  1 11:37:38 np0005464891 systemd[1]: run-cloud\x2dinit-tmp-tmpgewmxo1d.mount: Deactivated successfully.
Oct  1 11:37:38 np0005464891 systemd[1]: Starting Hostname Service...
Oct  1 11:37:38 np0005464891 systemd[1]: Started Hostname Service.
Oct  1 11:37:38 np0005464891 systemd-hostnamed[861]: Hostname set to <np0005464891.novalocal> (static)
Oct  1 11:37:38 np0005464891 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct  1 11:37:38 np0005464891 systemd[1]: Reached target Preparation for Network.
Oct  1 11:37:38 np0005464891 systemd[1]: Starting Network Manager...
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.7276] NetworkManager (version 1.54.1-1.el9) is starting... (boot:253335a6-81f6-44a5-9cf6-dd6e8292df9d)
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.7282] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.7457] manager[0x5613e36dc080]: monitoring kernel firmware directory '/lib/firmware'.
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.7523] hostname: hostname: using hostnamed
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.7524] hostname: static hostname changed from (none) to "np0005464891.novalocal"
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.7530] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.7806] manager[0x5613e36dc080]: rfkill: Wi-Fi hardware radio set enabled
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.7808] manager[0x5613e36dc080]: rfkill: WWAN hardware radio set enabled
Oct  1 11:37:38 np0005464891 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.7958] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.7958] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.7959] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.7965] manager: Networking is enabled by state file
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.7968] settings: Loaded settings plugin: keyfile (internal)
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8013] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8040] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8064] dhcp: init: Using DHCP client 'internal'
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8066] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8079] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8092] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  1 11:37:38 np0005464891 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8099] device (lo): Activation: starting connection 'lo' (3276f51a-eaaa-4d65-b64d-271c8adeb767)
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8107] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8110] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 11:37:38 np0005464891 systemd[1]: Started Network Manager.
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8141] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8146] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8149] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8150] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8152] device (eth0): carrier: link connected
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8156] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  1 11:37:38 np0005464891 systemd[1]: Reached target Network.
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8162] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8174] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8179] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8180] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8182] manager: NetworkManager state is now CONNECTING
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8183] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8189] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8192] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  1 11:37:38 np0005464891 systemd[1]: Starting Network Manager Wait Online...
Oct  1 11:37:38 np0005464891 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct  1 11:37:38 np0005464891 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8277] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8280] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  1 11:37:38 np0005464891 NetworkManager[865]: <info>  [1759333058.8286] device (lo): Activation: successful, device activated.
Oct  1 11:37:38 np0005464891 systemd[1]: Started GSSAPI Proxy Daemon.
Oct  1 11:37:38 np0005464891 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  1 11:37:38 np0005464891 systemd[1]: Reached target NFS client services.
Oct  1 11:37:38 np0005464891 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  1 11:37:38 np0005464891 systemd[1]: Reached target Remote File Systems.
Oct  1 11:37:38 np0005464891 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  1 11:37:39 np0005464891 NetworkManager[865]: <info>  [1759333059.5306] dhcp4 (eth0): state changed new lease, address=38.102.83.177
Oct  1 11:37:39 np0005464891 NetworkManager[865]: <info>  [1759333059.5324] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  1 11:37:39 np0005464891 NetworkManager[865]: <info>  [1759333059.5350] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 11:37:39 np0005464891 NetworkManager[865]: <info>  [1759333059.5397] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 11:37:39 np0005464891 NetworkManager[865]: <info>  [1759333059.5400] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 11:37:39 np0005464891 NetworkManager[865]: <info>  [1759333059.5407] manager: NetworkManager state is now CONNECTED_SITE
Oct  1 11:37:39 np0005464891 NetworkManager[865]: <info>  [1759333059.5412] device (eth0): Activation: successful, device activated.
Oct  1 11:37:39 np0005464891 NetworkManager[865]: <info>  [1759333059.5418] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  1 11:37:39 np0005464891 NetworkManager[865]: <info>  [1759333059.5422] manager: startup complete
Oct  1 11:37:39 np0005464891 systemd[1]: Finished Network Manager Wait Online.
Oct  1 11:37:39 np0005464891 systemd[1]: Starting Cloud-init: Network Stage...
Oct  1 11:37:39 np0005464891 cloud-init[928]: Cloud-init v. 24.4-7.el9 running 'init' at Wed, 01 Oct 2025 15:37:39 +0000. Up 8.51 seconds.
Oct  1 11:37:39 np0005464891 cloud-init[928]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct  1 11:37:39 np0005464891 cloud-init[928]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  1 11:37:39 np0005464891 cloud-init[928]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct  1 11:37:39 np0005464891 cloud-init[928]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  1 11:37:39 np0005464891 cloud-init[928]: ci-info: |  eth0  | True |        38.102.83.177         | 255.255.255.0 | global | fa:16:3e:bc:4c:c5 |
Oct  1 11:37:39 np0005464891 cloud-init[928]: ci-info: |  eth0  | True | fe80::f816:3eff:febc:4cc5/64 |       .       |  link  | fa:16:3e:bc:4c:c5 |
Oct  1 11:37:39 np0005464891 cloud-init[928]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct  1 11:37:39 np0005464891 cloud-init[928]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct  1 11:37:39 np0005464891 cloud-init[928]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  1 11:37:39 np0005464891 cloud-init[928]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct  1 11:37:39 np0005464891 cloud-init[928]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct  1 11:37:39 np0005464891 cloud-init[928]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct  1 11:37:39 np0005464891 cloud-init[928]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct  1 11:37:39 np0005464891 cloud-init[928]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct  1 11:37:39 np0005464891 cloud-init[928]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct  1 11:37:39 np0005464891 cloud-init[928]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct  1 11:37:39 np0005464891 cloud-init[928]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct  1 11:37:39 np0005464891 cloud-init[928]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct  1 11:37:39 np0005464891 cloud-init[928]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  1 11:37:39 np0005464891 cloud-init[928]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct  1 11:37:39 np0005464891 cloud-init[928]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  1 11:37:39 np0005464891 cloud-init[928]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct  1 11:37:39 np0005464891 cloud-init[928]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Oct  1 11:37:39 np0005464891 cloud-init[928]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  1 11:37:41 np0005464891 cloud-init[928]: Generating public/private rsa key pair.
Oct  1 11:37:41 np0005464891 cloud-init[928]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct  1 11:37:41 np0005464891 cloud-init[928]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct  1 11:37:41 np0005464891 cloud-init[928]: The key fingerprint is:
Oct  1 11:37:41 np0005464891 cloud-init[928]: SHA256:5y77i/Y9M87jqbRePqbYn2F+xl031LQgP9637meECxE root@np0005464891.novalocal
Oct  1 11:37:41 np0005464891 cloud-init[928]: The key's randomart image is:
Oct  1 11:37:41 np0005464891 cloud-init[928]: +---[RSA 3072]----+
Oct  1 11:37:41 np0005464891 cloud-init[928]: |                 |
Oct  1 11:37:41 np0005464891 cloud-init[928]: |           .E.  .|
Oct  1 11:37:41 np0005464891 cloud-init[928]: |            o...o|
Oct  1 11:37:41 np0005464891 cloud-init[928]: |            .o o.|
Oct  1 11:37:41 np0005464891 cloud-init[928]: |        S . ..+. |
Oct  1 11:37:41 np0005464891 cloud-init[928]: |         o  ...o=|
Oct  1 11:37:41 np0005464891 cloud-init[928]: |          o +o +*|
Oct  1 11:37:41 np0005464891 cloud-init[928]: |        o* OB+=.+|
Oct  1 11:37:41 np0005464891 cloud-init[928]: |       .+*@O@Oo+.|
Oct  1 11:37:41 np0005464891 cloud-init[928]: +----[SHA256]-----+
Oct  1 11:37:41 np0005464891 cloud-init[928]: Generating public/private ecdsa key pair.
Oct  1 11:37:41 np0005464891 cloud-init[928]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct  1 11:37:41 np0005464891 cloud-init[928]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct  1 11:37:41 np0005464891 cloud-init[928]: The key fingerprint is:
Oct  1 11:37:41 np0005464891 cloud-init[928]: SHA256:10BTtgyLUqu5uj4t2Hy04aTjLmFrXiaIEDd7OEgbjfA root@np0005464891.novalocal
Oct  1 11:37:41 np0005464891 cloud-init[928]: The key's randomart image is:
Oct  1 11:37:41 np0005464891 cloud-init[928]: +---[ECDSA 256]---+
Oct  1 11:37:41 np0005464891 cloud-init[928]: |.       . +.o    |
Oct  1 11:37:41 np0005464891 cloud-init[928]: |..o    . + * .   |
Oct  1 11:37:41 np0005464891 cloud-init[928]: |.+E.  . o o o    |
Oct  1 11:37:41 np0005464891 cloud-init[928]: |.+o+   +   o     |
Oct  1 11:37:41 np0005464891 cloud-init[928]: |o.+ . o S . .    |
Oct  1 11:37:41 np0005464891 cloud-init[928]: |o.oo  +. .       |
Oct  1 11:37:41 np0005464891 cloud-init[928]: |o..*o*.o         |
Oct  1 11:37:41 np0005464891 cloud-init[928]: |  =+B.=          |
Oct  1 11:37:41 np0005464891 cloud-init[928]: | o.=**           |
Oct  1 11:37:41 np0005464891 cloud-init[928]: +----[SHA256]-----+
Oct  1 11:37:41 np0005464891 cloud-init[928]: Generating public/private ed25519 key pair.
Oct  1 11:37:41 np0005464891 cloud-init[928]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct  1 11:37:41 np0005464891 cloud-init[928]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct  1 11:37:41 np0005464891 cloud-init[928]: The key fingerprint is:
Oct  1 11:37:41 np0005464891 cloud-init[928]: SHA256:SbsvR6UnId1f7YLnaaVwzQ6Wi8iaKxUm3XjFdGK440s root@np0005464891.novalocal
Oct  1 11:37:41 np0005464891 cloud-init[928]: The key's randomart image is:
Oct  1 11:37:41 np0005464891 cloud-init[928]: +--[ED25519 256]--+
Oct  1 11:37:41 np0005464891 cloud-init[928]: |           ++ .  |
Oct  1 11:37:41 np0005464891 cloud-init[928]: |          ..oo   |
Oct  1 11:37:41 np0005464891 cloud-init[928]: |       ..+ +    .|
Oct  1 11:37:41 np0005464891 cloud-init[928]: |      ..*oB o   o|
Oct  1 11:37:41 np0005464891 cloud-init[928]: |       oS= = o * |
Oct  1 11:37:41 np0005464891 cloud-init[928]: |        ..E + O =|
Oct  1 11:37:41 np0005464891 cloud-init[928]: |       ..+ = B O |
Oct  1 11:37:41 np0005464891 cloud-init[928]: |      . .o= . * .|
Oct  1 11:37:41 np0005464891 cloud-init[928]: |       .+=.  .   |
Oct  1 11:37:41 np0005464891 cloud-init[928]: +----[SHA256]-----+
Oct  1 11:37:41 np0005464891 systemd[1]: Finished Cloud-init: Network Stage.
Oct  1 11:37:41 np0005464891 systemd[1]: Reached target Cloud-config availability.
Oct  1 11:37:41 np0005464891 systemd[1]: Reached target Network is Online.
Oct  1 11:37:41 np0005464891 systemd[1]: Starting Cloud-init: Config Stage...
Oct  1 11:37:41 np0005464891 systemd[1]: Starting Notify NFS peers of a restart...
Oct  1 11:37:41 np0005464891 systemd[1]: Starting System Logging Service...
Oct  1 11:37:41 np0005464891 systemd[1]: Starting OpenSSH server daemon...
Oct  1 11:37:41 np0005464891 sm-notify[1010]: Version 2.5.4 starting
Oct  1 11:37:41 np0005464891 systemd[1]: Starting Permit User Sessions...
Oct  1 11:37:41 np0005464891 systemd[1]: Started Notify NFS peers of a restart.
Oct  1 11:37:41 np0005464891 systemd[1]: Started OpenSSH server daemon.
Oct  1 11:37:41 np0005464891 systemd[1]: Finished Permit User Sessions.
Oct  1 11:37:41 np0005464891 systemd[1]: Started Command Scheduler.
Oct  1 11:37:41 np0005464891 systemd[1]: Started Getty on tty1.
Oct  1 11:37:41 np0005464891 systemd[1]: Started Serial Getty on ttyS0.
Oct  1 11:37:41 np0005464891 systemd[1]: Reached target Login Prompts.
Oct  1 11:37:41 np0005464891 rsyslogd[1011]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1011" x-info="https://www.rsyslog.com"] start
Oct  1 11:37:41 np0005464891 rsyslogd[1011]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct  1 11:37:41 np0005464891 systemd[1]: Started System Logging Service.
Oct  1 11:37:41 np0005464891 systemd[1]: Reached target Multi-User System.
Oct  1 11:37:41 np0005464891 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct  1 11:37:41 np0005464891 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct  1 11:37:41 np0005464891 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct  1 11:37:41 np0005464891 rsyslogd[1011]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 11:37:41 np0005464891 cloud-init[1023]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Wed, 01 Oct 2025 15:37:41 +0000. Up 10.56 seconds.
Oct  1 11:37:42 np0005464891 systemd[1]: Finished Cloud-init: Config Stage.
Oct  1 11:37:42 np0005464891 systemd[1]: Starting Cloud-init: Final Stage...
Oct  1 11:37:42 np0005464891 cloud-init[1027]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Wed, 01 Oct 2025 15:37:42 +0000. Up 10.95 seconds.
Oct  1 11:37:42 np0005464891 cloud-init[1029]: #############################################################
Oct  1 11:37:42 np0005464891 cloud-init[1030]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct  1 11:37:42 np0005464891 cloud-init[1032]: 256 SHA256:10BTtgyLUqu5uj4t2Hy04aTjLmFrXiaIEDd7OEgbjfA root@np0005464891.novalocal (ECDSA)
Oct  1 11:37:42 np0005464891 cloud-init[1034]: 256 SHA256:SbsvR6UnId1f7YLnaaVwzQ6Wi8iaKxUm3XjFdGK440s root@np0005464891.novalocal (ED25519)
Oct  1 11:37:42 np0005464891 cloud-init[1036]: 3072 SHA256:5y77i/Y9M87jqbRePqbYn2F+xl031LQgP9637meECxE root@np0005464891.novalocal (RSA)
Oct  1 11:37:42 np0005464891 cloud-init[1037]: -----END SSH HOST KEY FINGERPRINTS-----
Oct  1 11:37:42 np0005464891 cloud-init[1038]: #############################################################
Oct  1 11:37:42 np0005464891 cloud-init[1027]: Cloud-init v. 24.4-7.el9 finished at Wed, 01 Oct 2025 15:37:42 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.11 seconds
Oct  1 11:37:42 np0005464891 systemd[1]: Finished Cloud-init: Final Stage.
Oct  1 11:37:42 np0005464891 systemd[1]: Reached target Cloud-init target.
Oct  1 11:37:42 np0005464891 systemd[1]: Startup finished in 1.591s (kernel) + 2.541s (initrd) + 7.049s (userspace) = 11.182s.
Oct  1 11:37:44 np0005464891 chronyd[803]: Selected source 138.197.164.54 (2.centos.pool.ntp.org)
Oct  1 11:37:44 np0005464891 chronyd[803]: System clock TAI offset set to 37 seconds
Oct  1 11:37:47 np0005464891 irqbalance[794]: Cannot change IRQ 35 affinity: Operation not permitted
Oct  1 11:37:47 np0005464891 irqbalance[794]: IRQ 35 affinity is now unmanaged
Oct  1 11:37:47 np0005464891 irqbalance[794]: Cannot change IRQ 33 affinity: Operation not permitted
Oct  1 11:37:47 np0005464891 irqbalance[794]: IRQ 33 affinity is now unmanaged
Oct  1 11:37:47 np0005464891 irqbalance[794]: Cannot change IRQ 31 affinity: Operation not permitted
Oct  1 11:37:47 np0005464891 irqbalance[794]: IRQ 31 affinity is now unmanaged
Oct  1 11:37:47 np0005464891 irqbalance[794]: Cannot change IRQ 28 affinity: Operation not permitted
Oct  1 11:37:47 np0005464891 irqbalance[794]: IRQ 28 affinity is now unmanaged
Oct  1 11:37:47 np0005464891 irqbalance[794]: Cannot change IRQ 34 affinity: Operation not permitted
Oct  1 11:37:47 np0005464891 irqbalance[794]: IRQ 34 affinity is now unmanaged
Oct  1 11:37:47 np0005464891 irqbalance[794]: Cannot change IRQ 32 affinity: Operation not permitted
Oct  1 11:37:47 np0005464891 irqbalance[794]: IRQ 32 affinity is now unmanaged
Oct  1 11:37:47 np0005464891 irqbalance[794]: Cannot change IRQ 30 affinity: Operation not permitted
Oct  1 11:37:47 np0005464891 irqbalance[794]: IRQ 30 affinity is now unmanaged
Oct  1 11:37:47 np0005464891 irqbalance[794]: Cannot change IRQ 29 affinity: Operation not permitted
Oct  1 11:37:47 np0005464891 irqbalance[794]: IRQ 29 affinity is now unmanaged
Oct  1 11:37:49 np0005464891 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  1 11:37:57 np0005464891 systemd[1]: Created slice User Slice of UID 1000.
Oct  1 11:37:57 np0005464891 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct  1 11:37:57 np0005464891 systemd-logind[801]: New session 1 of user zuul.
Oct  1 11:37:57 np0005464891 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct  1 11:37:57 np0005464891 systemd[1]: Starting User Manager for UID 1000...
Oct  1 11:37:57 np0005464891 systemd[1064]: Queued start job for default target Main User Target.
Oct  1 11:37:57 np0005464891 systemd[1064]: Created slice User Application Slice.
Oct  1 11:37:57 np0005464891 systemd[1064]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  1 11:37:57 np0005464891 systemd[1064]: Started Daily Cleanup of User's Temporary Directories.
Oct  1 11:37:57 np0005464891 systemd[1064]: Reached target Paths.
Oct  1 11:37:57 np0005464891 systemd[1064]: Reached target Timers.
Oct  1 11:37:57 np0005464891 systemd[1064]: Starting D-Bus User Message Bus Socket...
Oct  1 11:37:57 np0005464891 systemd[1064]: Starting Create User's Volatile Files and Directories...
Oct  1 11:37:57 np0005464891 systemd[1064]: Finished Create User's Volatile Files and Directories.
Oct  1 11:37:57 np0005464891 systemd[1064]: Listening on D-Bus User Message Bus Socket.
Oct  1 11:37:57 np0005464891 systemd[1064]: Reached target Sockets.
Oct  1 11:37:57 np0005464891 systemd[1064]: Reached target Basic System.
Oct  1 11:37:57 np0005464891 systemd[1064]: Reached target Main User Target.
Oct  1 11:37:57 np0005464891 systemd[1064]: Startup finished in 125ms.
Oct  1 11:37:57 np0005464891 systemd[1]: Started User Manager for UID 1000.
Oct  1 11:37:57 np0005464891 systemd[1]: Started Session 1 of User zuul.
Oct  1 11:37:58 np0005464891 python3[1146]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 11:38:00 np0005464891 python3[1174]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 11:38:06 np0005464891 python3[1232]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 11:38:07 np0005464891 python3[1272]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct  1 11:38:08 np0005464891 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  1 11:38:09 np0005464891 python3[1300]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0AjFYOdb6z/xIQpOz82mfrLDuXpRfTiPWXkUvRVLc0I5mX4c5ZrADrXVocLhDMLWTDyRJIAGqWmkPrHjl/Cn/r5HlfoeiAFiPhqLg8L++JOWZ7qAhrnFAxARxBSW6FJiOSKl6uJMO7Kxf/8Yy0q2DN3Qx7S80iGHz81eA+rSi9Y1e4Wg6VnjMCR9aypgzE4hU1W6Ovadadccs+0Q/KweFMWec26WyOpGbCZ2Gjiuh6ZjrU1651Mh+bWEZeOd2YPczxpDUf+fT7tldmjpmsmeTu7OX4mEAoD6jL8veRXIaR0bfKoeb56PFIAikFISdcGzI/N6quJLg/yEyx9BqTibcRwtg8oUm9Odzkp9X+YsALzHpL84Kyda5oGLrynCJEtZnBStY8ZOCCsi53sT+3YgcupjL94SVpKArcqGZQj1vcqj4zbg+g09lKVrUrXWjZDBTKVDx7uY0xQMQxym+Si1fw/ZRykgqMMBMJUCI5AEn6HSWJObQJBokRsrFXEGyg/8= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:10 np0005464891 python3[1324]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:38:10 np0005464891 python3[1423]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 11:38:10 np0005464891 python3[1494]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759333090.2472837-207-67044140502371/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=19e8b456972d4ee497ace81020a9d6dc_id_rsa follow=False checksum=52a71351b72a09024f4a870ab6642e819072ba53 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:38:11 np0005464891 python3[1617]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 11:38:11 np0005464891 python3[1688]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759333091.1253235-240-168988457824857/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=19e8b456972d4ee497ace81020a9d6dc_id_rsa.pub follow=False checksum=0449d0895e7ce99def737eb3a46c8e87e92b56a4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:38:13 np0005464891 python3[1736]: ansible-ping Invoked with data=pong
Oct  1 11:38:14 np0005464891 python3[1760]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 11:38:15 np0005464891 python3[1818]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct  1 11:38:16 np0005464891 python3[1850]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:38:16 np0005464891 python3[1874]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:38:17 np0005464891 python3[1898]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:38:17 np0005464891 python3[1922]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:38:17 np0005464891 python3[1946]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:38:18 np0005464891 python3[1970]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:38:19 np0005464891 python3[1996]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:38:20 np0005464891 python3[2074]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 11:38:20 np0005464891 python3[2147]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759333099.944285-21-25090002964447/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:38:21 np0005464891 python3[2195]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:21 np0005464891 python3[2219]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:22 np0005464891 python3[2243]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:22 np0005464891 python3[2267]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:22 np0005464891 python3[2291]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:22 np0005464891 python3[2315]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:23 np0005464891 python3[2339]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:23 np0005464891 python3[2363]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:23 np0005464891 python3[2387]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:24 np0005464891 python3[2411]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:24 np0005464891 python3[2435]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:24 np0005464891 python3[2459]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:24 np0005464891 python3[2483]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:25 np0005464891 python3[2507]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:25 np0005464891 python3[2531]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:25 np0005464891 python3[2555]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:26 np0005464891 python3[2579]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:26 np0005464891 python3[2603]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:26 np0005464891 python3[2627]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:27 np0005464891 python3[2651]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:27 np0005464891 python3[2675]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:27 np0005464891 python3[2699]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:27 np0005464891 python3[2723]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:28 np0005464891 python3[2747]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:28 np0005464891 python3[2771]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:28 np0005464891 python3[2795]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:38:31 np0005464891 python3[2821]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  1 11:38:31 np0005464891 systemd[1]: Starting Time & Date Service...
Oct  1 11:38:31 np0005464891 systemd[1]: Started Time & Date Service.
Oct  1 11:38:31 np0005464891 systemd-timedated[2823]: Changed time zone to 'UTC' (UTC).
Oct  1 11:38:32 np0005464891 python3[2852]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:38:33 np0005464891 python3[2928]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 11:38:33 np0005464891 python3[2999]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1759333113.0750618-153-160794432059631/source _original_basename=tmpbgs80jic follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:38:34 np0005464891 python3[3099]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 11:38:34 np0005464891 python3[3170]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759333113.9681227-183-139788258432648/source _original_basename=tmp8x_yigvc follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:38:35 np0005464891 python3[3272]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 11:38:35 np0005464891 python3[3345]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759333115.0827532-231-242122772926456/source _original_basename=tmp4ua74gfv follow=False checksum=1bcc824686558cc83916b394196cc422cefa4598 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:38:36 np0005464891 python3[3393]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 11:38:36 np0005464891 python3[3419]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 11:38:36 np0005464891 python3[3499]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 11:38:37 np0005464891 python3[3572]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1759333116.6829612-273-184948855901050/source _original_basename=tmpogrjingy follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:38:37 np0005464891 python3[3623]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-99da-b248-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 11:38:38 np0005464891 python3[3651]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163efc-24cc-99da-b248-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct  1 11:38:39 np0005464891 python3[3679]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:38:50 np0005464891 chronyd[803]: Selected source 172.97.210.214 (2.centos.pool.ntp.org)
Oct  1 11:38:56 np0005464891 python3[3705]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:39:01 np0005464891 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  1 11:39:30 np0005464891 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct  1 11:39:30 np0005464891 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct  1 11:39:30 np0005464891 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct  1 11:39:30 np0005464891 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct  1 11:39:30 np0005464891 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct  1 11:39:30 np0005464891 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct  1 11:39:30 np0005464891 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct  1 11:39:30 np0005464891 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct  1 11:39:30 np0005464891 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct  1 11:39:30 np0005464891 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct  1 11:39:30 np0005464891 NetworkManager[865]: <info>  [1759333170.8017] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  1 11:39:30 np0005464891 systemd-udevd[3708]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 11:39:30 np0005464891 NetworkManager[865]: <info>  [1759333170.8208] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 11:39:30 np0005464891 NetworkManager[865]: <info>  [1759333170.8247] settings: (eth1): created default wired connection 'Wired connection 1'
Oct  1 11:39:30 np0005464891 NetworkManager[865]: <info>  [1759333170.8252] device (eth1): carrier: link connected
Oct  1 11:39:30 np0005464891 NetworkManager[865]: <info>  [1759333170.8254] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  1 11:39:30 np0005464891 NetworkManager[865]: <info>  [1759333170.8261] policy: auto-activating connection 'Wired connection 1' (3bc61b06-f82d-36ce-9d85-6821398aba72)
Oct  1 11:39:30 np0005464891 NetworkManager[865]: <info>  [1759333170.8267] device (eth1): Activation: starting connection 'Wired connection 1' (3bc61b06-f82d-36ce-9d85-6821398aba72)
Oct  1 11:39:30 np0005464891 NetworkManager[865]: <info>  [1759333170.8268] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 11:39:30 np0005464891 NetworkManager[865]: <info>  [1759333170.8272] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 11:39:30 np0005464891 NetworkManager[865]: <info>  [1759333170.8277] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 11:39:30 np0005464891 NetworkManager[865]: <info>  [1759333170.8283] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  1 11:39:31 np0005464891 python3[3735]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-4976-57bb-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 11:39:41 np0005464891 python3[3815]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 11:39:41 np0005464891 python3[3888]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759333181.2178044-102-8614066397935/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=13d99856ad412fd55215b593cf0472c70d347ac3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:39:42 np0005464891 python3[3938]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 11:39:42 np0005464891 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  1 11:39:42 np0005464891 systemd[1]: Stopped Network Manager Wait Online.
Oct  1 11:39:42 np0005464891 systemd[1]: Stopping Network Manager Wait Online...
Oct  1 11:39:42 np0005464891 NetworkManager[865]: <info>  [1759333182.7388] caught SIGTERM, shutting down normally.
Oct  1 11:39:42 np0005464891 systemd[1]: Stopping Network Manager...
Oct  1 11:39:42 np0005464891 NetworkManager[865]: <info>  [1759333182.7399] dhcp4 (eth0): canceled DHCP transaction
Oct  1 11:39:42 np0005464891 NetworkManager[865]: <info>  [1759333182.7399] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  1 11:39:42 np0005464891 NetworkManager[865]: <info>  [1759333182.7399] dhcp4 (eth0): state changed no lease
Oct  1 11:39:42 np0005464891 NetworkManager[865]: <info>  [1759333182.7402] manager: NetworkManager state is now CONNECTING
Oct  1 11:39:42 np0005464891 NetworkManager[865]: <info>  [1759333182.7495] dhcp4 (eth1): canceled DHCP transaction
Oct  1 11:39:42 np0005464891 NetworkManager[865]: <info>  [1759333182.7495] dhcp4 (eth1): state changed no lease
Oct  1 11:39:42 np0005464891 NetworkManager[865]: <info>  [1759333182.7533] exiting (success)
Oct  1 11:39:42 np0005464891 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  1 11:39:42 np0005464891 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  1 11:39:42 np0005464891 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  1 11:39:42 np0005464891 systemd[1]: Stopped Network Manager.
Oct  1 11:39:42 np0005464891 systemd[1]: Starting Network Manager...
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.8158] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:253335a6-81f6-44a5-9cf6-dd6e8292df9d)
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.8161] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.8228] manager[0x55b117706070]: monitoring kernel firmware directory '/lib/firmware'.
Oct  1 11:39:42 np0005464891 systemd[1]: Starting Hostname Service...
Oct  1 11:39:42 np0005464891 systemd[1]: Started Hostname Service.
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9305] hostname: hostname: using hostnamed
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9308] hostname: static hostname changed from (none) to "np0005464891.novalocal"
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9315] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9322] manager[0x55b117706070]: rfkill: Wi-Fi hardware radio set enabled
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9322] manager[0x55b117706070]: rfkill: WWAN hardware radio set enabled
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9357] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9357] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9358] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9359] manager: Networking is enabled by state file
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9363] settings: Loaded settings plugin: keyfile (internal)
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9369] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9416] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9429] dhcp: init: Using DHCP client 'internal'
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9432] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9440] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9451] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9464] device (lo): Activation: starting connection 'lo' (3276f51a-eaaa-4d65-b64d-271c8adeb767)
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9474] device (eth0): carrier: link connected
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9479] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9488] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9489] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9502] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9513] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9523] device (eth1): carrier: link connected
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9528] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9538] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (3bc61b06-f82d-36ce-9d85-6821398aba72) (indicated)
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9538] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9548] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9562] device (eth1): Activation: starting connection 'Wired connection 1' (3bc61b06-f82d-36ce-9d85-6821398aba72)
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9569] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  1 11:39:42 np0005464891 systemd[1]: Started Network Manager.
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9589] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9592] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9594] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9596] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9599] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9601] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9603] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9606] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9616] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9620] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9631] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9634] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9658] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9660] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9669] device (lo): Activation: successful, device activated.
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9677] dhcp4 (eth0): state changed new lease, address=38.102.83.177
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9683] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9781] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  1 11:39:42 np0005464891 systemd[1]: Starting Network Manager Wait Online...
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9809] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9811] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9815] manager: NetworkManager state is now CONNECTED_SITE
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9819] device (eth0): Activation: successful, device activated.
Oct  1 11:39:42 np0005464891 NetworkManager[3948]: <info>  [1759333182.9827] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  1 11:39:43 np0005464891 python3[4022]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-4976-57bb-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 11:39:53 np0005464891 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  1 11:40:03 np0005464891 systemd[1064]: Starting Mark boot as successful...
Oct  1 11:40:04 np0005464891 systemd[1064]: Finished Mark boot as successful.
Oct  1 11:40:12 np0005464891 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  1 11:40:28 np0005464891 NetworkManager[3948]: <info>  [1759333228.3664] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  1 11:40:28 np0005464891 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  1 11:40:28 np0005464891 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  1 11:40:28 np0005464891 NetworkManager[3948]: <info>  [1759333228.4020] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  1 11:40:28 np0005464891 NetworkManager[3948]: <info>  [1759333228.4023] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  1 11:40:28 np0005464891 NetworkManager[3948]: <info>  [1759333228.4032] device (eth1): Activation: successful, device activated.
Oct  1 11:40:28 np0005464891 NetworkManager[3948]: <info>  [1759333228.4039] manager: startup complete
Oct  1 11:40:28 np0005464891 NetworkManager[3948]: <info>  [1759333228.4041] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct  1 11:40:28 np0005464891 NetworkManager[3948]: <warn>  [1759333228.4045] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct  1 11:40:28 np0005464891 NetworkManager[3948]: <info>  [1759333228.4051] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct  1 11:40:28 np0005464891 systemd[1]: Finished Network Manager Wait Online.
Oct  1 11:40:28 np0005464891 NetworkManager[3948]: <info>  [1759333228.4185] dhcp4 (eth1): canceled DHCP transaction
Oct  1 11:40:28 np0005464891 NetworkManager[3948]: <info>  [1759333228.4186] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  1 11:40:28 np0005464891 NetworkManager[3948]: <info>  [1759333228.4186] dhcp4 (eth1): state changed no lease
Oct  1 11:40:28 np0005464891 NetworkManager[3948]: <info>  [1759333228.4202] policy: auto-activating connection 'ci-private-network' (aace28b1-91e6-58d1-b9ab-328121bea078)
Oct  1 11:40:28 np0005464891 NetworkManager[3948]: <info>  [1759333228.4206] device (eth1): Activation: starting connection 'ci-private-network' (aace28b1-91e6-58d1-b9ab-328121bea078)
Oct  1 11:40:28 np0005464891 NetworkManager[3948]: <info>  [1759333228.4207] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 11:40:28 np0005464891 NetworkManager[3948]: <info>  [1759333228.4210] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 11:40:28 np0005464891 NetworkManager[3948]: <info>  [1759333228.4216] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 11:40:28 np0005464891 NetworkManager[3948]: <info>  [1759333228.4223] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 11:40:28 np0005464891 NetworkManager[3948]: <info>  [1759333228.5156] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 11:40:28 np0005464891 NetworkManager[3948]: <info>  [1759333228.5158] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 11:40:28 np0005464891 NetworkManager[3948]: <info>  [1759333228.5166] device (eth1): Activation: successful, device activated.
Oct  1 11:40:38 np0005464891 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  1 11:40:43 np0005464891 systemd-logind[801]: Session 1 logged out. Waiting for processes to exit.
Oct  1 11:40:44 np0005464891 systemd-logind[801]: New session 3 of user zuul.
Oct  1 11:40:44 np0005464891 systemd[1]: Started Session 3 of User zuul.
Oct  1 11:40:44 np0005464891 python3[4134]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 11:40:45 np0005464891 python3[4207]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759333244.405622-267-20922452443329/source _original_basename=tmp5zb5akgi follow=False checksum=97f51ebd227331f4326265295805157860eeb3ed backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:40:47 np0005464891 systemd[1]: session-3.scope: Deactivated successfully.
Oct  1 11:40:47 np0005464891 systemd-logind[801]: Session 3 logged out. Waiting for processes to exit.
Oct  1 11:40:47 np0005464891 systemd-logind[801]: Removed session 3.
Oct  1 11:43:03 np0005464891 systemd[1064]: Created slice User Background Tasks Slice.
Oct  1 11:43:03 np0005464891 systemd[1064]: Starting Cleanup of User's Temporary Files and Directories...
Oct  1 11:43:03 np0005464891 systemd[1064]: Finished Cleanup of User's Temporary Files and Directories.
Oct  1 11:45:59 np0005464891 systemd-logind[801]: New session 4 of user zuul.
Oct  1 11:45:59 np0005464891 systemd[1]: Started Session 4 of User zuul.
Oct  1 11:45:59 np0005464891 python3[4269]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163efc-24cc-03c5-fcfe-000000001ce6-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 11:46:00 np0005464891 python3[4297]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:46:00 np0005464891 python3[4323]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:46:00 np0005464891 python3[4350]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:46:00 np0005464891 python3[4376]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:46:01 np0005464891 python3[4402]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:46:01 np0005464891 python3[4402]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct  1 11:46:02 np0005464891 python3[4428]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  1 11:46:02 np0005464891 systemd[1]: Reloading.
Oct  1 11:46:02 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 11:46:03 np0005464891 python3[4484]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct  1 11:46:04 np0005464891 python3[4510]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 11:46:04 np0005464891 python3[4538]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 11:46:04 np0005464891 python3[4566]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 11:46:05 np0005464891 python3[4594]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 11:46:05 np0005464891 python3[4621]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163efc-24cc-03c5-fcfe-000000001cec-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 11:46:06 np0005464891 python3[4651]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 11:46:07 np0005464891 systemd[1]: session-4.scope: Deactivated successfully.
Oct  1 11:46:07 np0005464891 systemd[1]: session-4.scope: Consumed 3.451s CPU time.
Oct  1 11:46:07 np0005464891 systemd-logind[801]: Session 4 logged out. Waiting for processes to exit.
Oct  1 11:46:07 np0005464891 systemd-logind[801]: Removed session 4.
Oct  1 11:46:09 np0005464891 systemd-logind[801]: New session 5 of user zuul.
Oct  1 11:46:09 np0005464891 systemd[1]: Started Session 5 of User zuul.
Oct  1 11:46:09 np0005464891 python3[4684]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  1 11:46:38 np0005464891 kernel: SELinux:  Converting 363 SID table entries...
Oct  1 11:46:38 np0005464891 kernel: SELinux:  policy capability network_peer_controls=1
Oct  1 11:46:38 np0005464891 kernel: SELinux:  policy capability open_perms=1
Oct  1 11:46:38 np0005464891 kernel: SELinux:  policy capability extended_socket_class=1
Oct  1 11:46:38 np0005464891 kernel: SELinux:  policy capability always_check_network=0
Oct  1 11:46:38 np0005464891 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  1 11:46:38 np0005464891 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  1 11:46:38 np0005464891 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  1 11:46:48 np0005464891 kernel: SELinux:  Converting 363 SID table entries...
Oct  1 11:46:48 np0005464891 kernel: SELinux:  policy capability network_peer_controls=1
Oct  1 11:46:48 np0005464891 kernel: SELinux:  policy capability open_perms=1
Oct  1 11:46:48 np0005464891 kernel: SELinux:  policy capability extended_socket_class=1
Oct  1 11:46:48 np0005464891 kernel: SELinux:  policy capability always_check_network=0
Oct  1 11:46:48 np0005464891 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  1 11:46:48 np0005464891 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  1 11:46:48 np0005464891 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  1 11:46:56 np0005464891 kernel: SELinux:  Converting 363 SID table entries...
Oct  1 11:46:56 np0005464891 kernel: SELinux:  policy capability network_peer_controls=1
Oct  1 11:46:56 np0005464891 kernel: SELinux:  policy capability open_perms=1
Oct  1 11:46:56 np0005464891 kernel: SELinux:  policy capability extended_socket_class=1
Oct  1 11:46:56 np0005464891 kernel: SELinux:  policy capability always_check_network=0
Oct  1 11:46:56 np0005464891 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  1 11:46:56 np0005464891 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  1 11:46:56 np0005464891 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  1 11:46:58 np0005464891 setsebool[4767]: The virt_use_nfs policy boolean was changed to 1 by root
Oct  1 11:46:58 np0005464891 setsebool[4767]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct  1 11:47:08 np0005464891 kernel: SELinux:  Converting 366 SID table entries...
Oct  1 11:47:08 np0005464891 kernel: SELinux:  policy capability network_peer_controls=1
Oct  1 11:47:08 np0005464891 kernel: SELinux:  policy capability open_perms=1
Oct  1 11:47:08 np0005464891 kernel: SELinux:  policy capability extended_socket_class=1
Oct  1 11:47:08 np0005464891 kernel: SELinux:  policy capability always_check_network=0
Oct  1 11:47:08 np0005464891 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  1 11:47:08 np0005464891 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  1 11:47:08 np0005464891 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  1 11:47:27 np0005464891 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  1 11:47:27 np0005464891 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  1 11:47:27 np0005464891 systemd[1]: Starting man-db-cache-update.service...
Oct  1 11:47:27 np0005464891 systemd[1]: Reloading.
Oct  1 11:47:28 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 11:47:28 np0005464891 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  1 11:47:29 np0005464891 systemd[1]: Starting PackageKit Daemon...
Oct  1 11:47:29 np0005464891 systemd[1]: Starting Authorization Manager...
Oct  1 11:47:29 np0005464891 polkitd[6412]: Started polkitd version 0.117
Oct  1 11:47:29 np0005464891 systemd[1]: Started Authorization Manager.
Oct  1 11:47:29 np0005464891 systemd[1]: Started PackageKit Daemon.
Oct  1 11:47:32 np0005464891 python3[9218]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163efc-24cc-8676-e088-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 11:47:33 np0005464891 kernel: evm: overlay not supported
Oct  1 11:47:33 np0005464891 systemd[1064]: Starting D-Bus User Message Bus...
Oct  1 11:47:33 np0005464891 dbus-broker-launch[10097]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct  1 11:47:33 np0005464891 dbus-broker-launch[10097]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct  1 11:47:33 np0005464891 systemd[1064]: Started D-Bus User Message Bus.
Oct  1 11:47:33 np0005464891 dbus-broker-lau[10097]: Ready
Oct  1 11:47:33 np0005464891 systemd[1064]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  1 11:47:33 np0005464891 systemd[1064]: Created slice Slice /user.
Oct  1 11:47:33 np0005464891 systemd[1064]: podman-9957.scope: unit configures an IP firewall, but not running as root.
Oct  1 11:47:33 np0005464891 systemd[1064]: (This warning is only shown for the first unit using IP firewalling.)
Oct  1 11:47:33 np0005464891 systemd[1064]: Started podman-9957.scope.
Oct  1 11:47:33 np0005464891 systemd[1064]: Started podman-pause-d3121755.scope.
Oct  1 11:47:34 np0005464891 python3[10488]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.80:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.80:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:47:34 np0005464891 systemd[1]: session-5.scope: Deactivated successfully.
Oct  1 11:47:34 np0005464891 systemd[1]: session-5.scope: Consumed 1min 10.057s CPU time.
Oct  1 11:47:34 np0005464891 systemd-logind[801]: Session 5 logged out. Waiting for processes to exit.
Oct  1 11:47:34 np0005464891 systemd-logind[801]: Removed session 5.
Oct  1 11:47:57 np0005464891 systemd-logind[801]: New session 6 of user zuul.
Oct  1 11:47:57 np0005464891 systemd[1]: Started Session 6 of User zuul.
Oct  1 11:47:57 np0005464891 python3[18692]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIyx7QOGRxjk9+2lhQXNeOlolgM4zNEWsOWZdshkugDNNGrZjnv0eT3iQBZtO0tnHKYpuJJXeqdaKB8NBJLj+fs= zuul@np0005464890.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:47:57 np0005464891 python3[18935]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIyx7QOGRxjk9+2lhQXNeOlolgM4zNEWsOWZdshkugDNNGrZjnv0eT3iQBZtO0tnHKYpuJJXeqdaKB8NBJLj+fs= zuul@np0005464890.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:47:58 np0005464891 python3[19418]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005464891.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct  1 11:47:59 np0005464891 python3[19687]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIyx7QOGRxjk9+2lhQXNeOlolgM4zNEWsOWZdshkugDNNGrZjnv0eT3iQBZtO0tnHKYpuJJXeqdaKB8NBJLj+fs= zuul@np0005464890.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 11:47:59 np0005464891 python3[19938]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 11:48:00 np0005464891 python3[20171]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759333679.6601999-135-267510462813087/source _original_basename=tmpxoajx6vl follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:48:01 np0005464891 python3[20488]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Oct  1 11:48:01 np0005464891 systemd[1]: Starting Hostname Service...
Oct  1 11:48:01 np0005464891 systemd[1]: Started Hostname Service.
Oct  1 11:48:01 np0005464891 systemd-hostnamed[20604]: Changed pretty hostname to 'compute-0'
Oct  1 11:48:01 np0005464891 systemd-hostnamed[20604]: Hostname set to <compute-0> (static)
Oct  1 11:48:01 np0005464891 NetworkManager[3948]: <info>  [1759333681.3462] hostname: static hostname changed from "np0005464891.novalocal" to "compute-0"
Oct  1 11:48:01 np0005464891 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  1 11:48:01 np0005464891 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  1 11:48:01 np0005464891 systemd[1]: session-6.scope: Deactivated successfully.
Oct  1 11:48:01 np0005464891 systemd[1]: session-6.scope: Consumed 2.088s CPU time.
Oct  1 11:48:01 np0005464891 systemd-logind[801]: Session 6 logged out. Waiting for processes to exit.
Oct  1 11:48:01 np0005464891 systemd-logind[801]: Removed session 6.
Oct  1 11:48:11 np0005464891 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  1 11:48:19 np0005464891 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  1 11:48:19 np0005464891 systemd[1]: Finished man-db-cache-update.service.
Oct  1 11:48:19 np0005464891 systemd[1]: man-db-cache-update.service: Consumed 53.569s CPU time.
Oct  1 11:48:20 np0005464891 systemd[1]: run-r8f4b947775664467a8edf7f1d7a94061.service: Deactivated successfully.
Oct  1 11:48:31 np0005464891 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  1 11:51:49 np0005464891 chronyd[803]: Selected source 138.197.164.54 (2.centos.pool.ntp.org)
Oct  1 11:52:09 np0005464891 systemd-logind[801]: New session 7 of user zuul.
Oct  1 11:52:09 np0005464891 systemd[1]: Started Session 7 of User zuul.
Oct  1 11:52:10 np0005464891 python3[26642]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 11:52:11 np0005464891 python3[26758]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 11:52:12 np0005464891 python3[26831]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759333931.659117-30239-274200680963139/source mode=0755 _original_basename=delorean.repo follow=False checksum=bb4c2ff9dad546f135d54d9729ea11b84117755d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:52:12 np0005464891 python3[26857]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 11:52:13 np0005464891 python3[26930]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759333931.659117-30239-274200680963139/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:52:13 np0005464891 python3[26956]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 11:52:13 np0005464891 python3[27029]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759333931.659117-30239-274200680963139/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:52:13 np0005464891 python3[27055]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 11:52:14 np0005464891 python3[27128]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759333931.659117-30239-274200680963139/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:52:14 np0005464891 python3[27154]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 11:52:14 np0005464891 python3[27227]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759333931.659117-30239-274200680963139/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:52:15 np0005464891 python3[27253]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 11:52:15 np0005464891 python3[27326]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759333931.659117-30239-274200680963139/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:52:15 np0005464891 python3[27352]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 11:52:16 np0005464891 python3[27425]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759333931.659117-30239-274200680963139/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=d911291791b114a72daf18f370e91cb1ae300933 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 11:52:27 np0005464891 python3[27483]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 11:52:31 np0005464891 systemd[1]: Starting Cleanup of Temporary Directories...
Oct  1 11:52:31 np0005464891 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct  1 11:52:31 np0005464891 systemd[1]: Finished Cleanup of Temporary Directories.
Oct  1 11:52:31 np0005464891 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct  1 11:52:34 np0005464891 systemd[1]: packagekit.service: Deactivated successfully.
Oct  1 11:57:27 np0005464891 systemd[1]: session-7.scope: Deactivated successfully.
Oct  1 11:57:27 np0005464891 systemd[1]: session-7.scope: Consumed 4.731s CPU time.
Oct  1 11:57:27 np0005464891 systemd-logind[801]: Session 7 logged out. Waiting for processes to exit.
Oct  1 11:57:27 np0005464891 systemd-logind[801]: Removed session 7.
Oct  1 12:03:54 np0005464891 systemd-logind[801]: New session 8 of user zuul.
Oct  1 12:03:54 np0005464891 systemd[1]: Started Session 8 of User zuul.
Oct  1 12:03:55 np0005464891 python3.9[27664]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:03:56 np0005464891 python3.9[27845]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:04:04 np0005464891 systemd[1]: session-8.scope: Deactivated successfully.
Oct  1 12:04:04 np0005464891 systemd[1]: session-8.scope: Consumed 8.137s CPU time.
Oct  1 12:04:04 np0005464891 systemd-logind[801]: Session 8 logged out. Waiting for processes to exit.
Oct  1 12:04:04 np0005464891 systemd-logind[801]: Removed session 8.
Oct  1 12:04:19 np0005464891 systemd-logind[801]: New session 9 of user zuul.
Oct  1 12:04:19 np0005464891 systemd[1]: Started Session 9 of User zuul.
Oct  1 12:04:20 np0005464891 python3.9[28059]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  1 12:04:21 np0005464891 python3.9[28233]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:04:22 np0005464891 python3.9[28385]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:04:23 np0005464891 python3.9[28538]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:04:24 np0005464891 python3.9[28690]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:04:25 np0005464891 python3.9[28842]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:04:25 np0005464891 python3.9[28965]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759334664.531626-73-59263381226323/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:04:26 np0005464891 python3.9[29117]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:04:27 np0005464891 python3.9[29273]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:04:28 np0005464891 python3.9[29423]: ansible-ansible.builtin.service_facts Invoked
Oct  1 12:04:32 np0005464891 python3.9[29678]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:04:33 np0005464891 python3.9[29828]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:04:34 np0005464891 python3.9[29982]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:04:35 np0005464891 python3.9[30140]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 12:04:36 np0005464891 python3.9[30224]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 12:05:28 np0005464891 systemd[1]: Reloading.
Oct  1 12:05:28 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:05:28 np0005464891 systemd[1]: Starting dnf makecache...
Oct  1 12:05:28 np0005464891 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct  1 12:05:28 np0005464891 dnf[30433]: Failed determining last makecache time.
Oct  1 12:05:28 np0005464891 systemd[1]: Reloading.
Oct  1 12:05:28 np0005464891 dnf[30433]: delorean-openstack-barbican-42b4c41831408a8e323  82 kB/s | 3.0 kB     00:00
Oct  1 12:05:28 np0005464891 dnf[30433]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7  93 kB/s | 3.0 kB     00:00
Oct  1 12:05:28 np0005464891 dnf[30433]: delorean-openstack-cinder-1c00d6490d88e436f26ef 127 kB/s | 3.0 kB     00:00
Oct  1 12:05:28 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:05:28 np0005464891 dnf[30433]: delorean-python-stevedore-c4acc5639fd2329372142 109 kB/s | 3.0 kB     00:00
Oct  1 12:05:28 np0005464891 dnf[30433]: delorean-python-cloudkitty-tests-tempest-3961dc 174 kB/s | 3.0 kB     00:00
Oct  1 12:05:28 np0005464891 dnf[30433]: delorean-os-net-config-28598c2978b9e2207dd19fc4 193 kB/s | 3.0 kB     00:00
Oct  1 12:05:28 np0005464891 dnf[30433]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 128 kB/s | 3.0 kB     00:00
Oct  1 12:05:28 np0005464891 dnf[30433]: delorean-python-designate-tests-tempest-347fdbc 144 kB/s | 3.0 kB     00:00
Oct  1 12:05:28 np0005464891 dnf[30433]: delorean-openstack-glance-1fd12c29b339f30fe823e 137 kB/s | 3.0 kB     00:00
Oct  1 12:05:28 np0005464891 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct  1 12:05:28 np0005464891 dnf[30433]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 118 kB/s | 3.0 kB     00:00
Oct  1 12:05:28 np0005464891 dnf[30433]: delorean-openstack-manila-3c01b7181572c95dac462 121 kB/s | 3.0 kB     00:00
Oct  1 12:05:28 np0005464891 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct  1 12:05:28 np0005464891 dnf[30433]: delorean-python-whitebox-neutron-tests-tempest- 138 kB/s | 3.0 kB     00:00
Oct  1 12:05:28 np0005464891 systemd[1]: Reloading.
Oct  1 12:05:28 np0005464891 dnf[30433]: delorean-openstack-octavia-ba397f07a7331190208c 136 kB/s | 3.0 kB     00:00
Oct  1 12:05:28 np0005464891 dnf[30433]: delorean-openstack-watcher-c014f81a8647287f6dcc 147 kB/s | 3.0 kB     00:00
Oct  1 12:05:28 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:05:28 np0005464891 dnf[30433]: delorean-edpm-image-builder-55ba53cf215b14ed95b 141 kB/s | 3.0 kB     00:00
Oct  1 12:05:28 np0005464891 dnf[30433]: delorean-puppet-ceph-b0c245ccde541a63fde0564366 160 kB/s | 3.0 kB     00:00
Oct  1 12:05:28 np0005464891 dnf[30433]: delorean-openstack-swift-dc98a8463506ac520c469a 155 kB/s | 3.0 kB     00:00
Oct  1 12:05:28 np0005464891 dnf[30433]: delorean-python-tempestconf-8515371b7cceebd4282 167 kB/s | 3.0 kB     00:00
Oct  1 12:05:28 np0005464891 dnf[30433]: delorean-openstack-heat-ui-013accbfd179753bc3f0 165 kB/s | 3.0 kB     00:00
Oct  1 12:05:28 np0005464891 systemd[1]: Listening on LVM2 poll daemon socket.
Oct  1 12:05:29 np0005464891 dnf[30433]: CentOS Stream 9 - BaseOS                         58 kB/s | 6.7 kB     00:00
Oct  1 12:05:29 np0005464891 dnf[30433]: CentOS Stream 9 - AppStream                      64 kB/s | 6.8 kB     00:00
Oct  1 12:05:29 np0005464891 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Oct  1 12:05:29 np0005464891 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Oct  1 12:05:29 np0005464891 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Oct  1 12:05:29 np0005464891 dnf[30433]: CentOS Stream 9 - CRB                            25 kB/s | 6.6 kB     00:00
Oct  1 12:05:29 np0005464891 dnf[30433]: CentOS Stream 9 - Extras packages                65 kB/s | 8.0 kB     00:00
Oct  1 12:05:29 np0005464891 dnf[30433]: dlrn-antelope-testing                           100 kB/s | 3.0 kB     00:00
Oct  1 12:05:29 np0005464891 dnf[30433]: dlrn-antelope-build-deps                        153 kB/s | 3.0 kB     00:00
Oct  1 12:05:29 np0005464891 dnf[30433]: centos9-rabbitmq                                 89 kB/s | 3.0 kB     00:00
Oct  1 12:05:29 np0005464891 dnf[30433]: centos9-storage                                 109 kB/s | 3.0 kB     00:00
Oct  1 12:05:29 np0005464891 dnf[30433]: centos9-opstools                                117 kB/s | 3.0 kB     00:00
Oct  1 12:05:29 np0005464891 dnf[30433]: NFV SIG OpenvSwitch                             118 kB/s | 3.0 kB     00:00
Oct  1 12:05:29 np0005464891 dnf[30433]: repo-setup-centos-appstream                     162 kB/s | 4.4 kB     00:00
Oct  1 12:05:30 np0005464891 dnf[30433]: repo-setup-centos-baseos                        171 kB/s | 3.9 kB     00:00
Oct  1 12:05:30 np0005464891 dnf[30433]: repo-setup-centos-highavailability              174 kB/s | 3.9 kB     00:00
Oct  1 12:05:30 np0005464891 dnf[30433]: repo-setup-centos-powertools                    190 kB/s | 4.3 kB     00:00
Oct  1 12:05:30 np0005464891 dnf[30433]: Extra Packages for Enterprise Linux 9 - x86_64  214 kB/s |  34 kB     00:00
Oct  1 12:05:30 np0005464891 dnf[30433]: Metadata cache created.
Oct  1 12:05:30 np0005464891 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  1 12:05:30 np0005464891 systemd[1]: Finished dnf makecache.
Oct  1 12:05:30 np0005464891 systemd[1]: dnf-makecache.service: Consumed 1.874s CPU time.
Oct  1 12:06:45 np0005464891 kernel: SELinux:  Converting 2714 SID table entries...
Oct  1 12:06:45 np0005464891 kernel: SELinux:  policy capability network_peer_controls=1
Oct  1 12:06:45 np0005464891 kernel: SELinux:  policy capability open_perms=1
Oct  1 12:06:45 np0005464891 kernel: SELinux:  policy capability extended_socket_class=1
Oct  1 12:06:45 np0005464891 kernel: SELinux:  policy capability always_check_network=0
Oct  1 12:06:45 np0005464891 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  1 12:06:45 np0005464891 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  1 12:06:45 np0005464891 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  1 12:06:45 np0005464891 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct  1 12:06:45 np0005464891 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  1 12:06:45 np0005464891 systemd[1]: Starting man-db-cache-update.service...
Oct  1 12:06:45 np0005464891 systemd[1]: Reloading.
Oct  1 12:06:45 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:06:45 np0005464891 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  1 12:06:46 np0005464891 systemd[1]: Starting PackageKit Daemon...
Oct  1 12:06:46 np0005464891 systemd[1]: Started PackageKit Daemon.
Oct  1 12:06:46 np0005464891 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  1 12:06:46 np0005464891 systemd[1]: Finished man-db-cache-update.service.
Oct  1 12:06:46 np0005464891 systemd[1]: man-db-cache-update.service: Consumed 1.190s CPU time.
Oct  1 12:06:46 np0005464891 systemd[1]: run-r842ea3eb453c45e0a2222e922a8042da.service: Deactivated successfully.
Oct  1 12:06:47 np0005464891 python3.9[31782]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:06:49 np0005464891 python3.9[32063]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  1 12:06:50 np0005464891 python3.9[32215]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  1 12:06:52 np0005464891 python3.9[32368]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:06:53 np0005464891 python3.9[32520]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct  1 12:06:54 np0005464891 python3.9[32672]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:06:55 np0005464891 python3.9[32824]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:06:55 np0005464891 python3.9[32947]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759334814.7408478-227-121452200454957/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5550135599e6eebc154c2000aa6ebbcac01cf5a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:07:00 np0005464891 python3.9[33099]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  1 12:07:01 np0005464891 python3.9[33252]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  1 12:07:01 np0005464891 rsyslogd[1011]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 12:07:01 np0005464891 python3.9[33411]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  1 12:07:02 np0005464891 python3.9[33571]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  1 12:07:03 np0005464891 python3.9[33724]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  1 12:07:04 np0005464891 python3.9[33882]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  1 12:07:05 np0005464891 python3.9[34034]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 12:07:07 np0005464891 python3.9[34187]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:07:08 np0005464891 python3.9[34339]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:07:08 np0005464891 python3.9[34462]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759334827.6515212-322-19561564993177/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:07:10 np0005464891 python3.9[34614]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 12:07:10 np0005464891 systemd[1]: Starting Load Kernel Modules...
Oct  1 12:07:10 np0005464891 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct  1 12:07:10 np0005464891 kernel: Bridge firewalling registered
Oct  1 12:07:10 np0005464891 systemd-modules-load[34618]: Inserted module 'br_netfilter'
Oct  1 12:07:10 np0005464891 systemd[1]: Finished Load Kernel Modules.
Oct  1 12:07:10 np0005464891 python3.9[34773]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:07:11 np0005464891 python3.9[34896]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759334830.40527-345-70512880724054/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:07:12 np0005464891 python3.9[35048]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 12:07:15 np0005464891 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Oct  1 12:07:15 np0005464891 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Oct  1 12:07:16 np0005464891 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  1 12:07:16 np0005464891 systemd[1]: Starting man-db-cache-update.service...
Oct  1 12:07:16 np0005464891 systemd[1]: Reloading.
Oct  1 12:07:16 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:07:16 np0005464891 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  1 12:07:17 np0005464891 python3.9[36296]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:07:18 np0005464891 python3.9[37244]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct  1 12:07:19 np0005464891 python3.9[38073]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:07:19 np0005464891 python3.9[38923]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:07:20 np0005464891 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  1 12:07:20 np0005464891 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  1 12:07:20 np0005464891 systemd[1]: Finished man-db-cache-update.service.
Oct  1 12:07:20 np0005464891 systemd[1]: man-db-cache-update.service: Consumed 4.712s CPU time.
Oct  1 12:07:20 np0005464891 systemd[1]: run-r2e5dc5900a3446a3b2b82f4a9385cb02.service: Deactivated successfully.
Oct  1 12:07:20 np0005464891 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  1 12:07:21 np0005464891 python3.9[39593]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:07:21 np0005464891 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct  1 12:07:21 np0005464891 systemd[1]: tuned.service: Deactivated successfully.
Oct  1 12:07:21 np0005464891 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct  1 12:07:21 np0005464891 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  1 12:07:21 np0005464891 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  1 12:07:22 np0005464891 python3.9[39755]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct  1 12:07:24 np0005464891 python3.9[39907]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:07:24 np0005464891 systemd[1]: Reloading.
Oct  1 12:07:24 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:07:25 np0005464891 python3.9[40095]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:07:25 np0005464891 systemd[1]: Reloading.
Oct  1 12:07:25 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:07:26 np0005464891 python3.9[40284]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:07:27 np0005464891 python3.9[40437]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:07:27 np0005464891 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct  1 12:07:28 np0005464891 python3.9[40590]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:07:30 np0005464891 python3.9[40752]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:07:31 np0005464891 python3.9[40905]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 12:07:31 np0005464891 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  1 12:07:31 np0005464891 systemd[1]: Stopped Apply Kernel Variables.
Oct  1 12:07:31 np0005464891 systemd[1]: Stopping Apply Kernel Variables...
Oct  1 12:07:31 np0005464891 systemd[1]: Starting Apply Kernel Variables...
Oct  1 12:07:31 np0005464891 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct  1 12:07:31 np0005464891 systemd[1]: Finished Apply Kernel Variables.
Oct  1 12:07:31 np0005464891 systemd[1]: session-9.scope: Deactivated successfully.
Oct  1 12:07:31 np0005464891 systemd[1]: session-9.scope: Consumed 2min 14.437s CPU time.
Oct  1 12:07:31 np0005464891 systemd-logind[801]: Session 9 logged out. Waiting for processes to exit.
Oct  1 12:07:31 np0005464891 systemd-logind[801]: Removed session 9.
Oct  1 12:07:37 np0005464891 systemd-logind[801]: New session 10 of user zuul.
Oct  1 12:07:37 np0005464891 systemd[1]: Started Session 10 of User zuul.
Oct  1 12:07:38 np0005464891 python3.9[41089]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:07:40 np0005464891 python3.9[41245]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct  1 12:07:40 np0005464891 python3.9[41398]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  1 12:07:41 np0005464891 python3.9[41556]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  1 12:07:42 np0005464891 python3.9[41716]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 12:07:43 np0005464891 python3.9[41800]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  1 12:07:46 np0005464891 python3.9[41963]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 12:07:57 np0005464891 kernel: SELinux:  Converting 2724 SID table entries...
Oct  1 12:07:57 np0005464891 kernel: SELinux:  policy capability network_peer_controls=1
Oct  1 12:07:57 np0005464891 kernel: SELinux:  policy capability open_perms=1
Oct  1 12:07:57 np0005464891 kernel: SELinux:  policy capability extended_socket_class=1
Oct  1 12:07:57 np0005464891 kernel: SELinux:  policy capability always_check_network=0
Oct  1 12:07:57 np0005464891 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  1 12:07:57 np0005464891 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  1 12:07:57 np0005464891 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  1 12:07:58 np0005464891 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct  1 12:07:58 np0005464891 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct  1 12:07:59 np0005464891 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  1 12:07:59 np0005464891 systemd[1]: Starting man-db-cache-update.service...
Oct  1 12:07:59 np0005464891 systemd[1]: Reloading.
Oct  1 12:07:59 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:07:59 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:07:59 np0005464891 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  1 12:08:00 np0005464891 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  1 12:08:00 np0005464891 systemd[1]: Finished man-db-cache-update.service.
Oct  1 12:08:00 np0005464891 systemd[1]: run-r47bbfb6eacca4fd69afc3d9cd24363c6.service: Deactivated successfully.
Oct  1 12:08:01 np0005464891 python3.9[43067]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  1 12:08:01 np0005464891 systemd[1]: Reloading.
Oct  1 12:08:01 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:08:01 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:08:01 np0005464891 systemd[1]: Starting Open vSwitch Database Unit...
Oct  1 12:08:01 np0005464891 chown[43109]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct  1 12:08:01 np0005464891 ovs-ctl[43114]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct  1 12:08:01 np0005464891 ovs-ctl[43114]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct  1 12:08:01 np0005464891 ovs-ctl[43114]: Starting ovsdb-server [  OK  ]
Oct  1 12:08:01 np0005464891 ovs-vsctl[43163]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct  1 12:08:02 np0005464891 ovs-vsctl[43183]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"7f6af0d3-69fd-4a3a-8e45-081fa1f83992\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct  1 12:08:02 np0005464891 ovs-ctl[43114]: Configuring Open vSwitch system IDs [  OK  ]
Oct  1 12:08:02 np0005464891 ovs-ctl[43114]: Enabling remote OVSDB managers [  OK  ]
Oct  1 12:08:02 np0005464891 ovs-vsctl[43189]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct  1 12:08:02 np0005464891 systemd[1]: Started Open vSwitch Database Unit.
Oct  1 12:08:02 np0005464891 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct  1 12:08:02 np0005464891 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct  1 12:08:02 np0005464891 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct  1 12:08:02 np0005464891 kernel: openvswitch: Open vSwitch switching datapath
Oct  1 12:08:02 np0005464891 ovs-ctl[43234]: Inserting openvswitch module [  OK  ]
Oct  1 12:08:02 np0005464891 ovs-ctl[43203]: Starting ovs-vswitchd [  OK  ]
Oct  1 12:08:02 np0005464891 ovs-vsctl[43252]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct  1 12:08:02 np0005464891 ovs-ctl[43203]: Enabling remote OVSDB managers [  OK  ]
Oct  1 12:08:02 np0005464891 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct  1 12:08:02 np0005464891 systemd[1]: Starting Open vSwitch...
Oct  1 12:08:02 np0005464891 systemd[1]: Finished Open vSwitch.
Oct  1 12:08:03 np0005464891 python3.9[43404]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:08:04 np0005464891 python3.9[43556]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct  1 12:08:05 np0005464891 kernel: SELinux:  Converting 2738 SID table entries...
Oct  1 12:08:05 np0005464891 kernel: SELinux:  policy capability network_peer_controls=1
Oct  1 12:08:05 np0005464891 kernel: SELinux:  policy capability open_perms=1
Oct  1 12:08:05 np0005464891 kernel: SELinux:  policy capability extended_socket_class=1
Oct  1 12:08:05 np0005464891 kernel: SELinux:  policy capability always_check_network=0
Oct  1 12:08:05 np0005464891 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  1 12:08:05 np0005464891 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  1 12:08:05 np0005464891 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  1 12:08:06 np0005464891 python3.9[43711]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:08:07 np0005464891 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct  1 12:08:07 np0005464891 python3.9[43869]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 12:08:09 np0005464891 python3.9[44022]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:08:11 np0005464891 python3.9[44309]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  1 12:08:12 np0005464891 python3.9[44459]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:08:12 np0005464891 python3.9[44613]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 12:08:14 np0005464891 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  1 12:08:14 np0005464891 systemd[1]: Starting man-db-cache-update.service...
Oct  1 12:08:15 np0005464891 systemd[1]: Reloading.
Oct  1 12:08:15 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:08:15 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:08:15 np0005464891 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  1 12:08:15 np0005464891 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  1 12:08:15 np0005464891 systemd[1]: Finished man-db-cache-update.service.
Oct  1 12:08:15 np0005464891 systemd[1]: run-rb8c19003cf8249afa103c40e2f36efb0.service: Deactivated successfully.
Oct  1 12:08:16 np0005464891 python3.9[44930]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 12:08:16 np0005464891 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  1 12:08:16 np0005464891 systemd[1]: Stopped Network Manager Wait Online.
Oct  1 12:08:16 np0005464891 systemd[1]: Stopping Network Manager Wait Online...
Oct  1 12:08:16 np0005464891 systemd[1]: Stopping Network Manager...
Oct  1 12:08:16 np0005464891 NetworkManager[3948]: <info>  [1759334896.3975] caught SIGTERM, shutting down normally.
Oct  1 12:08:16 np0005464891 NetworkManager[3948]: <info>  [1759334896.3990] dhcp4 (eth0): canceled DHCP transaction
Oct  1 12:08:16 np0005464891 NetworkManager[3948]: <info>  [1759334896.3991] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  1 12:08:16 np0005464891 NetworkManager[3948]: <info>  [1759334896.3991] dhcp4 (eth0): state changed no lease
Oct  1 12:08:16 np0005464891 NetworkManager[3948]: <info>  [1759334896.3993] manager: NetworkManager state is now CONNECTED_SITE
Oct  1 12:08:16 np0005464891 NetworkManager[3948]: <info>  [1759334896.4055] exiting (success)
Oct  1 12:08:16 np0005464891 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  1 12:08:16 np0005464891 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  1 12:08:16 np0005464891 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  1 12:08:16 np0005464891 systemd[1]: Stopped Network Manager.
Oct  1 12:08:16 np0005464891 systemd[1]: NetworkManager.service: Consumed 10.075s CPU time, 4.1M memory peak, read 0B from disk, written 37.5K to disk.
Oct  1 12:08:16 np0005464891 systemd[1]: Starting Network Manager...
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.4989] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:253335a6-81f6-44a5-9cf6-dd6e8292df9d)
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.4993] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.5067] manager[0x5602035ee090]: monitoring kernel firmware directory '/lib/firmware'.
Oct  1 12:08:16 np0005464891 systemd[1]: Starting Hostname Service...
Oct  1 12:08:16 np0005464891 systemd[1]: Started Hostname Service.
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6020] hostname: hostname: using hostnamed
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6020] hostname: static hostname changed from (none) to "compute-0"
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6030] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6037] manager[0x5602035ee090]: rfkill: Wi-Fi hardware radio set enabled
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6037] manager[0x5602035ee090]: rfkill: WWAN hardware radio set enabled
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6079] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6095] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6096] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6097] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6098] manager: Networking is enabled by state file
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6103] settings: Loaded settings plugin: keyfile (internal)
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6110] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6155] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6173] dhcp: init: Using DHCP client 'internal'
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6178] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6186] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6195] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6207] device (lo): Activation: starting connection 'lo' (3276f51a-eaaa-4d65-b64d-271c8adeb767)
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6218] device (eth0): carrier: link connected
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6227] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6236] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6237] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6248] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6260] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6271] device (eth1): carrier: link connected
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6278] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6288] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (aace28b1-91e6-58d1-b9ab-328121bea078) (indicated)
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6289] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6296] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6310] device (eth1): Activation: starting connection 'ci-private-network' (aace28b1-91e6-58d1-b9ab-328121bea078)
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6324] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6340] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6344] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6347] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6350] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6353] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6356] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  1 12:08:16 np0005464891 systemd[1]: Started Network Manager.
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6367] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6391] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6406] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6412] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6473] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  1 12:08:16 np0005464891 systemd[1]: Starting Network Manager Wait Online...
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6503] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6519] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6526] dhcp4 (eth0): state changed new lease, address=38.102.83.177
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6532] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6543] device (lo): Activation: successful, device activated.
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6565] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6666] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6673] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6677] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6680] manager: NetworkManager state is now CONNECTED_LOCAL
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6684] device (eth1): Activation: successful, device activated.
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6693] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6706] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6710] manager: NetworkManager state is now CONNECTED_SITE
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6715] device (eth0): Activation: successful, device activated.
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6721] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  1 12:08:16 np0005464891 NetworkManager[44940]: <info>  [1759334896.6725] manager: startup complete
Oct  1 12:08:16 np0005464891 systemd[1]: Finished Network Manager Wait Online.
Oct  1 12:08:17 np0005464891 python3.9[45157]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 12:08:21 np0005464891 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  1 12:08:22 np0005464891 systemd[1]: Starting man-db-cache-update.service...
Oct  1 12:08:22 np0005464891 systemd[1]: Reloading.
Oct  1 12:08:22 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:08:22 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:08:22 np0005464891 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  1 12:08:23 np0005464891 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  1 12:08:23 np0005464891 systemd[1]: Finished man-db-cache-update.service.
Oct  1 12:08:23 np0005464891 systemd[1]: run-ra731cda6c2ab495eae03991012f75949.service: Deactivated successfully.
Oct  1 12:08:24 np0005464891 python3.9[45620]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:08:24 np0005464891 python3.9[45772]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:08:25 np0005464891 python3.9[45926]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:08:26 np0005464891 python3.9[46078]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:08:26 np0005464891 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  1 12:08:26 np0005464891 python3.9[46230]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:08:27 np0005464891 python3.9[46382]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:08:28 np0005464891 python3.9[46534]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:08:29 np0005464891 python3.9[46657]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759334907.9317725-229-73879007777229/.source _original_basename=.bcljss34 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:08:29 np0005464891 python3.9[46809]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:08:30 np0005464891 python3.9[46961]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct  1 12:08:31 np0005464891 python3.9[47113]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:08:33 np0005464891 python3.9[47540]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct  1 12:08:34 np0005464891 ansible-async_wrapper.py[47715]: Invoked with j247654277925 300 /home/zuul/.ansible/tmp/ansible-tmp-1759334913.9656765-295-77923626403758/AnsiballZ_edpm_os_net_config.py _
Oct  1 12:08:34 np0005464891 ansible-async_wrapper.py[47718]: Starting module and watcher
Oct  1 12:08:34 np0005464891 ansible-async_wrapper.py[47718]: Start watching 47719 (300)
Oct  1 12:08:34 np0005464891 ansible-async_wrapper.py[47719]: Start module (47719)
Oct  1 12:08:34 np0005464891 ansible-async_wrapper.py[47715]: Return async_wrapper task started.
Oct  1 12:08:35 np0005464891 python3.9[47720]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct  1 12:08:35 np0005464891 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct  1 12:08:35 np0005464891 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct  1 12:08:35 np0005464891 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct  1 12:08:35 np0005464891 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct  1 12:08:35 np0005464891 kernel: cfg80211: failed to load regulatory.db
Oct  1 12:08:36 np0005464891 NetworkManager[44940]: <info>  [1759334916.9985] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47721 uid=0 result="success"
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0002] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47721 uid=0 result="success"
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0531] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0533] audit: op="connection-add" uuid="f12d3221-b565-4f1b-9993-2456def956c1" name="br-ex-br" pid=47721 uid=0 result="success"
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0551] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0552] audit: op="connection-add" uuid="6dd4d3f0-814a-4db8-b3b7-4d833cbfbbaf" name="br-ex-port" pid=47721 uid=0 result="success"
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0569] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0570] audit: op="connection-add" uuid="c51d3a07-7fa2-4352-9768-35abe898b4fe" name="eth1-port" pid=47721 uid=0 result="success"
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0585] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0587] audit: op="connection-add" uuid="7d23a46a-7227-4784-92f7-2cdc5075f14a" name="vlan20-port" pid=47721 uid=0 result="success"
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0600] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0602] audit: op="connection-add" uuid="3cd0fd2e-e8f9-410f-8a54-2a0dea579630" name="vlan21-port" pid=47721 uid=0 result="success"
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0617] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0618] audit: op="connection-add" uuid="ee23ecd9-7cfc-4fc8-a9de-1efcbbce996a" name="vlan22-port" pid=47721 uid=0 result="success"
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0632] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0637] audit: op="connection-add" uuid="e7359a08-48e7-4812-bb68-d987f4d5215d" name="vlan23-port" pid=47721 uid=0 result="success"
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0658] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,ipv6.method,ipv6.dhcp-timeout,ipv6.addr-gen-mode" pid=47721 uid=0 result="success"
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0689] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0691] audit: op="connection-add" uuid="b386a58f-316d-411b-842e-4e22e6330a4a" name="br-ex-if" pid=47721 uid=0 result="success"
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0745] audit: op="connection-update" uuid="aace28b1-91e6-58d1-b9ab-328121bea078" name="ci-private-network" args="connection.timestamp,connection.controller,connection.master,connection.port-type,connection.slave-type,ipv4.method,ipv4.routes,ipv4.addresses,ipv4.never-default,ipv4.routing-rules,ipv4.dns,ipv6.method,ipv6.routes,ipv6.addresses,ipv6.routing-rules,ipv6.addr-gen-mode,ipv6.dns,ovs-external-ids.data,ovs-interface.type" pid=47721 uid=0 result="success"
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0771] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0773] audit: op="connection-add" uuid="ab520ef8-792f-40b4-b327-941994e70f59" name="vlan20-if" pid=47721 uid=0 result="success"
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0799] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0801] audit: op="connection-add" uuid="08c68bac-169e-43bf-b682-0566027fc2a3" name="vlan21-if" pid=47721 uid=0 result="success"
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0821] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0823] audit: op="connection-add" uuid="99e9af9b-563f-476b-bdab-01a2b411a377" name="vlan22-if" pid=47721 uid=0 result="success"
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0847] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0849] audit: op="connection-add" uuid="eeaa66cc-fdff-4c2f-a568-d64b3330ef19" name="vlan23-if" pid=47721 uid=0 result="success"
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0866] audit: op="connection-delete" uuid="3bc61b06-f82d-36ce-9d85-6821398aba72" name="Wired connection 1" pid=47721 uid=0 result="success"
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0881] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0894] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0900] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (f12d3221-b565-4f1b-9993-2456def956c1)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0900] audit: op="connection-activate" uuid="f12d3221-b565-4f1b-9993-2456def956c1" name="br-ex-br" pid=47721 uid=0 result="success"
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0903] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0913] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0919] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (6dd4d3f0-814a-4db8-b3b7-4d833cbfbbaf)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0921] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0928] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0932] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (c51d3a07-7fa2-4352-9768-35abe898b4fe)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0935] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0943] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0947] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (7d23a46a-7227-4784-92f7-2cdc5075f14a)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0950] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0957] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0962] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (3cd0fd2e-e8f9-410f-8a54-2a0dea579630)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0964] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0971] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0976] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (ee23ecd9-7cfc-4fc8-a9de-1efcbbce996a)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0979] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0988] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0995] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (e7359a08-48e7-4812-bb68-d987f4d5215d)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0996] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.0999] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1002] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1009] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1016] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1021] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (b386a58f-316d-411b-842e-4e22e6330a4a)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1021] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1025] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1028] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1029] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1030] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1043] device (eth1): disconnecting for new activation request.
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1044] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1047] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1049] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1051] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1055] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1060] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1064] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (ab520ef8-792f-40b4-b327-941994e70f59)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1065] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1068] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1071] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1073] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1077] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1082] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1087] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (08c68bac-169e-43bf-b682-0566027fc2a3)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1088] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1090] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1092] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1094] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1097] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1102] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1107] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (99e9af9b-563f-476b-bdab-01a2b411a377)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1108] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1112] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1114] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1115] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1118] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1123] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1128] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (eeaa66cc-fdff-4c2f-a568-d64b3330ef19)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1129] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1132] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1135] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1137] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1139] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1158] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,ipv6.method,ipv6.addr-gen-mode" pid=47721 uid=0 result="success"
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1161] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1165] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1167] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1659] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1664] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1668] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1672] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 kernel: ovs-system: entered promiscuous mode
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1674] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1679] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1685] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1689] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1693] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 systemd-udevd[47727]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:08:37 np0005464891 kernel: Timeout policy base is empty
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1700] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1704] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1707] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1708] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1714] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1718] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1722] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1730] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1738] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1741] dhcp4 (eth0): canceled DHCP transaction
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1741] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1742] dhcp4 (eth0): state changed no lease
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1743] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1757] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1760] audit: op="device-reapply" interface="eth1" ifindex=3 pid=47721 uid=0 result="fail" reason="Device is not activated"
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1765] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct  1 12:08:37 np0005464891 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1826] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1830] dhcp4 (eth0): state changed new lease, address=38.102.83.177
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1867] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1971] device (eth1): Activation: starting connection 'ci-private-network' (aace28b1-91e6-58d1-b9ab-328121bea078)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1976] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1982] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1993] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.1999] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2004] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2010] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 kernel: br-ex: entered promiscuous mode
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2014] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2020] device (eth1): state change: config -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2022] device (eth1): released from controller device eth1
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2028] device (eth1): disconnecting for new activation request.
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2029] audit: op="connection-activate" uuid="aace28b1-91e6-58d1-b9ab-328121bea078" name="ci-private-network" pid=47721 uid=0 result="success"
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2030] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2033] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2035] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2037] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2039] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2041] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2049] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2055] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2060] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2065] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2070] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2076] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2080] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2085] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2090] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2095] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2099] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2105] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2116] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2126] device (eth1): Activation: starting connection 'ci-private-network' (aace28b1-91e6-58d1-b9ab-328121bea078)
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2129] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47721 uid=0 result="success"
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2134] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2145] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2152] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2154] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2170] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 kernel: vlan22: entered promiscuous mode
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2224] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2229] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2234] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2241] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2253] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 kernel: vlan20: entered promiscuous mode
Oct  1 12:08:37 np0005464891 systemd-udevd[47725]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2302] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2312] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2318] device (eth1): Activation: successful, device activated.
Oct  1 12:08:37 np0005464891 kernel: vlan23: entered promiscuous mode
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2341] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2375] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 systemd-udevd[47834]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:08:37 np0005464891 kernel: vlan21: entered promiscuous mode
Oct  1 12:08:37 np0005464891 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2427] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2428] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2434] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2442] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2472] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2529] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2530] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2536] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2543] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2553] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2592] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2601] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2614] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2617] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2623] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2632] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2634] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 12:08:37 np0005464891 NetworkManager[44940]: <info>  [1759334917.2641] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  1 12:08:38 np0005464891 NetworkManager[44940]: <info>  [1759334918.4082] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47721 uid=0 result="success"
Oct  1 12:08:38 np0005464891 NetworkManager[44940]: <info>  [1759334918.5793] checkpoint[0x5602035c4950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct  1 12:08:38 np0005464891 NetworkManager[44940]: <info>  [1759334918.5795] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47721 uid=0 result="success"
Oct  1 12:08:38 np0005464891 python3.9[48084]: ansible-ansible.legacy.async_status Invoked with jid=j247654277925.47715 mode=status _async_dir=/root/.ansible_async
Oct  1 12:08:38 np0005464891 NetworkManager[44940]: <info>  [1759334918.8607] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47721 uid=0 result="success"
Oct  1 12:08:38 np0005464891 NetworkManager[44940]: <info>  [1759334918.8623] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47721 uid=0 result="success"
Oct  1 12:08:39 np0005464891 NetworkManager[44940]: <info>  [1759334919.0393] audit: op="networking-control" arg="global-dns-configuration" pid=47721 uid=0 result="success"
Oct  1 12:08:39 np0005464891 NetworkManager[44940]: <info>  [1759334919.0418] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct  1 12:08:39 np0005464891 NetworkManager[44940]: <info>  [1759334919.0469] audit: op="networking-control" arg="global-dns-configuration" pid=47721 uid=0 result="success"
Oct  1 12:08:39 np0005464891 NetworkManager[44940]: <info>  [1759334919.0492] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47721 uid=0 result="success"
Oct  1 12:08:39 np0005464891 NetworkManager[44940]: <info>  [1759334919.1753] checkpoint[0x5602035c4a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct  1 12:08:39 np0005464891 NetworkManager[44940]: <info>  [1759334919.1757] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47721 uid=0 result="success"
Oct  1 12:08:39 np0005464891 ansible-async_wrapper.py[47719]: Module complete (47719)
Oct  1 12:08:39 np0005464891 ansible-async_wrapper.py[47718]: Done in kid B.
Oct  1 12:08:42 np0005464891 python3.9[48190]: ansible-ansible.legacy.async_status Invoked with jid=j247654277925.47715 mode=status _async_dir=/root/.ansible_async
Oct  1 12:08:42 np0005464891 python3.9[48290]: ansible-ansible.legacy.async_status Invoked with jid=j247654277925.47715 mode=cleanup _async_dir=/root/.ansible_async
Oct  1 12:08:43 np0005464891 python3.9[48442]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:08:44 np0005464891 python3.9[48565]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759334923.1782649-322-219465704629069/.source.returncode _original_basename=.nl7f4331 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:08:45 np0005464891 python3.9[48717]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:08:45 np0005464891 python3.9[48841]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759334924.4687533-338-56668956834432/.source.cfg _original_basename=.f_s_nlha follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:08:46 np0005464891 python3.9[48993]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 12:08:46 np0005464891 systemd[1]: Reloading Network Manager...
Oct  1 12:08:46 np0005464891 NetworkManager[44940]: <info>  [1759334926.5533] audit: op="reload" arg="0" pid=48997 uid=0 result="success"
Oct  1 12:08:46 np0005464891 NetworkManager[44940]: <info>  [1759334926.5547] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct  1 12:08:46 np0005464891 systemd[1]: Reloaded Network Manager.
Oct  1 12:08:46 np0005464891 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  1 12:08:47 np0005464891 systemd-logind[801]: Session 10 logged out. Waiting for processes to exit.
Oct  1 12:08:47 np0005464891 systemd[1]: session-10.scope: Deactivated successfully.
Oct  1 12:08:47 np0005464891 systemd[1]: session-10.scope: Consumed 50.420s CPU time.
Oct  1 12:08:47 np0005464891 systemd-logind[801]: Removed session 10.
Oct  1 12:08:52 np0005464891 systemd-logind[801]: New session 11 of user zuul.
Oct  1 12:08:52 np0005464891 systemd[1]: Started Session 11 of User zuul.
Oct  1 12:08:53 np0005464891 python3.9[49184]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:08:54 np0005464891 python3.9[49338]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 12:08:55 np0005464891 python3.9[49532]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:08:56 np0005464891 systemd[1]: session-11.scope: Deactivated successfully.
Oct  1 12:08:56 np0005464891 systemd[1]: session-11.scope: Consumed 2.585s CPU time.
Oct  1 12:08:56 np0005464891 systemd-logind[801]: Session 11 logged out. Waiting for processes to exit.
Oct  1 12:08:56 np0005464891 systemd-logind[801]: Removed session 11.
Oct  1 12:08:56 np0005464891 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  1 12:09:01 np0005464891 systemd-logind[801]: New session 12 of user zuul.
Oct  1 12:09:01 np0005464891 systemd[1]: Started Session 12 of User zuul.
Oct  1 12:09:02 np0005464891 python3.9[49714]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:09:03 np0005464891 python3.9[49868]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:09:04 np0005464891 python3.9[50024]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 12:09:05 np0005464891 python3.9[50109]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 12:09:07 np0005464891 python3.9[50262]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 12:09:08 np0005464891 python3.9[50458]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:09:09 np0005464891 python3.9[50610]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:09:09 np0005464891 systemd[1]: var-lib-containers-storage-overlay-compat3036202968-merged.mount: Deactivated successfully.
Oct  1 12:09:09 np0005464891 podman[50611]: 2025-10-01 16:09:09.708698789 +0000 UTC m=+0.079745081 system refresh
Oct  1 12:09:10 np0005464891 python3.9[50772]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:09:10 np0005464891 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 12:09:11 np0005464891 python3.9[50895]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759334949.9051075-79-270565149298947/.source.json follow=False _original_basename=podman_network_config.j2 checksum=ccdc18c22b613779df01ed28e04eb89cfbe68059 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:09:12 np0005464891 python3.9[51047]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:09:12 np0005464891 python3.9[51170]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759334951.5267096-94-103998590716263/.source.conf follow=False _original_basename=registries.conf.j2 checksum=a4fd3ca7d18166099562a65af8d6da655db34efc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:09:13 np0005464891 python3.9[51322]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:09:14 np0005464891 python3.9[51474]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:09:14 np0005464891 python3.9[51626]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:09:15 np0005464891 python3.9[51778]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:09:16 np0005464891 python3.9[51930]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 12:09:18 np0005464891 python3.9[52083]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:09:18 np0005464891 python3.9[52237]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:09:19 np0005464891 python3.9[52389]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:09:20 np0005464891 python3.9[52541]: ansible-service_facts Invoked
Oct  1 12:09:20 np0005464891 network[52558]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  1 12:09:20 np0005464891 network[52559]: 'network-scripts' will be removed from distribution in near future.
Oct  1 12:09:20 np0005464891 network[52560]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  1 12:09:26 np0005464891 python3.9[53014]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 12:09:28 np0005464891 python3.9[53167]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct  1 12:09:29 np0005464891 python3.9[53319]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:09:30 np0005464891 python3.9[53444]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759334969.0692077-226-90351111666052/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:09:31 np0005464891 python3.9[53598]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:09:31 np0005464891 python3.9[53723]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759334970.4180472-241-108548978387302/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:09:32 np0005464891 python3.9[53877]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:09:34 np0005464891 python3.9[54031]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 12:09:35 np0005464891 python3.9[54115]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:09:36 np0005464891 python3.9[54269]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 12:09:37 np0005464891 python3.9[54353]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 12:09:37 np0005464891 chronyd[803]: chronyd exiting
Oct  1 12:09:37 np0005464891 systemd[1]: Stopping NTP client/server...
Oct  1 12:09:37 np0005464891 systemd[1]: chronyd.service: Deactivated successfully.
Oct  1 12:09:37 np0005464891 systemd[1]: Stopped NTP client/server.
Oct  1 12:09:37 np0005464891 systemd[1]: Starting NTP client/server...
Oct  1 12:09:37 np0005464891 chronyd[54362]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  1 12:09:37 np0005464891 chronyd[54362]: Frequency -27.475 +/- 0.289 ppm read from /var/lib/chrony/drift
Oct  1 12:09:37 np0005464891 chronyd[54362]: Loaded seccomp filter (level 2)
Oct  1 12:09:37 np0005464891 systemd[1]: Started NTP client/server.
Oct  1 12:09:37 np0005464891 systemd[1]: session-12.scope: Deactivated successfully.
Oct  1 12:09:37 np0005464891 systemd[1]: session-12.scope: Consumed 25.536s CPU time.
Oct  1 12:09:37 np0005464891 systemd-logind[801]: Session 12 logged out. Waiting for processes to exit.
Oct  1 12:09:37 np0005464891 systemd-logind[801]: Removed session 12.
Oct  1 12:09:43 np0005464891 systemd-logind[801]: New session 13 of user zuul.
Oct  1 12:09:43 np0005464891 systemd[1]: Started Session 13 of User zuul.
Oct  1 12:09:44 np0005464891 python3.9[54543]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:09:45 np0005464891 python3.9[54695]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:09:45 np0005464891 python3.9[54818]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759334984.392333-34-264141320849937/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:09:46 np0005464891 systemd[1]: session-13.scope: Deactivated successfully.
Oct  1 12:09:46 np0005464891 systemd[1]: session-13.scope: Consumed 1.908s CPU time.
Oct  1 12:09:46 np0005464891 systemd-logind[801]: Session 13 logged out. Waiting for processes to exit.
Oct  1 12:09:46 np0005464891 systemd-logind[801]: Removed session 13.
Oct  1 12:09:51 np0005464891 systemd-logind[801]: New session 14 of user zuul.
Oct  1 12:09:51 np0005464891 systemd[1]: Started Session 14 of User zuul.
Oct  1 12:09:52 np0005464891 python3.9[54996]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:09:53 np0005464891 python3.9[55152]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:09:54 np0005464891 python3.9[55327]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:09:54 np0005464891 python3.9[55450]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1759334993.4728577-41-52347304156349/.source.json _original_basename=.5nx7_jh3 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:09:55 np0005464891 python3.9[55602]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:09:56 np0005464891 python3.9[55725]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759334995.2796042-64-262835562539567/.source _original_basename=.34nwiiy_ follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:09:57 np0005464891 python3.9[55877]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:09:57 np0005464891 python3.9[56029]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:09:58 np0005464891 python3.9[56152]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759334997.2256598-88-33312720207546/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:09:58 np0005464891 python3.9[56304]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:09:59 np0005464891 python3.9[56427]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759334998.468949-88-230282773997521/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:10:00 np0005464891 python3.9[56579]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:10:00 np0005464891 python3.9[56731]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:10:01 np0005464891 python3.9[56854]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335000.4390328-125-63529302959002/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:10:02 np0005464891 python3.9[57006]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:10:02 np0005464891 python3.9[57129]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335001.7323656-140-260494384755387/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:10:03 np0005464891 python3.9[57281]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:10:04 np0005464891 systemd[1]: Reloading.
Oct  1 12:10:04 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:10:04 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:10:04 np0005464891 systemd[1]: Reloading.
Oct  1 12:10:04 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:10:04 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:10:04 np0005464891 systemd[1]: Starting EDPM Container Shutdown...
Oct  1 12:10:04 np0005464891 systemd[1]: Finished EDPM Container Shutdown.
Oct  1 12:10:05 np0005464891 python3.9[57509]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:10:06 np0005464891 python3.9[57632]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335004.8583117-163-213282398765962/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:10:06 np0005464891 python3.9[57784]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:10:07 np0005464891 python3.9[57907]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335006.1893156-178-92027470270559/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:10:08 np0005464891 python3.9[58059]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:10:08 np0005464891 systemd[1]: Reloading.
Oct  1 12:10:08 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:10:08 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:10:08 np0005464891 systemd[1]: Reloading.
Oct  1 12:10:08 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:10:08 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:10:08 np0005464891 systemd[1]: Starting Create netns directory...
Oct  1 12:10:08 np0005464891 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  1 12:10:08 np0005464891 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  1 12:10:08 np0005464891 systemd[1]: Finished Create netns directory.
Oct  1 12:10:09 np0005464891 python3.9[58284]: ansible-ansible.builtin.service_facts Invoked
Oct  1 12:10:09 np0005464891 network[58301]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  1 12:10:09 np0005464891 network[58302]: 'network-scripts' will be removed from distribution in near future.
Oct  1 12:10:09 np0005464891 network[58303]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  1 12:10:13 np0005464891 python3.9[58567]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:10:13 np0005464891 systemd[1]: Reloading.
Oct  1 12:10:13 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:10:13 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:10:13 np0005464891 systemd[1]: Stopping IPv4 firewall with iptables...
Oct  1 12:10:13 np0005464891 iptables.init[58607]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct  1 12:10:13 np0005464891 iptables.init[58607]: iptables: Flushing firewall rules: [  OK  ]
Oct  1 12:10:13 np0005464891 systemd[1]: iptables.service: Deactivated successfully.
Oct  1 12:10:13 np0005464891 systemd[1]: Stopped IPv4 firewall with iptables.
Oct  1 12:10:14 np0005464891 python3.9[58803]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:10:15 np0005464891 python3.9[58957]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:10:15 np0005464891 systemd[1]: Reloading.
Oct  1 12:10:15 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:10:15 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:10:15 np0005464891 systemd[1]: Starting Netfilter Tables...
Oct  1 12:10:15 np0005464891 systemd[1]: Finished Netfilter Tables.
Oct  1 12:10:16 np0005464891 python3.9[59149]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:10:17 np0005464891 python3.9[59302]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:10:18 np0005464891 python3.9[59427]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759335016.9964137-247-241088585984427/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:10:19 np0005464891 python3.9[59578]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 12:10:44 np0005464891 systemd-logind[801]: Session 14 logged out. Waiting for processes to exit.
Oct  1 12:10:44 np0005464891 systemd[1]: session-14.scope: Deactivated successfully.
Oct  1 12:10:44 np0005464891 systemd[1]: session-14.scope: Consumed 20.409s CPU time.
Oct  1 12:10:44 np0005464891 systemd-logind[801]: Removed session 14.
Oct  1 12:10:56 np0005464891 systemd-logind[801]: New session 15 of user zuul.
Oct  1 12:10:56 np0005464891 systemd[1]: Started Session 15 of User zuul.
Oct  1 12:10:57 np0005464891 python3.9[59773]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:10:58 np0005464891 python3.9[59929]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:10:59 np0005464891 python3.9[60104]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:11:00 np0005464891 python3.9[60182]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.ucaxj64z recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:01 np0005464891 python3.9[60334]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:11:01 np0005464891 python3.9[60412]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.sbnurzv8 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:02 np0005464891 python3.9[60564]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:11:03 np0005464891 python3.9[60716]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:11:03 np0005464891 python3.9[60794]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:11:04 np0005464891 python3.9[60946]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:11:04 np0005464891 python3.9[61024]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:11:05 np0005464891 python3.9[61176]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:06 np0005464891 python3.9[61328]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:11:06 np0005464891 python3.9[61406]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:07 np0005464891 python3.9[61558]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:11:08 np0005464891 python3.9[61636]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:09 np0005464891 python3.9[61788]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:11:09 np0005464891 systemd[1]: Reloading.
Oct  1 12:11:09 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:11:09 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:11:10 np0005464891 python3.9[61978]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:11:11 np0005464891 python3.9[62056]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:11 np0005464891 python3.9[62208]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:11:12 np0005464891 python3.9[62286]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:13 np0005464891 python3.9[62438]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:11:13 np0005464891 systemd[1]: Reloading.
Oct  1 12:11:13 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:11:13 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:11:13 np0005464891 systemd[1]: Starting Create netns directory...
Oct  1 12:11:13 np0005464891 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  1 12:11:13 np0005464891 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  1 12:11:13 np0005464891 systemd[1]: Finished Create netns directory.
Oct  1 12:11:14 np0005464891 python3.9[62631]: ansible-ansible.builtin.service_facts Invoked
Oct  1 12:11:14 np0005464891 network[62648]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  1 12:11:14 np0005464891 network[62649]: 'network-scripts' will be removed from distribution in near future.
Oct  1 12:11:14 np0005464891 network[62650]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  1 12:11:19 np0005464891 python3.9[62913]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:11:19 np0005464891 python3.9[62991]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:20 np0005464891 python3.9[63143]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:21 np0005464891 python3.9[63295]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:11:22 np0005464891 python3.9[63418]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335080.6891232-216-158139534577454/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:23 np0005464891 python3.9[63570]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  1 12:11:23 np0005464891 systemd[1]: Starting Time & Date Service...
Oct  1 12:11:23 np0005464891 systemd[1]: Started Time & Date Service.
Oct  1 12:11:24 np0005464891 python3.9[63726]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:24 np0005464891 python3.9[63878]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:11:25 np0005464891 python3.9[64001]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759335084.3426898-251-180211090571583/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:26 np0005464891 python3.9[64153]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:11:26 np0005464891 python3.9[64276]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759335085.6474695-266-229029305729936/.source.yaml _original_basename=.qy21ct4c follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:27 np0005464891 python3.9[64428]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:11:28 np0005464891 python3.9[64551]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335086.9715977-281-212040954801873/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:29 np0005464891 python3.9[64703]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:11:29 np0005464891 python3.9[64856]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:11:30 np0005464891 python3[65009]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  1 12:11:31 np0005464891 python3.9[65161]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:11:32 np0005464891 python3.9[65284]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335091.1192777-320-8014747851930/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:33 np0005464891 python3.9[65436]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:11:33 np0005464891 python3.9[65559]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335092.6293757-335-135216177155483/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:34 np0005464891 python3.9[65711]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:11:35 np0005464891 python3.9[65834]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335094.0930605-350-162247879882666/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:36 np0005464891 python3.9[65986]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:11:36 np0005464891 python3.9[66109]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335095.585012-365-46434460294542/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:37 np0005464891 python3.9[66261]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:11:38 np0005464891 python3.9[66384]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335097.0446136-380-236018142414703/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:39 np0005464891 python3.9[66536]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:39 np0005464891 python3.9[66688]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:11:40 np0005464891 python3.9[66847]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:41 np0005464891 python3.9[67000]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:42 np0005464891 python3.9[67152]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:43 np0005464891 python3.9[67304]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  1 12:11:44 np0005464891 python3.9[67457]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  1 12:11:44 np0005464891 systemd[1]: session-15.scope: Deactivated successfully.
Oct  1 12:11:44 np0005464891 systemd[1]: session-15.scope: Consumed 35.617s CPU time.
Oct  1 12:11:44 np0005464891 systemd-logind[801]: Session 15 logged out. Waiting for processes to exit.
Oct  1 12:11:44 np0005464891 systemd-logind[801]: Removed session 15.
Oct  1 12:11:46 np0005464891 chronyd[54362]: Selected source 172.97.210.214 (pool.ntp.org)
Oct  1 12:11:49 np0005464891 systemd-logind[801]: New session 16 of user zuul.
Oct  1 12:11:49 np0005464891 systemd[1]: Started Session 16 of User zuul.
Oct  1 12:11:50 np0005464891 python3.9[67638]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct  1 12:11:51 np0005464891 python3.9[67790]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:11:52 np0005464891 python3.9[67942]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:11:53 np0005464891 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  1 12:11:53 np0005464891 python3.9[68096]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4hOP3QCxrOdsa7WBbefy0n2KvT8H5MFb7vhedousiQtIDtfQG88361GnDSbYiNsMctn9YWcyB3bvj3SNuQyq26F6oD3WCIGA6G85exG/LQ3aqQfASJCXnbGmmUDjSIfPcahJjp/RQegPuXZRNCzYOw1Ov4k+Q+ajDcYnoKOKhL5/I/NFUChQ4623v9YjiyGyFVw+obms9D+Xmu84VwfjkiIiM1KHkxz4cmZT3CEkEwjJEPTaRuoR5Ne2LLDZJ3sRpYiUX915IlN02zycveY1kLbbKRcbf5UMD4PhezWic783KHvTFq2n7f/coSTiu+yObWXdBZxwFfU7Eefos02eSRkpix/lO+8vRSqcp+A98+JAM/Xwdxkp+OFX8E3VSqjh67zKCygLiOhHUkkSbRCXDhsQxuR1LcOHQUaA+lTFzDPWA0/jH9gZDZ+lGQoXnLw4nruJhWKvVTMTm07/Tppp5bVuQsfnpTsCA5mYgxdEsUZMICn1sV+ZVgaXQ8XfTkLc=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINX8HwxLVwxENs9tCFtflAI5hi67Do7RqwmxtF2aVjMJ#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE1K9wYJJZgF+UvKp1gousr20Dexp/t9lquorq16XUwZo+6SmIYlX4LQwKuPQaD8nV6Hg+7ZlPBdy2aLkm4OOZc=#012 create=True mode=0644 path=/tmp/ansible.y0srs7lw state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:54 np0005464891 python3.9[68248]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.y0srs7lw' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:11:55 np0005464891 python3.9[68402]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.y0srs7lw state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:11:55 np0005464891 systemd[1]: session-16.scope: Deactivated successfully.
Oct  1 12:11:55 np0005464891 systemd[1]: session-16.scope: Consumed 4.063s CPU time.
Oct  1 12:11:55 np0005464891 systemd-logind[801]: Session 16 logged out. Waiting for processes to exit.
Oct  1 12:11:55 np0005464891 systemd-logind[801]: Removed session 16.
Oct  1 12:12:01 np0005464891 systemd-logind[801]: New session 17 of user zuul.
Oct  1 12:12:01 np0005464891 systemd[1]: Started Session 17 of User zuul.
Oct  1 12:12:02 np0005464891 python3.9[68580]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:12:03 np0005464891 python3.9[68736]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  1 12:12:04 np0005464891 python3.9[68890]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 12:12:05 np0005464891 python3.9[69043]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:12:06 np0005464891 python3.9[69196]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:12:07 np0005464891 python3.9[69350]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:12:08 np0005464891 python3.9[69505]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:12:08 np0005464891 systemd[1]: session-17.scope: Deactivated successfully.
Oct  1 12:12:08 np0005464891 systemd[1]: session-17.scope: Consumed 4.813s CPU time.
Oct  1 12:12:08 np0005464891 systemd-logind[801]: Session 17 logged out. Waiting for processes to exit.
Oct  1 12:12:08 np0005464891 systemd-logind[801]: Removed session 17.
Oct  1 12:12:13 np0005464891 systemd-logind[801]: New session 18 of user zuul.
Oct  1 12:12:13 np0005464891 systemd[1]: Started Session 18 of User zuul.
Oct  1 12:12:14 np0005464891 python3.9[69683]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:12:16 np0005464891 python3.9[69839]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 12:12:16 np0005464891 python3.9[69923]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  1 12:12:18 np0005464891 python3.9[70074]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:12:20 np0005464891 python3.9[70225]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  1 12:12:21 np0005464891 python3.9[70375]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:12:21 np0005464891 rsyslogd[1011]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 12:12:21 np0005464891 rsyslogd[1011]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 12:12:21 np0005464891 python3.9[70526]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:12:22 np0005464891 systemd[1]: session-18.scope: Deactivated successfully.
Oct  1 12:12:22 np0005464891 systemd[1]: session-18.scope: Consumed 6.163s CPU time.
Oct  1 12:12:22 np0005464891 systemd-logind[801]: Session 18 logged out. Waiting for processes to exit.
Oct  1 12:12:22 np0005464891 systemd-logind[801]: Removed session 18.
Oct  1 12:12:29 np0005464891 systemd-logind[801]: New session 19 of user zuul.
Oct  1 12:12:29 np0005464891 systemd[1]: Started Session 19 of User zuul.
Oct  1 12:12:35 np0005464891 python3[71292]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:12:37 np0005464891 python3[71387]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  1 12:12:38 np0005464891 python3[71414]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:12:39 np0005464891 python3[71440]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:12:39 np0005464891 kernel: loop: module loaded
Oct  1 12:12:39 np0005464891 kernel: loop3: detected capacity change from 0 to 41943040
Oct  1 12:12:39 np0005464891 python3[71475]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:12:39 np0005464891 lvm[71478]: PV /dev/loop3 not used.
Oct  1 12:12:39 np0005464891 lvm[71480]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  1 12:12:39 np0005464891 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct  1 12:12:39 np0005464891 lvm[71482]:  0 logical volume(s) in volume group "ceph_vg0" now active
Oct  1 12:12:40 np0005464891 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct  1 12:12:40 np0005464891 lvm[71490]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  1 12:12:40 np0005464891 lvm[71490]: VG ceph_vg0 finished
Oct  1 12:12:40 np0005464891 python3[71568]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 12:12:41 np0005464891 python3[71642]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759335160.3205163-32937-67571392476874/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:12:41 np0005464891 python3[71692]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:12:41 np0005464891 systemd[1]: Reloading.
Oct  1 12:12:41 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:12:41 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:12:42 np0005464891 systemd[1]: Starting Ceph OSD losetup...
Oct  1 12:12:42 np0005464891 bash[71732]: /dev/loop3: [64513]:4328139 (/var/lib/ceph-osd-0.img)
Oct  1 12:12:42 np0005464891 systemd[1]: Finished Ceph OSD losetup.
Oct  1 12:12:42 np0005464891 lvm[71733]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  1 12:12:42 np0005464891 lvm[71733]: VG ceph_vg0 finished
Oct  1 12:12:42 np0005464891 python3[71759]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  1 12:12:44 np0005464891 python3[71786]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:12:44 np0005464891 python3[71812]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:12:44 np0005464891 kernel: loop4: detected capacity change from 0 to 41943040
Oct  1 12:12:44 np0005464891 python3[71844]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:12:44 np0005464891 lvm[71847]: PV /dev/loop4 not used.
Oct  1 12:12:44 np0005464891 lvm[71849]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  1 12:12:44 np0005464891 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Oct  1 12:12:44 np0005464891 lvm[71857]:  1 logical volume(s) in volume group "ceph_vg1" now active
Oct  1 12:12:44 np0005464891 lvm[71860]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  1 12:12:44 np0005464891 lvm[71860]: VG ceph_vg1 finished
Oct  1 12:12:45 np0005464891 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Oct  1 12:12:45 np0005464891 python3[71938]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 12:12:45 np0005464891 python3[72011]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759335165.1905277-32964-16164518447753/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:12:46 np0005464891 python3[72061]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:12:47 np0005464891 systemd[1]: Reloading.
Oct  1 12:12:47 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:12:47 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:12:47 np0005464891 systemd[1]: Starting Ceph OSD losetup...
Oct  1 12:12:47 np0005464891 bash[72101]: /dev/loop4: [64513]:4328187 (/var/lib/ceph-osd-1.img)
Oct  1 12:12:47 np0005464891 systemd[1]: Finished Ceph OSD losetup.
Oct  1 12:12:47 np0005464891 lvm[72103]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  1 12:12:47 np0005464891 lvm[72103]: VG ceph_vg1 finished
Oct  1 12:12:48 np0005464891 python3[72129]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  1 12:12:49 np0005464891 python3[72156]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:12:49 np0005464891 python3[72182]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:12:49 np0005464891 kernel: loop5: detected capacity change from 0 to 41943040
Oct  1 12:12:50 np0005464891 python3[72213]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:12:50 np0005464891 lvm[72216]: PV /dev/loop5 not used.
Oct  1 12:12:50 np0005464891 lvm[72218]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  1 12:12:50 np0005464891 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Oct  1 12:12:50 np0005464891 lvm[72224]:  1 logical volume(s) in volume group "ceph_vg2" now active
Oct  1 12:12:50 np0005464891 lvm[72229]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  1 12:12:50 np0005464891 lvm[72229]: VG ceph_vg2 finished
Oct  1 12:12:50 np0005464891 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Oct  1 12:12:51 np0005464891 python3[72307]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 12:12:51 np0005464891 python3[72380]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759335170.7824204-32991-86312288366838/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:12:51 np0005464891 python3[72430]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:12:52 np0005464891 systemd[1]: Reloading.
Oct  1 12:12:52 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:12:52 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:12:52 np0005464891 systemd[1]: Starting Ceph OSD losetup...
Oct  1 12:12:52 np0005464891 bash[72471]: /dev/loop5: [64513]:4328614 (/var/lib/ceph-osd-2.img)
Oct  1 12:12:52 np0005464891 systemd[1]: Finished Ceph OSD losetup.
Oct  1 12:12:52 np0005464891 lvm[72473]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  1 12:12:52 np0005464891 lvm[72473]: VG ceph_vg2 finished
Oct  1 12:12:54 np0005464891 python3[72497]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:12:56 np0005464891 python3[72590]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  1 12:12:57 np0005464891 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  1 12:12:57 np0005464891 systemd[1]: Starting man-db-cache-update.service...
Oct  1 12:12:58 np0005464891 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  1 12:12:58 np0005464891 systemd[1]: Finished man-db-cache-update.service.
Oct  1 12:12:58 np0005464891 systemd[1]: run-r03145f6ac03b48b9a2c6647b7101ba83.service: Deactivated successfully.
Oct  1 12:12:58 np0005464891 python3[72705]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:12:58 np0005464891 python3[72733]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:12:59 np0005464891 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 12:12:59 np0005464891 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 12:12:59 np0005464891 python3[72798]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:13:00 np0005464891 python3[72824]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:13:00 np0005464891 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 12:13:00 np0005464891 python3[72902]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 12:13:01 np0005464891 python3[72975]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759335180.5466037-33138-227393314790767/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:13:02 np0005464891 python3[73077]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 12:13:02 np0005464891 python3[73150]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759335181.7651217-33156-224181841416529/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:13:02 np0005464891 python3[73200]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:13:03 np0005464891 python3[73228]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:13:03 np0005464891 python3[73256]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:13:04 np0005464891 python3[73284]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:13:04 np0005464891 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 12:13:04 np0005464891 systemd[1]: Created slice User Slice of UID 42477.
Oct  1 12:13:04 np0005464891 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  1 12:13:04 np0005464891 systemd-logind[801]: New session 20 of user ceph-admin.
Oct  1 12:13:04 np0005464891 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  1 12:13:04 np0005464891 systemd[1]: Starting User Manager for UID 42477...
Oct  1 12:13:04 np0005464891 systemd[73302]: Queued start job for default target Main User Target.
Oct  1 12:13:04 np0005464891 systemd[73302]: Created slice User Application Slice.
Oct  1 12:13:04 np0005464891 systemd[73302]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  1 12:13:04 np0005464891 systemd[73302]: Started Daily Cleanup of User's Temporary Directories.
Oct  1 12:13:04 np0005464891 systemd[73302]: Reached target Paths.
Oct  1 12:13:04 np0005464891 systemd[73302]: Reached target Timers.
Oct  1 12:13:04 np0005464891 systemd[73302]: Starting D-Bus User Message Bus Socket...
Oct  1 12:13:04 np0005464891 systemd[73302]: Starting Create User's Volatile Files and Directories...
Oct  1 12:13:04 np0005464891 systemd[73302]: Listening on D-Bus User Message Bus Socket.
Oct  1 12:13:04 np0005464891 systemd[73302]: Reached target Sockets.
Oct  1 12:13:04 np0005464891 systemd[73302]: Finished Create User's Volatile Files and Directories.
Oct  1 12:13:04 np0005464891 systemd[73302]: Reached target Basic System.
Oct  1 12:13:04 np0005464891 systemd[73302]: Reached target Main User Target.
Oct  1 12:13:04 np0005464891 systemd[73302]: Startup finished in 151ms.
Oct  1 12:13:04 np0005464891 systemd[1]: Started User Manager for UID 42477.
Oct  1 12:13:04 np0005464891 systemd[1]: Started Session 20 of User ceph-admin.
Oct  1 12:13:04 np0005464891 systemd[1]: session-20.scope: Deactivated successfully.
Oct  1 12:13:04 np0005464891 systemd-logind[801]: Session 20 logged out. Waiting for processes to exit.
Oct  1 12:13:04 np0005464891 systemd-logind[801]: Removed session 20.
Oct  1 12:13:07 np0005464891 systemd[1]: var-lib-containers-storage-overlay-compat3793727709-lower\x2dmapped.mount: Deactivated successfully.
Oct  1 12:13:14 np0005464891 systemd[1]: Stopping User Manager for UID 42477...
Oct  1 12:13:14 np0005464891 systemd[73302]: Activating special unit Exit the Session...
Oct  1 12:13:14 np0005464891 systemd[73302]: Stopped target Main User Target.
Oct  1 12:13:14 np0005464891 systemd[73302]: Stopped target Basic System.
Oct  1 12:13:14 np0005464891 systemd[73302]: Stopped target Paths.
Oct  1 12:13:14 np0005464891 systemd[73302]: Stopped target Sockets.
Oct  1 12:13:14 np0005464891 systemd[73302]: Stopped target Timers.
Oct  1 12:13:14 np0005464891 systemd[73302]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  1 12:13:14 np0005464891 systemd[73302]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  1 12:13:14 np0005464891 systemd[73302]: Closed D-Bus User Message Bus Socket.
Oct  1 12:13:14 np0005464891 systemd[73302]: Stopped Create User's Volatile Files and Directories.
Oct  1 12:13:14 np0005464891 systemd[73302]: Removed slice User Application Slice.
Oct  1 12:13:14 np0005464891 systemd[73302]: Reached target Shutdown.
Oct  1 12:13:14 np0005464891 systemd[73302]: Finished Exit the Session.
Oct  1 12:13:14 np0005464891 systemd[73302]: Reached target Exit the Session.
Oct  1 12:13:14 np0005464891 systemd[1]: user@42477.service: Deactivated successfully.
Oct  1 12:13:14 np0005464891 systemd[1]: Stopped User Manager for UID 42477.
Oct  1 12:13:14 np0005464891 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct  1 12:13:14 np0005464891 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct  1 12:13:14 np0005464891 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct  1 12:13:14 np0005464891 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct  1 12:13:14 np0005464891 systemd[1]: Removed slice User Slice of UID 42477.
Oct  1 12:13:18 np0005464891 podman[73356]: 2025-10-01 16:13:18.304916178 +0000 UTC m=+13.506477639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:18 np0005464891 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 12:13:18 np0005464891 podman[73417]: 2025-10-01 16:13:18.410977055 +0000 UTC m=+0.065773513 container create 3681ea80a3735cb2344802cebfef7afa4d27024bb329c06f30b380281b16b911 (image=quay.io/ceph/ceph:v18, name=nifty_curie, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:13:18 np0005464891 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct  1 12:13:18 np0005464891 systemd[1]: Started libpod-conmon-3681ea80a3735cb2344802cebfef7afa4d27024bb329c06f30b380281b16b911.scope.
Oct  1 12:13:18 np0005464891 podman[73417]: 2025-10-01 16:13:18.386069121 +0000 UTC m=+0.040865569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:18 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:18 np0005464891 podman[73417]: 2025-10-01 16:13:18.515896365 +0000 UTC m=+0.170692883 container init 3681ea80a3735cb2344802cebfef7afa4d27024bb329c06f30b380281b16b911 (image=quay.io/ceph/ceph:v18, name=nifty_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:13:18 np0005464891 podman[73417]: 2025-10-01 16:13:18.527985965 +0000 UTC m=+0.182782383 container start 3681ea80a3735cb2344802cebfef7afa4d27024bb329c06f30b380281b16b911 (image=quay.io/ceph/ceph:v18, name=nifty_curie, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 12:13:18 np0005464891 podman[73417]: 2025-10-01 16:13:18.531613415 +0000 UTC m=+0.186409923 container attach 3681ea80a3735cb2344802cebfef7afa4d27024bb329c06f30b380281b16b911 (image=quay.io/ceph/ceph:v18, name=nifty_curie, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:13:18 np0005464891 nifty_curie[73434]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct  1 12:13:18 np0005464891 systemd[1]: libpod-3681ea80a3735cb2344802cebfef7afa4d27024bb329c06f30b380281b16b911.scope: Deactivated successfully.
Oct  1 12:13:18 np0005464891 podman[73417]: 2025-10-01 16:13:18.911113637 +0000 UTC m=+0.565910125 container died 3681ea80a3735cb2344802cebfef7afa4d27024bb329c06f30b380281b16b911 (image=quay.io/ceph/ceph:v18, name=nifty_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct  1 12:13:18 np0005464891 systemd[1]: var-lib-containers-storage-overlay-c4c2728c3ab01ac689ab1314d21e7fe4295a092982901649f85902d7e21ed9e9-merged.mount: Deactivated successfully.
Oct  1 12:13:18 np0005464891 podman[73417]: 2025-10-01 16:13:18.975370494 +0000 UTC m=+0.630166922 container remove 3681ea80a3735cb2344802cebfef7afa4d27024bb329c06f30b380281b16b911 (image=quay.io/ceph/ceph:v18, name=nifty_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:13:18 np0005464891 systemd[1]: libpod-conmon-3681ea80a3735cb2344802cebfef7afa4d27024bb329c06f30b380281b16b911.scope: Deactivated successfully.
Oct  1 12:13:19 np0005464891 podman[73452]: 2025-10-01 16:13:19.035999442 +0000 UTC m=+0.035862168 container create b184d471eeab3482c3051d22c31f4caac9b5057fbcafd9ad2724cc5862b9b300 (image=quay.io/ceph/ceph:v18, name=recursing_varahamihira, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:13:19 np0005464891 systemd[1]: Started libpod-conmon-b184d471eeab3482c3051d22c31f4caac9b5057fbcafd9ad2724cc5862b9b300.scope.
Oct  1 12:13:19 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:19 np0005464891 podman[73452]: 2025-10-01 16:13:19.091055786 +0000 UTC m=+0.090918532 container init b184d471eeab3482c3051d22c31f4caac9b5057fbcafd9ad2724cc5862b9b300 (image=quay.io/ceph/ceph:v18, name=recursing_varahamihira, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 12:13:19 np0005464891 podman[73452]: 2025-10-01 16:13:19.097296954 +0000 UTC m=+0.097159680 container start b184d471eeab3482c3051d22c31f4caac9b5057fbcafd9ad2724cc5862b9b300 (image=quay.io/ceph/ceph:v18, name=recursing_varahamihira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:13:19 np0005464891 recursing_varahamihira[73469]: 167 167
Oct  1 12:13:19 np0005464891 podman[73452]: 2025-10-01 16:13:19.100163148 +0000 UTC m=+0.100025894 container attach b184d471eeab3482c3051d22c31f4caac9b5057fbcafd9ad2724cc5862b9b300 (image=quay.io/ceph/ceph:v18, name=recursing_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:13:19 np0005464891 systemd[1]: libpod-b184d471eeab3482c3051d22c31f4caac9b5057fbcafd9ad2724cc5862b9b300.scope: Deactivated successfully.
Oct  1 12:13:19 np0005464891 podman[73452]: 2025-10-01 16:13:19.101554468 +0000 UTC m=+0.101417194 container died b184d471eeab3482c3051d22c31f4caac9b5057fbcafd9ad2724cc5862b9b300 (image=quay.io/ceph/ceph:v18, name=recursing_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:13:19 np0005464891 podman[73452]: 2025-10-01 16:13:19.021865208 +0000 UTC m=+0.021727954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:19 np0005464891 podman[73452]: 2025-10-01 16:13:19.129403867 +0000 UTC m=+0.129266593 container remove b184d471eeab3482c3051d22c31f4caac9b5057fbcafd9ad2724cc5862b9b300 (image=quay.io/ceph/ceph:v18, name=recursing_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:13:19 np0005464891 systemd[1]: libpod-conmon-b184d471eeab3482c3051d22c31f4caac9b5057fbcafd9ad2724cc5862b9b300.scope: Deactivated successfully.
Oct  1 12:13:19 np0005464891 podman[73485]: 2025-10-01 16:13:19.19659731 +0000 UTC m=+0.045023902 container create f2997700bc509b04853a3ebe310d8ae3142e4228ef1e6631ee8cee7b9e147dd5 (image=quay.io/ceph/ceph:v18, name=goofy_almeida, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:13:19 np0005464891 systemd[1]: Started libpod-conmon-f2997700bc509b04853a3ebe310d8ae3142e4228ef1e6631ee8cee7b9e147dd5.scope.
Oct  1 12:13:19 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:19 np0005464891 podman[73485]: 2025-10-01 16:13:19.171938653 +0000 UTC m=+0.020365235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:19 np0005464891 podman[73485]: 2025-10-01 16:13:19.27355989 +0000 UTC m=+0.121986472 container init f2997700bc509b04853a3ebe310d8ae3142e4228ef1e6631ee8cee7b9e147dd5 (image=quay.io/ceph/ceph:v18, name=goofy_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  1 12:13:19 np0005464891 podman[73485]: 2025-10-01 16:13:19.279769798 +0000 UTC m=+0.128196350 container start f2997700bc509b04853a3ebe310d8ae3142e4228ef1e6631ee8cee7b9e147dd5 (image=quay.io/ceph/ceph:v18, name=goofy_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:13:19 np0005464891 podman[73485]: 2025-10-01 16:13:19.285052546 +0000 UTC m=+0.133479138 container attach f2997700bc509b04853a3ebe310d8ae3142e4228ef1e6631ee8cee7b9e147dd5 (image=quay.io/ceph/ceph:v18, name=goofy_almeida, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:13:19 np0005464891 goofy_almeida[73501]: AQAfU91oN6A3EhAADxaWMOFDitd3nESBggxDEg==
Oct  1 12:13:19 np0005464891 podman[73485]: 2025-10-01 16:13:19.309256464 +0000 UTC m=+0.157683056 container died f2997700bc509b04853a3ebe310d8ae3142e4228ef1e6631ee8cee7b9e147dd5 (image=quay.io/ceph/ceph:v18, name=goofy_almeida, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 12:13:19 np0005464891 systemd[1]: libpod-f2997700bc509b04853a3ebe310d8ae3142e4228ef1e6631ee8cee7b9e147dd5.scope: Deactivated successfully.
Oct  1 12:13:19 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ba3378fae60c57a57002f4d92fe53ce591d5a8682643d755689667f172e0de07-merged.mount: Deactivated successfully.
Oct  1 12:13:19 np0005464891 podman[73485]: 2025-10-01 16:13:19.347555654 +0000 UTC m=+0.195982246 container remove f2997700bc509b04853a3ebe310d8ae3142e4228ef1e6631ee8cee7b9e147dd5 (image=quay.io/ceph/ceph:v18, name=goofy_almeida, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:13:19 np0005464891 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 12:13:19 np0005464891 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 12:13:19 np0005464891 systemd[1]: libpod-conmon-f2997700bc509b04853a3ebe310d8ae3142e4228ef1e6631ee8cee7b9e147dd5.scope: Deactivated successfully.
Oct  1 12:13:19 np0005464891 podman[73522]: 2025-10-01 16:13:19.420399743 +0000 UTC m=+0.048248543 container create a6484a485ffa5c46be3f0647467151f7a6569934c87dff2073ba722870a68173 (image=quay.io/ceph/ceph:v18, name=silly_elgamal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Oct  1 12:13:19 np0005464891 systemd[1]: Started libpod-conmon-a6484a485ffa5c46be3f0647467151f7a6569934c87dff2073ba722870a68173.scope.
Oct  1 12:13:19 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:19 np0005464891 podman[73522]: 2025-10-01 16:13:19.483110827 +0000 UTC m=+0.110959657 container init a6484a485ffa5c46be3f0647467151f7a6569934c87dff2073ba722870a68173 (image=quay.io/ceph/ceph:v18, name=silly_elgamal, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 12:13:19 np0005464891 podman[73522]: 2025-10-01 16:13:19.492372752 +0000 UTC m=+0.120221562 container start a6484a485ffa5c46be3f0647467151f7a6569934c87dff2073ba722870a68173 (image=quay.io/ceph/ceph:v18, name=silly_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:13:19 np0005464891 podman[73522]: 2025-10-01 16:13:19.398292751 +0000 UTC m=+0.026141581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:19 np0005464891 podman[73522]: 2025-10-01 16:13:19.495859839 +0000 UTC m=+0.123708719 container attach a6484a485ffa5c46be3f0647467151f7a6569934c87dff2073ba722870a68173 (image=quay.io/ceph/ceph:v18, name=silly_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:13:19 np0005464891 silly_elgamal[73538]: AQAfU91omsLTHhAAkzz6IM+qvr2OZF44cGTJ+g==
Oct  1 12:13:19 np0005464891 systemd[1]: libpod-a6484a485ffa5c46be3f0647467151f7a6569934c87dff2073ba722870a68173.scope: Deactivated successfully.
Oct  1 12:13:19 np0005464891 podman[73522]: 2025-10-01 16:13:19.521887777 +0000 UTC m=+0.149736607 container died a6484a485ffa5c46be3f0647467151f7a6569934c87dff2073ba722870a68173 (image=quay.io/ceph/ceph:v18, name=silly_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  1 12:13:19 np0005464891 podman[73522]: 2025-10-01 16:13:19.560103297 +0000 UTC m=+0.187952097 container remove a6484a485ffa5c46be3f0647467151f7a6569934c87dff2073ba722870a68173 (image=quay.io/ceph/ceph:v18, name=silly_elgamal, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 12:13:19 np0005464891 systemd[1]: libpod-conmon-a6484a485ffa5c46be3f0647467151f7a6569934c87dff2073ba722870a68173.scope: Deactivated successfully.
Oct  1 12:13:19 np0005464891 podman[73557]: 2025-10-01 16:13:19.622244597 +0000 UTC m=+0.044036458 container create aa96852ece7469ac79132ee6d2a9cf2bbfdb3749913569a4f02512be6bdd3ef1 (image=quay.io/ceph/ceph:v18, name=relaxed_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  1 12:13:19 np0005464891 systemd[1]: Started libpod-conmon-aa96852ece7469ac79132ee6d2a9cf2bbfdb3749913569a4f02512be6bdd3ef1.scope.
Oct  1 12:13:19 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:19 np0005464891 podman[73557]: 2025-10-01 16:13:19.602435247 +0000 UTC m=+0.024227138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:20 np0005464891 podman[73557]: 2025-10-01 16:13:20.285130466 +0000 UTC m=+0.706922417 container init aa96852ece7469ac79132ee6d2a9cf2bbfdb3749913569a4f02512be6bdd3ef1 (image=quay.io/ceph/ceph:v18, name=relaxed_golick, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:13:20 np0005464891 podman[73557]: 2025-10-01 16:13:20.294728549 +0000 UTC m=+0.716520400 container start aa96852ece7469ac79132ee6d2a9cf2bbfdb3749913569a4f02512be6bdd3ef1 (image=quay.io/ceph/ceph:v18, name=relaxed_golick, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:13:20 np0005464891 podman[73557]: 2025-10-01 16:13:20.303691609 +0000 UTC m=+0.725483540 container attach aa96852ece7469ac79132ee6d2a9cf2bbfdb3749913569a4f02512be6bdd3ef1 (image=quay.io/ceph/ceph:v18, name=relaxed_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:13:20 np0005464891 relaxed_golick[73573]: AQAgU91olp/MExAAHomfIgu7LJlnui9Rrr5YXw==
Oct  1 12:13:20 np0005464891 systemd[1]: libpod-aa96852ece7469ac79132ee6d2a9cf2bbfdb3749913569a4f02512be6bdd3ef1.scope: Deactivated successfully.
Oct  1 12:13:20 np0005464891 podman[73580]: 2025-10-01 16:13:20.398042135 +0000 UTC m=+0.038831764 container died aa96852ece7469ac79132ee6d2a9cf2bbfdb3749913569a4f02512be6bdd3ef1 (image=quay.io/ceph/ceph:v18, name=relaxed_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 12:13:20 np0005464891 systemd[1]: var-lib-containers-storage-overlay-c694862ebb99a5869c427dfe1002d059662f81279cf418f682d97afca67989e5-merged.mount: Deactivated successfully.
Oct  1 12:13:20 np0005464891 podman[73580]: 2025-10-01 16:13:20.433552844 +0000 UTC m=+0.074342443 container remove aa96852ece7469ac79132ee6d2a9cf2bbfdb3749913569a4f02512be6bdd3ef1 (image=quay.io/ceph/ceph:v18, name=relaxed_golick, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  1 12:13:20 np0005464891 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 12:13:20 np0005464891 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 12:13:20 np0005464891 systemd[1]: libpod-conmon-aa96852ece7469ac79132ee6d2a9cf2bbfdb3749913569a4f02512be6bdd3ef1.scope: Deactivated successfully.
Oct  1 12:13:20 np0005464891 podman[73595]: 2025-10-01 16:13:20.525070088 +0000 UTC m=+0.057123651 container create 11ce419b0d5ac53a22eef920e2f188fdfc4c2578454d45812090d2715adfa5f8 (image=quay.io/ceph/ceph:v18, name=frosty_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct  1 12:13:20 np0005464891 systemd[1]: Started libpod-conmon-11ce419b0d5ac53a22eef920e2f188fdfc4c2578454d45812090d2715adfa5f8.scope.
Oct  1 12:13:20 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:20 np0005464891 podman[73595]: 2025-10-01 16:13:20.497874633 +0000 UTC m=+0.029928276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89ec248ee47bb0af951305292616a8ba7e47de75a026aacce2648b75b8f30d2e/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:20 np0005464891 podman[73595]: 2025-10-01 16:13:20.604750038 +0000 UTC m=+0.136803621 container init 11ce419b0d5ac53a22eef920e2f188fdfc4c2578454d45812090d2715adfa5f8 (image=quay.io/ceph/ceph:v18, name=frosty_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  1 12:13:20 np0005464891 podman[73595]: 2025-10-01 16:13:20.611246142 +0000 UTC m=+0.143299705 container start 11ce419b0d5ac53a22eef920e2f188fdfc4c2578454d45812090d2715adfa5f8 (image=quay.io/ceph/ceph:v18, name=frosty_tharp, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  1 12:13:20 np0005464891 podman[73595]: 2025-10-01 16:13:20.615570438 +0000 UTC m=+0.147624031 container attach 11ce419b0d5ac53a22eef920e2f188fdfc4c2578454d45812090d2715adfa5f8 (image=quay.io/ceph/ceph:v18, name=frosty_tharp, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  1 12:13:20 np0005464891 frosty_tharp[73611]: /usr/bin/monmaptool: monmap file /tmp/monmap
Oct  1 12:13:20 np0005464891 frosty_tharp[73611]: setting min_mon_release = pacific
Oct  1 12:13:20 np0005464891 frosty_tharp[73611]: /usr/bin/monmaptool: set fsid to 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5
Oct  1 12:13:20 np0005464891 frosty_tharp[73611]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Oct  1 12:13:20 np0005464891 systemd[1]: libpod-11ce419b0d5ac53a22eef920e2f188fdfc4c2578454d45812090d2715adfa5f8.scope: Deactivated successfully.
Oct  1 12:13:20 np0005464891 podman[73595]: 2025-10-01 16:13:20.66068064 +0000 UTC m=+0.192734233 container died 11ce419b0d5ac53a22eef920e2f188fdfc4c2578454d45812090d2715adfa5f8 (image=quay.io/ceph/ceph:v18, name=frosty_tharp, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:13:20 np0005464891 podman[73595]: 2025-10-01 16:13:20.688305414 +0000 UTC m=+0.220359017 container remove 11ce419b0d5ac53a22eef920e2f188fdfc4c2578454d45812090d2715adfa5f8 (image=quay.io/ceph/ceph:v18, name=frosty_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 12:13:20 np0005464891 systemd[1]: libpod-conmon-11ce419b0d5ac53a22eef920e2f188fdfc4c2578454d45812090d2715adfa5f8.scope: Deactivated successfully.
Oct  1 12:13:20 np0005464891 podman[73630]: 2025-10-01 16:13:20.746568659 +0000 UTC m=+0.038971037 container create 9378bc8670d70eca0134395f6885b0b1d33f2804ebaac78ec1175ca5a7f5e335 (image=quay.io/ceph/ceph:v18, name=focused_euclid, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:13:20 np0005464891 systemd[1]: Started libpod-conmon-9378bc8670d70eca0134395f6885b0b1d33f2804ebaac78ec1175ca5a7f5e335.scope.
Oct  1 12:13:20 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ebe1c47eba24bc9ce7fd2479d4ffe4574df900e2df6af16eb068f385e973c5/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ebe1c47eba24bc9ce7fd2479d4ffe4574df900e2df6af16eb068f385e973c5/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ebe1c47eba24bc9ce7fd2479d4ffe4574df900e2df6af16eb068f385e973c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ebe1c47eba24bc9ce7fd2479d4ffe4574df900e2df6af16eb068f385e973c5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:20 np0005464891 podman[73630]: 2025-10-01 16:13:20.810246053 +0000 UTC m=+0.102648401 container init 9378bc8670d70eca0134395f6885b0b1d33f2804ebaac78ec1175ca5a7f5e335 (image=quay.io/ceph/ceph:v18, name=focused_euclid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  1 12:13:20 np0005464891 podman[73630]: 2025-10-01 16:13:20.81952747 +0000 UTC m=+0.111929808 container start 9378bc8670d70eca0134395f6885b0b1d33f2804ebaac78ec1175ca5a7f5e335 (image=quay.io/ceph/ceph:v18, name=focused_euclid, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 12:13:20 np0005464891 podman[73630]: 2025-10-01 16:13:20.822322972 +0000 UTC m=+0.114725330 container attach 9378bc8670d70eca0134395f6885b0b1d33f2804ebaac78ec1175ca5a7f5e335 (image=quay.io/ceph/ceph:v18, name=focused_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 12:13:20 np0005464891 podman[73630]: 2025-10-01 16:13:20.727506175 +0000 UTC m=+0.019908533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:20 np0005464891 systemd[1]: libpod-9378bc8670d70eca0134395f6885b0b1d33f2804ebaac78ec1175ca5a7f5e335.scope: Deactivated successfully.
Oct  1 12:13:20 np0005464891 podman[73630]: 2025-10-01 16:13:20.89920785 +0000 UTC m=+0.191610228 container died 9378bc8670d70eca0134395f6885b0b1d33f2804ebaac78ec1175ca5a7f5e335 (image=quay.io/ceph/ceph:v18, name=focused_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:13:20 np0005464891 podman[73630]: 2025-10-01 16:13:20.942930212 +0000 UTC m=+0.235332590 container remove 9378bc8670d70eca0134395f6885b0b1d33f2804ebaac78ec1175ca5a7f5e335 (image=quay.io/ceph/ceph:v18, name=focused_euclid, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:13:20 np0005464891 systemd[1]: libpod-conmon-9378bc8670d70eca0134395f6885b0b1d33f2804ebaac78ec1175ca5a7f5e335.scope: Deactivated successfully.
Oct  1 12:13:21 np0005464891 systemd[1]: Reloading.
Oct  1 12:13:21 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:13:21 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:13:21 np0005464891 systemd[1]: Reloading.
Oct  1 12:13:21 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:13:21 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:13:21 np0005464891 systemd[1]: Reached target All Ceph clusters and services.
Oct  1 12:13:21 np0005464891 systemd[1]: Reloading.
Oct  1 12:13:21 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:13:21 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:13:21 np0005464891 systemd[1]: Reached target Ceph cluster 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5.
Oct  1 12:13:21 np0005464891 systemd[1]: Reloading.
Oct  1 12:13:21 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:13:21 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:13:22 np0005464891 systemd[1]: Reloading.
Oct  1 12:13:22 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:13:22 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:13:22 np0005464891 systemd[1]: Created slice Slice /system/ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5.
Oct  1 12:13:22 np0005464891 systemd[1]: Reached target System Time Set.
Oct  1 12:13:22 np0005464891 systemd[1]: Reached target System Time Synchronized.
Oct  1 12:13:22 np0005464891 systemd[1]: Starting Ceph mon.compute-0 for 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5...
Oct  1 12:13:22 np0005464891 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 12:13:22 np0005464891 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 12:13:22 np0005464891 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 12:13:22 np0005464891 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 12:13:22 np0005464891 podman[73926]: 2025-10-01 16:13:22.623335549 +0000 UTC m=+0.043843776 container create 8a2a2365489553be4f9208242545a4c1eb690ca7f0e2859817fb3449ddafb9dd (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:13:22 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2dbe3e648be96e4a6a13a45129440385aa8d2e875b40bce12f5eace796c0e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:22 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2dbe3e648be96e4a6a13a45129440385aa8d2e875b40bce12f5eace796c0e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:22 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2dbe3e648be96e4a6a13a45129440385aa8d2e875b40bce12f5eace796c0e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:22 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2dbe3e648be96e4a6a13a45129440385aa8d2e875b40bce12f5eace796c0e5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:22 np0005464891 podman[73926]: 2025-10-01 16:13:22.679888125 +0000 UTC m=+0.100396392 container init 8a2a2365489553be4f9208242545a4c1eb690ca7f0e2859817fb3449ddafb9dd (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:13:22 np0005464891 podman[73926]: 2025-10-01 16:13:22.60586485 +0000 UTC m=+0.026373077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:22 np0005464891 podman[73926]: 2025-10-01 16:13:22.703640162 +0000 UTC m=+0.124148399 container start 8a2a2365489553be4f9208242545a4c1eb690ca7f0e2859817fb3449ddafb9dd (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:13:22 np0005464891 bash[73926]: 8a2a2365489553be4f9208242545a4c1eb690ca7f0e2859817fb3449ddafb9dd
Oct  1 12:13:22 np0005464891 systemd[1]: Started Ceph mon.compute-0 for 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5.
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: set uid:gid to 167:167 (ceph:ceph)
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: pidfile_write: ignore empty --pid-file
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: load: jerasure load: lrc 
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: RocksDB version: 7.9.2
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: Git sha 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: DB SUMMARY
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: DB Session ID:  9NEDVR4JQKSHF5V3KQHV
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: CURRENT file:  CURRENT
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: IDENTITY file:  IDENTITY
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                         Options.error_if_exists: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                       Options.create_if_missing: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                         Options.paranoid_checks: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                                     Options.env: 0x563f3fadac40
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                                      Options.fs: PosixFileSystem
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                                Options.info_log: 0x563f40cd8e80
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                Options.max_file_opening_threads: 16
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                              Options.statistics: (nil)
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                               Options.use_fsync: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                       Options.max_log_file_size: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                         Options.allow_fallocate: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                        Options.use_direct_reads: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:          Options.create_missing_column_families: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                              Options.db_log_dir: 
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                                 Options.wal_dir: 
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                   Options.advise_random_on_open: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                    Options.write_buffer_manager: 0x563f40ce8b40
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                            Options.rate_limiter: (nil)
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                  Options.unordered_write: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                               Options.row_cache: None
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                              Options.wal_filter: None
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:             Options.allow_ingest_behind: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:             Options.two_write_queues: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:             Options.manual_wal_flush: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:             Options.wal_compression: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:             Options.atomic_flush: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                 Options.log_readahead_size: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:             Options.allow_data_in_errors: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:             Options.db_host_id: __hostname__
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:             Options.max_background_jobs: 2
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:             Options.max_background_compactions: -1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:             Options.max_subcompactions: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:             Options.max_total_wal_size: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                          Options.max_open_files: -1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                          Options.bytes_per_sync: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:       Options.compaction_readahead_size: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                  Options.max_background_flushes: -1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: Compression algorithms supported:
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: #011kZSTD supported: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: #011kXpressCompression supported: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: #011kBZip2Compression supported: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: #011kLZ4Compression supported: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: #011kZlibCompression supported: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: #011kSnappyCompression supported: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:           Options.merge_operator: 
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:        Options.compaction_filter: None
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f40cd8a80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563f40cd11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:        Options.write_buffer_size: 33554432
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:  Options.max_write_buffer_number: 2
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:          Options.compression: NoCompression
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:             Options.num_levels: 7
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4cdc7836-3ae4-40a3-8b66-898644585cc0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335202760358, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335202762993, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "9NEDVR4JQKSHF5V3KQHV", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335202763241, "job": 1, "event": "recovery_finished"}
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x563f40cfae00
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: DB pointer 0x563f40d84000
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563f40cd11f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@-1(???) e0 preinit fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(probing) e0 win_standalone_election
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(probing) e1 win_standalone_election
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: paxos.0).electionLogic(2) init, last seen epoch 2
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-10-01T16:13:20.850703Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Mon Sep 15 21:46:13 UTC 2025,kernel_version=5.14.0-617.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864116,os=Linux}
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader).mds e1 new map
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: log_channel(cluster) log [DBG] : fsmap 
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mkfs 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct  1 12:13:22 np0005464891 podman[73946]: 2025-10-01 16:13:22.820303595 +0000 UTC m=+0.064092195 container create 6edd5c1efe3869159240b9e0d681eaf7e317b1f5dd594509578367c0898d71db (image=quay.io/ceph/ceph:v18, name=infallible_thompson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 12:13:22 np0005464891 ceph-mon[73945]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  1 12:13:22 np0005464891 systemd[1]: Started libpod-conmon-6edd5c1efe3869159240b9e0d681eaf7e317b1f5dd594509578367c0898d71db.scope.
Oct  1 12:13:22 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:22 np0005464891 podman[73946]: 2025-10-01 16:13:22.800331291 +0000 UTC m=+0.044119901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:22 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c2f987749c1c20a43f180f70d16300d4cc8a54d999ac3dc0ec07bcf5057701f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:22 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c2f987749c1c20a43f180f70d16300d4cc8a54d999ac3dc0ec07bcf5057701f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:22 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c2f987749c1c20a43f180f70d16300d4cc8a54d999ac3dc0ec07bcf5057701f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:22 np0005464891 podman[73946]: 2025-10-01 16:13:22.910377836 +0000 UTC m=+0.154166436 container init 6edd5c1efe3869159240b9e0d681eaf7e317b1f5dd594509578367c0898d71db (image=quay.io/ceph/ceph:v18, name=infallible_thompson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 12:13:22 np0005464891 podman[73946]: 2025-10-01 16:13:22.921566185 +0000 UTC m=+0.165354775 container start 6edd5c1efe3869159240b9e0d681eaf7e317b1f5dd594509578367c0898d71db (image=quay.io/ceph/ceph:v18, name=infallible_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct  1 12:13:22 np0005464891 podman[73946]: 2025-10-01 16:13:22.924440229 +0000 UTC m=+0.168228819 container attach 6edd5c1efe3869159240b9e0d681eaf7e317b1f5dd594509578367c0898d71db (image=quay.io/ceph/ceph:v18, name=infallible_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:13:23 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct  1 12:13:23 np0005464891 ceph-mon[73945]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2337228104' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  1 12:13:23 np0005464891 infallible_thompson[74000]:  cluster:
Oct  1 12:13:23 np0005464891 infallible_thompson[74000]:    id:     6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5
Oct  1 12:13:23 np0005464891 infallible_thompson[74000]:    health: HEALTH_OK
Oct  1 12:13:23 np0005464891 infallible_thompson[74000]: 
Oct  1 12:13:23 np0005464891 infallible_thompson[74000]:  services:
Oct  1 12:13:23 np0005464891 infallible_thompson[74000]:    mon: 1 daemons, quorum compute-0 (age 0.527143s)
Oct  1 12:13:23 np0005464891 infallible_thompson[74000]:    mgr: no daemons active
Oct  1 12:13:23 np0005464891 infallible_thompson[74000]:    osd: 0 osds: 0 up, 0 in
Oct  1 12:13:23 np0005464891 infallible_thompson[74000]: 
Oct  1 12:13:23 np0005464891 infallible_thompson[74000]:  data:
Oct  1 12:13:23 np0005464891 infallible_thompson[74000]:    pools:   0 pools, 0 pgs
Oct  1 12:13:23 np0005464891 infallible_thompson[74000]:    objects: 0 objects, 0 B
Oct  1 12:13:23 np0005464891 infallible_thompson[74000]:    usage:   0 B used, 0 B / 0 B avail
Oct  1 12:13:23 np0005464891 infallible_thompson[74000]:    pgs:     
Oct  1 12:13:23 np0005464891 infallible_thompson[74000]: 
Oct  1 12:13:23 np0005464891 systemd[1]: libpod-6edd5c1efe3869159240b9e0d681eaf7e317b1f5dd594509578367c0898d71db.scope: Deactivated successfully.
Oct  1 12:13:23 np0005464891 podman[73946]: 2025-10-01 16:13:23.34440322 +0000 UTC m=+0.588191820 container died 6edd5c1efe3869159240b9e0d681eaf7e317b1f5dd594509578367c0898d71db (image=quay.io/ceph/ceph:v18, name=infallible_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Oct  1 12:13:23 np0005464891 podman[73946]: 2025-10-01 16:13:23.386174718 +0000 UTC m=+0.629963318 container remove 6edd5c1efe3869159240b9e0d681eaf7e317b1f5dd594509578367c0898d71db (image=quay.io/ceph/ceph:v18, name=infallible_thompson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  1 12:13:23 np0005464891 systemd[1]: libpod-conmon-6edd5c1efe3869159240b9e0d681eaf7e317b1f5dd594509578367c0898d71db.scope: Deactivated successfully.
Oct  1 12:13:23 np0005464891 podman[74038]: 2025-10-01 16:13:23.451879537 +0000 UTC m=+0.044117990 container create bd3199900be84d860015278ca03b6e951841d6d048fe712cb3fd71bc6b067916 (image=quay.io/ceph/ceph:v18, name=laughing_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 12:13:23 np0005464891 systemd[1]: Started libpod-conmon-bd3199900be84d860015278ca03b6e951841d6d048fe712cb3fd71bc6b067916.scope.
Oct  1 12:13:23 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:23 np0005464891 podman[74038]: 2025-10-01 16:13:23.431157197 +0000 UTC m=+0.023395680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:23 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae6ce3ac68e1186c51e2abc6c563999a6bf16ae52ef7a40a4eed55fa8156fc3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:23 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae6ce3ac68e1186c51e2abc6c563999a6bf16ae52ef7a40a4eed55fa8156fc3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:23 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae6ce3ac68e1186c51e2abc6c563999a6bf16ae52ef7a40a4eed55fa8156fc3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:23 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae6ce3ac68e1186c51e2abc6c563999a6bf16ae52ef7a40a4eed55fa8156fc3/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:23 np0005464891 podman[74038]: 2025-10-01 16:13:23.54605149 +0000 UTC m=+0.138289973 container init bd3199900be84d860015278ca03b6e951841d6d048fe712cb3fd71bc6b067916 (image=quay.io/ceph/ceph:v18, name=laughing_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 12:13:23 np0005464891 podman[74038]: 2025-10-01 16:13:23.560122393 +0000 UTC m=+0.152360846 container start bd3199900be84d860015278ca03b6e951841d6d048fe712cb3fd71bc6b067916 (image=quay.io/ceph/ceph:v18, name=laughing_goodall, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:13:23 np0005464891 podman[74038]: 2025-10-01 16:13:23.563333524 +0000 UTC m=+0.155571977 container attach bd3199900be84d860015278ca03b6e951841d6d048fe712cb3fd71bc6b067916 (image=quay.io/ceph/ceph:v18, name=laughing_goodall, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:13:23 np0005464891 ceph-mon[73945]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  1 12:13:23 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct  1 12:13:23 np0005464891 ceph-mon[73945]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3796464522' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  1 12:13:23 np0005464891 ceph-mon[73945]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3796464522' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  1 12:13:23 np0005464891 laughing_goodall[74054]: 
Oct  1 12:13:23 np0005464891 laughing_goodall[74054]: [global]
Oct  1 12:13:23 np0005464891 laughing_goodall[74054]: 	fsid = 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5
Oct  1 12:13:23 np0005464891 laughing_goodall[74054]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Oct  1 12:13:23 np0005464891 laughing_goodall[74054]: 	osd_crush_chooseleaf_type = 0
Oct  1 12:13:23 np0005464891 systemd[1]: libpod-bd3199900be84d860015278ca03b6e951841d6d048fe712cb3fd71bc6b067916.scope: Deactivated successfully.
Oct  1 12:13:23 np0005464891 conmon[74054]: conmon bd3199900be84d860015 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd3199900be84d860015278ca03b6e951841d6d048fe712cb3fd71bc6b067916.scope/container/memory.events
Oct  1 12:13:24 np0005464891 podman[74080]: 2025-10-01 16:13:24.043356399 +0000 UTC m=+0.029354513 container died bd3199900be84d860015278ca03b6e951841d6d048fe712cb3fd71bc6b067916 (image=quay.io/ceph/ceph:v18, name=laughing_goodall, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 12:13:24 np0005464891 systemd[1]: var-lib-containers-storage-overlay-9ae6ce3ac68e1186c51e2abc6c563999a6bf16ae52ef7a40a4eed55fa8156fc3-merged.mount: Deactivated successfully.
Oct  1 12:13:24 np0005464891 podman[74080]: 2025-10-01 16:13:24.098845102 +0000 UTC m=+0.084843136 container remove bd3199900be84d860015278ca03b6e951841d6d048fe712cb3fd71bc6b067916 (image=quay.io/ceph/ceph:v18, name=laughing_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:13:24 np0005464891 systemd[1]: libpod-conmon-bd3199900be84d860015278ca03b6e951841d6d048fe712cb3fd71bc6b067916.scope: Deactivated successfully.
Oct  1 12:13:24 np0005464891 podman[74095]: 2025-10-01 16:13:24.19417288 +0000 UTC m=+0.055643047 container create ecc239d6eae8640cf05cb5a594a8c7a79b01600e5e0ab1cc56293ce5d4d434a1 (image=quay.io/ceph/ceph:v18, name=sad_benz, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 12:13:24 np0005464891 systemd[1]: Started libpod-conmon-ecc239d6eae8640cf05cb5a594a8c7a79b01600e5e0ab1cc56293ce5d4d434a1.scope.
Oct  1 12:13:24 np0005464891 podman[74095]: 2025-10-01 16:13:24.166597727 +0000 UTC m=+0.028067964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:24 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:24 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c2e5b55b3177efbce4f349637d8676ac034ff77d98ce0d7b597c06c458a1d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:24 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c2e5b55b3177efbce4f349637d8676ac034ff77d98ce0d7b597c06c458a1d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:24 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c2e5b55b3177efbce4f349637d8676ac034ff77d98ce0d7b597c06c458a1d0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:24 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c2e5b55b3177efbce4f349637d8676ac034ff77d98ce0d7b597c06c458a1d0/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:24 np0005464891 podman[74095]: 2025-10-01 16:13:24.289766964 +0000 UTC m=+0.151237181 container init ecc239d6eae8640cf05cb5a594a8c7a79b01600e5e0ab1cc56293ce5d4d434a1 (image=quay.io/ceph/ceph:v18, name=sad_benz, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 12:13:24 np0005464891 podman[74095]: 2025-10-01 16:13:24.301218809 +0000 UTC m=+0.162688996 container start ecc239d6eae8640cf05cb5a594a8c7a79b01600e5e0ab1cc56293ce5d4d434a1 (image=quay.io/ceph/ceph:v18, name=sad_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  1 12:13:24 np0005464891 podman[74095]: 2025-10-01 16:13:24.30533099 +0000 UTC m=+0.166801247 container attach ecc239d6eae8640cf05cb5a594a8c7a79b01600e5e0ab1cc56293ce5d4d434a1 (image=quay.io/ceph/ceph:v18, name=sad_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:13:24 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:13:24 np0005464891 ceph-mon[73945]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3939176696' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:13:24 np0005464891 systemd[1]: libpod-ecc239d6eae8640cf05cb5a594a8c7a79b01600e5e0ab1cc56293ce5d4d434a1.scope: Deactivated successfully.
Oct  1 12:13:24 np0005464891 podman[74095]: 2025-10-01 16:13:24.71442507 +0000 UTC m=+0.575895317 container died ecc239d6eae8640cf05cb5a594a8c7a79b01600e5e0ab1cc56293ce5d4d434a1 (image=quay.io/ceph/ceph:v18, name=sad_benz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:13:24 np0005464891 systemd[1]: var-lib-containers-storage-overlay-99c2e5b55b3177efbce4f349637d8676ac034ff77d98ce0d7b597c06c458a1d0-merged.mount: Deactivated successfully.
Oct  1 12:13:24 np0005464891 podman[74095]: 2025-10-01 16:13:24.754337567 +0000 UTC m=+0.615807714 container remove ecc239d6eae8640cf05cb5a594a8c7a79b01600e5e0ab1cc56293ce5d4d434a1 (image=quay.io/ceph/ceph:v18, name=sad_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:13:24 np0005464891 systemd[1]: libpod-conmon-ecc239d6eae8640cf05cb5a594a8c7a79b01600e5e0ab1cc56293ce5d4d434a1.scope: Deactivated successfully.
Oct  1 12:13:24 np0005464891 systemd[1]: Stopping Ceph mon.compute-0 for 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5...
Oct  1 12:13:24 np0005464891 ceph-mon[73945]: from='client.? 192.168.122.100:0/3796464522' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  1 12:13:24 np0005464891 ceph-mon[73945]: from='client.? 192.168.122.100:0/3796464522' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  1 12:13:24 np0005464891 ceph-mon[73945]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct  1 12:13:24 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct  1 12:13:24 np0005464891 ceph-mon[73945]: mon.compute-0@0(leader) e1 shutdown
Oct  1 12:13:24 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0[73941]: 2025-10-01T16:13:24.963+0000 7f7205aec640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct  1 12:13:24 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0[73941]: 2025-10-01T16:13:24.963+0000 7f7205aec640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct  1 12:13:24 np0005464891 ceph-mon[73945]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  1 12:13:24 np0005464891 ceph-mon[73945]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  1 12:13:25 np0005464891 podman[74183]: 2025-10-01 16:13:25.097318658 +0000 UTC m=+0.168575357 container died 8a2a2365489553be4f9208242545a4c1eb690ca7f0e2859817fb3449ddafb9dd (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:13:25 np0005464891 systemd[1]: var-lib-containers-storage-overlay-3b2dbe3e648be96e4a6a13a45129440385aa8d2e875b40bce12f5eace796c0e5-merged.mount: Deactivated successfully.
Oct  1 12:13:25 np0005464891 podman[74183]: 2025-10-01 16:13:25.144657559 +0000 UTC m=+0.215914268 container remove 8a2a2365489553be4f9208242545a4c1eb690ca7f0e2859817fb3449ddafb9dd (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  1 12:13:25 np0005464891 bash[74183]: ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0
Oct  1 12:13:25 np0005464891 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 12:13:25 np0005464891 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 12:13:25 np0005464891 systemd[1]: ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5@mon.compute-0.service: Deactivated successfully.
Oct  1 12:13:25 np0005464891 systemd[1]: Stopped Ceph mon.compute-0 for 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5.
Oct  1 12:13:25 np0005464891 systemd[1]: ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5@mon.compute-0.service: Consumed 1.048s CPU time.
Oct  1 12:13:25 np0005464891 systemd[1]: Starting Ceph mon.compute-0 for 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5...
Oct  1 12:13:25 np0005464891 podman[74283]: 2025-10-01 16:13:25.551432667 +0000 UTC m=+0.054987303 container create 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 12:13:25 np0005464891 podman[74283]: 2025-10-01 16:13:25.523849184 +0000 UTC m=+0.027403870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:25 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ada1ace435f9d8acb51a5d309a5ba284578807a141847c85c8786d228c31a78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:25 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ada1ace435f9d8acb51a5d309a5ba284578807a141847c85c8786d228c31a78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:25 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ada1ace435f9d8acb51a5d309a5ba284578807a141847c85c8786d228c31a78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:25 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ada1ace435f9d8acb51a5d309a5ba284578807a141847c85c8786d228c31a78/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:25 np0005464891 podman[74283]: 2025-10-01 16:13:25.642573202 +0000 UTC m=+0.146127818 container init 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:13:25 np0005464891 podman[74283]: 2025-10-01 16:13:25.651441969 +0000 UTC m=+0.154996565 container start 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:13:25 np0005464891 bash[74283]: 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e
Oct  1 12:13:25 np0005464891 systemd[1]: Started Ceph mon.compute-0 for 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5.
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: set uid:gid to 167:167 (ceph:ceph)
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: pidfile_write: ignore empty --pid-file
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: load: jerasure load: lrc 
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: RocksDB version: 7.9.2
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: Git sha 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: DB SUMMARY
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: DB Session ID:  49L36WBKX0OR9VW6SLLI
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: CURRENT file:  CURRENT
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: IDENTITY file:  IDENTITY
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55680 ; 
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                         Options.error_if_exists: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                       Options.create_if_missing: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                         Options.paranoid_checks: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                                     Options.env: 0x55bddafcfc40
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                                      Options.fs: PosixFileSystem
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                                Options.info_log: 0x55bddc59d040
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                Options.max_file_opening_threads: 16
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                              Options.statistics: (nil)
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                               Options.use_fsync: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                       Options.max_log_file_size: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                         Options.allow_fallocate: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                        Options.use_direct_reads: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:          Options.create_missing_column_families: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                              Options.db_log_dir: 
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                                 Options.wal_dir: 
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                   Options.advise_random_on_open: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                    Options.write_buffer_manager: 0x55bddc5acb40
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                            Options.rate_limiter: (nil)
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                  Options.unordered_write: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                               Options.row_cache: None
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                              Options.wal_filter: None
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:             Options.allow_ingest_behind: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:             Options.two_write_queues: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:             Options.manual_wal_flush: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:             Options.wal_compression: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:             Options.atomic_flush: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                 Options.log_readahead_size: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:             Options.allow_data_in_errors: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:             Options.db_host_id: __hostname__
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:             Options.max_background_jobs: 2
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:             Options.max_background_compactions: -1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:             Options.max_subcompactions: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:             Options.max_total_wal_size: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                          Options.max_open_files: -1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                          Options.bytes_per_sync: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:       Options.compaction_readahead_size: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                  Options.max_background_flushes: -1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: Compression algorithms supported:
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: #011kZSTD supported: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: #011kXpressCompression supported: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: #011kBZip2Compression supported: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: #011kLZ4Compression supported: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: #011kZlibCompression supported: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: #011kSnappyCompression supported: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:           Options.merge_operator: 
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:        Options.compaction_filter: None
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bddc59cc40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55bddc5951f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:        Options.write_buffer_size: 33554432
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:  Options.max_write_buffer_number: 2
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:          Options.compression: NoCompression
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:             Options.num_levels: 7
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4cdc7836-3ae4-40a3-8b66-898644585cc0
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335205698369, "job": 1, "event": "recovery_started", "wal_files": [9]}
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335205702610, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 55261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 138, "table_properties": {"data_size": 53801, "index_size": 166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3050, "raw_average_key_size": 30, "raw_value_size": 51390, "raw_average_value_size": 508, "num_data_blocks": 9, "num_entries": 101, "num_filter_entries": 101, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335205, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335205702721, "job": 1, "event": "recovery_finished"}
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55bddc5bee00
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: DB pointer 0x55bddc648000
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   55.86 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.6      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0   55.86 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.6      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.6      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.6      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 3.73 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 3.73 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55bddc5951f0#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: mon.compute-0@-1(???) e1 preinit fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: mon.compute-0@-1(???).mds e1 new map
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: mon.compute-0@-1(???).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(probing) e1 win_standalone_election
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Oct  1 12:13:25 np0005464891 podman[74304]: 2025-10-01 16:13:25.721809943 +0000 UTC m=+0.040937071 container create 0c9cf453c9576611264ad181bd5cb134a70d6b372b98a4dfd404d3826fa458f4 (image=quay.io/ceph/ceph:v18, name=strange_albattani, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : fsmap 
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct  1 12:13:25 np0005464891 systemd[1]: Started libpod-conmon-0c9cf453c9576611264ad181bd5cb134a70d6b372b98a4dfd404d3826fa458f4.scope.
Oct  1 12:13:25 np0005464891 ceph-mon[74303]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  1 12:13:25 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:25 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2375a2ff26cc5ed948475c2e96e2b0b4a627de4d1775649eff9b376292f0fb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:25 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2375a2ff26cc5ed948475c2e96e2b0b4a627de4d1775649eff9b376292f0fb2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:25 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2375a2ff26cc5ed948475c2e96e2b0b4a627de4d1775649eff9b376292f0fb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:25 np0005464891 podman[74304]: 2025-10-01 16:13:25.704413766 +0000 UTC m=+0.023540904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:25 np0005464891 podman[74304]: 2025-10-01 16:13:25.814545023 +0000 UTC m=+0.133672181 container init 0c9cf453c9576611264ad181bd5cb134a70d6b372b98a4dfd404d3826fa458f4 (image=quay.io/ceph/ceph:v18, name=strange_albattani, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:13:25 np0005464891 podman[74304]: 2025-10-01 16:13:25.823710647 +0000 UTC m=+0.142837795 container start 0c9cf453c9576611264ad181bd5cb134a70d6b372b98a4dfd404d3826fa458f4 (image=quay.io/ceph/ceph:v18, name=strange_albattani, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Oct  1 12:13:25 np0005464891 podman[74304]: 2025-10-01 16:13:25.82699602 +0000 UTC m=+0.146123168 container attach 0c9cf453c9576611264ad181bd5cb134a70d6b372b98a4dfd404d3826fa458f4 (image=quay.io/ceph/ceph:v18, name=strange_albattani, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 12:13:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Oct  1 12:13:26 np0005464891 systemd[1]: libpod-0c9cf453c9576611264ad181bd5cb134a70d6b372b98a4dfd404d3826fa458f4.scope: Deactivated successfully.
Oct  1 12:13:26 np0005464891 podman[74304]: 2025-10-01 16:13:26.256236937 +0000 UTC m=+0.575364065 container died 0c9cf453c9576611264ad181bd5cb134a70d6b372b98a4dfd404d3826fa458f4 (image=quay.io/ceph/ceph:v18, name=strange_albattani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 12:13:26 np0005464891 systemd[1]: var-lib-containers-storage-overlay-e2375a2ff26cc5ed948475c2e96e2b0b4a627de4d1775649eff9b376292f0fb2-merged.mount: Deactivated successfully.
Oct  1 12:13:26 np0005464891 podman[74304]: 2025-10-01 16:13:26.294880505 +0000 UTC m=+0.614007633 container remove 0c9cf453c9576611264ad181bd5cb134a70d6b372b98a4dfd404d3826fa458f4 (image=quay.io/ceph/ceph:v18, name=strange_albattani, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 12:13:26 np0005464891 systemd[1]: libpod-conmon-0c9cf453c9576611264ad181bd5cb134a70d6b372b98a4dfd404d3826fa458f4.scope: Deactivated successfully.
Oct  1 12:13:26 np0005464891 podman[74397]: 2025-10-01 16:13:26.387305009 +0000 UTC m=+0.067368078 container create 8eea783ea42efae486362f14bb6abe15072e6e03cd2186876ebbb75d37639335 (image=quay.io/ceph/ceph:v18, name=beautiful_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 12:13:26 np0005464891 systemd[1]: Started libpod-conmon-8eea783ea42efae486362f14bb6abe15072e6e03cd2186876ebbb75d37639335.scope.
Oct  1 12:13:26 np0005464891 podman[74397]: 2025-10-01 16:13:26.347818302 +0000 UTC m=+0.027881381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:26 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe2fe1bd52a848c91760d7de24a2aaa291e49d1200c4dc3dbf524ff7b67773e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe2fe1bd52a848c91760d7de24a2aaa291e49d1200c4dc3dbf524ff7b67773e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe2fe1bd52a848c91760d7de24a2aaa291e49d1200c4dc3dbf524ff7b67773e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:26 np0005464891 podman[74397]: 2025-10-01 16:13:26.48767818 +0000 UTC m=+0.167741259 container init 8eea783ea42efae486362f14bb6abe15072e6e03cd2186876ebbb75d37639335 (image=quay.io/ceph/ceph:v18, name=beautiful_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:13:26 np0005464891 podman[74397]: 2025-10-01 16:13:26.494163073 +0000 UTC m=+0.174226132 container start 8eea783ea42efae486362f14bb6abe15072e6e03cd2186876ebbb75d37639335 (image=quay.io/ceph/ceph:v18, name=beautiful_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:13:26 np0005464891 podman[74397]: 2025-10-01 16:13:26.515034328 +0000 UTC m=+0.195097427 container attach 8eea783ea42efae486362f14bb6abe15072e6e03cd2186876ebbb75d37639335 (image=quay.io/ceph/ceph:v18, name=beautiful_hugle, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  1 12:13:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Oct  1 12:13:26 np0005464891 systemd[1]: libpod-8eea783ea42efae486362f14bb6abe15072e6e03cd2186876ebbb75d37639335.scope: Deactivated successfully.
Oct  1 12:13:26 np0005464891 podman[74397]: 2025-10-01 16:13:26.960814112 +0000 UTC m=+0.640877161 container died 8eea783ea42efae486362f14bb6abe15072e6e03cd2186876ebbb75d37639335 (image=quay.io/ceph/ceph:v18, name=beautiful_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  1 12:13:26 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ffe2fe1bd52a848c91760d7de24a2aaa291e49d1200c4dc3dbf524ff7b67773e-merged.mount: Deactivated successfully.
Oct  1 12:13:27 np0005464891 podman[74397]: 2025-10-01 16:13:27.009369901 +0000 UTC m=+0.689432960 container remove 8eea783ea42efae486362f14bb6abe15072e6e03cd2186876ebbb75d37639335 (image=quay.io/ceph/ceph:v18, name=beautiful_hugle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:13:27 np0005464891 systemd[1]: libpod-conmon-8eea783ea42efae486362f14bb6abe15072e6e03cd2186876ebbb75d37639335.scope: Deactivated successfully.
Oct  1 12:13:27 np0005464891 systemd[1]: Reloading.
Oct  1 12:13:27 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:13:27 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:13:27 np0005464891 systemd[1]: Reloading.
Oct  1 12:13:27 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:13:27 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:13:27 np0005464891 systemd[1]: Starting Ceph mgr.compute-0.ieawdb for 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5...
Oct  1 12:13:27 np0005464891 podman[74572]: 2025-10-01 16:13:27.870716589 +0000 UTC m=+0.041507123 container create fe2a13ced320d5d3e5477de786fa64cd1cda06744b9bb9e3f04d86ee049e3467 (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  1 12:13:27 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d605be7e1096667150cf22e64ddab8bd7784bfa83374c6a414d656763508db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:27 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d605be7e1096667150cf22e64ddab8bd7784bfa83374c6a414d656763508db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:27 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d605be7e1096667150cf22e64ddab8bd7784bfa83374c6a414d656763508db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:27 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d605be7e1096667150cf22e64ddab8bd7784bfa83374c6a414d656763508db/merged/var/lib/ceph/mgr/ceph-compute-0.ieawdb supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:27 np0005464891 podman[74572]: 2025-10-01 16:13:27.927629994 +0000 UTC m=+0.098420588 container init fe2a13ced320d5d3e5477de786fa64cd1cda06744b9bb9e3f04d86ee049e3467 (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:13:27 np0005464891 podman[74572]: 2025-10-01 16:13:27.934798273 +0000 UTC m=+0.105588827 container start fe2a13ced320d5d3e5477de786fa64cd1cda06744b9bb9e3f04d86ee049e3467 (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:13:27 np0005464891 bash[74572]: fe2a13ced320d5d3e5477de786fa64cd1cda06744b9bb9e3f04d86ee049e3467
Oct  1 12:13:27 np0005464891 podman[74572]: 2025-10-01 16:13:27.85365308 +0000 UTC m=+0.024443644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:27 np0005464891 systemd[1]: Started Ceph mgr.compute-0.ieawdb for 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5.
Oct  1 12:13:28 np0005464891 ceph-mgr[74592]: set uid:gid to 167:167 (ceph:ceph)
Oct  1 12:13:28 np0005464891 ceph-mgr[74592]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct  1 12:13:28 np0005464891 ceph-mgr[74592]: pidfile_write: ignore empty --pid-file
Oct  1 12:13:28 np0005464891 podman[74593]: 2025-10-01 16:13:28.022226566 +0000 UTC m=+0.048469279 container create e408ce4f804ef46fb3a649d68bca96d12eb1a589805a2c92966314e7f60d0e5c (image=quay.io/ceph/ceph:v18, name=cool_khayyam, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 12:13:28 np0005464891 systemd[1]: Started libpod-conmon-e408ce4f804ef46fb3a649d68bca96d12eb1a589805a2c92966314e7f60d0e5c.scope.
Oct  1 12:13:28 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:28 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5415e5b0a5b9a90f92fb54a7b12b73da890b64ef8816f7f0a4e5743f9ddea5b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:28 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5415e5b0a5b9a90f92fb54a7b12b73da890b64ef8816f7f0a4e5743f9ddea5b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:28 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5415e5b0a5b9a90f92fb54a7b12b73da890b64ef8816f7f0a4e5743f9ddea5b5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:28 np0005464891 podman[74593]: 2025-10-01 16:13:27.99990072 +0000 UTC m=+0.026143523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:28 np0005464891 podman[74593]: 2025-10-01 16:13:28.102924249 +0000 UTC m=+0.129166982 container init e408ce4f804ef46fb3a649d68bca96d12eb1a589805a2c92966314e7f60d0e5c (image=quay.io/ceph/ceph:v18, name=cool_khayyam, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 12:13:28 np0005464891 podman[74593]: 2025-10-01 16:13:28.112751907 +0000 UTC m=+0.138994650 container start e408ce4f804ef46fb3a649d68bca96d12eb1a589805a2c92966314e7f60d0e5c (image=quay.io/ceph/ceph:v18, name=cool_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 12:13:28 np0005464891 podman[74593]: 2025-10-01 16:13:28.129682493 +0000 UTC m=+0.155925206 container attach e408ce4f804ef46fb3a649d68bca96d12eb1a589805a2c92966314e7f60d0e5c (image=quay.io/ceph/ceph:v18, name=cool_khayyam, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 12:13:28 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'alerts'
Oct  1 12:13:28 np0005464891 ceph-mgr[74592]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  1 12:13:28 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'balancer'
Oct  1 12:13:28 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:28.508+0000 7fdd61c77140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  1 12:13:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  1 12:13:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4019599254' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]: 
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]: {
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:    "fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:    "health": {
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "status": "HEALTH_OK",
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "checks": {},
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "mutes": []
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:    },
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:    "election_epoch": 5,
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:    "quorum": [
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        0
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:    ],
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:    "quorum_names": [
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "compute-0"
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:    ],
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:    "quorum_age": 2,
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:    "monmap": {
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "epoch": 1,
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "min_mon_release_name": "reef",
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "num_mons": 1
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:    },
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:    "osdmap": {
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "epoch": 1,
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "num_osds": 0,
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "num_up_osds": 0,
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "osd_up_since": 0,
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "num_in_osds": 0,
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "osd_in_since": 0,
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "num_remapped_pgs": 0
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:    },
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:    "pgmap": {
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "pgs_by_state": [],
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "num_pgs": 0,
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "num_pools": 0,
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "num_objects": 0,
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "data_bytes": 0,
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "bytes_used": 0,
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "bytes_avail": 0,
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "bytes_total": 0
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:    },
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:    "fsmap": {
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "epoch": 1,
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "by_rank": [],
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "up:standby": 0
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:    },
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:    "mgrmap": {
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "available": false,
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "num_standbys": 0,
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "modules": [
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:            "iostat",
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:            "nfs",
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:            "restful"
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        ],
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "services": {}
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:    },
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:    "servicemap": {
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "epoch": 1,
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "modified": "2025-10-01T16:13:22.803916+0000",
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:        "services": {}
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:    },
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]:    "progress_events": {}
Oct  1 12:13:28 np0005464891 cool_khayyam[74634]: }
Oct  1 12:13:28 np0005464891 systemd[1]: libpod-e408ce4f804ef46fb3a649d68bca96d12eb1a589805a2c92966314e7f60d0e5c.scope: Deactivated successfully.
Oct  1 12:13:28 np0005464891 conmon[74634]: conmon e408ce4f804ef46fb3a6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e408ce4f804ef46fb3a649d68bca96d12eb1a589805a2c92966314e7f60d0e5c.scope/container/memory.events
Oct  1 12:13:28 np0005464891 podman[74593]: 2025-10-01 16:13:28.546895253 +0000 UTC m=+0.573138006 container died e408ce4f804ef46fb3a649d68bca96d12eb1a589805a2c92966314e7f60d0e5c (image=quay.io/ceph/ceph:v18, name=cool_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:13:28 np0005464891 systemd[1]: var-lib-containers-storage-overlay-5415e5b0a5b9a90f92fb54a7b12b73da890b64ef8816f7f0a4e5743f9ddea5b5-merged.mount: Deactivated successfully.
Oct  1 12:13:28 np0005464891 podman[74593]: 2025-10-01 16:13:28.601890135 +0000 UTC m=+0.628132888 container remove e408ce4f804ef46fb3a649d68bca96d12eb1a589805a2c92966314e7f60d0e5c (image=quay.io/ceph/ceph:v18, name=cool_khayyam, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:13:28 np0005464891 systemd[1]: libpod-conmon-e408ce4f804ef46fb3a649d68bca96d12eb1a589805a2c92966314e7f60d0e5c.scope: Deactivated successfully.
Oct  1 12:13:28 np0005464891 ceph-mgr[74592]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  1 12:13:28 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'cephadm'
Oct  1 12:13:28 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:28.801+0000 7fdd61c77140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  1 12:13:30 np0005464891 podman[74683]: 2025-10-01 16:13:30.683198119 +0000 UTC m=+0.048103920 container create ba72b9258cd9fafd76a67c55d2421bd241f9534c912a3e1ab5e3ae60207ebd4d (image=quay.io/ceph/ceph:v18, name=strange_khorana, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  1 12:13:30 np0005464891 systemd[1]: Started libpod-conmon-ba72b9258cd9fafd76a67c55d2421bd241f9534c912a3e1ab5e3ae60207ebd4d.scope.
Oct  1 12:13:30 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:30 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75e2fa33ab3a9c51d8c8780af5a0b6c54f073e10d59607298e7974b799ffcb9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:30 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75e2fa33ab3a9c51d8c8780af5a0b6c54f073e10d59607298e7974b799ffcb9a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:30 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75e2fa33ab3a9c51d8c8780af5a0b6c54f073e10d59607298e7974b799ffcb9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:30 np0005464891 podman[74683]: 2025-10-01 16:13:30.659794829 +0000 UTC m=+0.024700660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:30 np0005464891 podman[74683]: 2025-10-01 16:13:30.771561182 +0000 UTC m=+0.136467003 container init ba72b9258cd9fafd76a67c55d2421bd241f9534c912a3e1ab5e3ae60207ebd4d (image=quay.io/ceph/ceph:v18, name=strange_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:13:30 np0005464891 podman[74683]: 2025-10-01 16:13:30.782612658 +0000 UTC m=+0.147518469 container start ba72b9258cd9fafd76a67c55d2421bd241f9534c912a3e1ab5e3ae60207ebd4d (image=quay.io/ceph/ceph:v18, name=strange_khorana, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:13:30 np0005464891 podman[74683]: 2025-10-01 16:13:30.785830189 +0000 UTC m=+0.150736060 container attach ba72b9258cd9fafd76a67c55d2421bd241f9534c912a3e1ab5e3ae60207ebd4d (image=quay.io/ceph/ceph:v18, name=strange_khorana, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 12:13:31 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'crash'
Oct  1 12:13:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  1 12:13:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2736532625' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  1 12:13:31 np0005464891 strange_khorana[74700]: 
Oct  1 12:13:31 np0005464891 strange_khorana[74700]: {
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:    "fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:    "health": {
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "status": "HEALTH_OK",
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "checks": {},
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "mutes": []
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:    },
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:    "election_epoch": 5,
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:    "quorum": [
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        0
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:    ],
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:    "quorum_names": [
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "compute-0"
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:    ],
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:    "quorum_age": 5,
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:    "monmap": {
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "epoch": 1,
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "min_mon_release_name": "reef",
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "num_mons": 1
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:    },
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:    "osdmap": {
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "epoch": 1,
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "num_osds": 0,
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "num_up_osds": 0,
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "osd_up_since": 0,
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "num_in_osds": 0,
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "osd_in_since": 0,
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "num_remapped_pgs": 0
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:    },
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:    "pgmap": {
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "pgs_by_state": [],
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "num_pgs": 0,
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "num_pools": 0,
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "num_objects": 0,
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "data_bytes": 0,
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "bytes_used": 0,
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "bytes_avail": 0,
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "bytes_total": 0
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:    },
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:    "fsmap": {
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "epoch": 1,
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "by_rank": [],
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "up:standby": 0
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:    },
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:    "mgrmap": {
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "available": false,
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "num_standbys": 0,
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "modules": [
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:            "iostat",
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:            "nfs",
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:            "restful"
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        ],
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "services": {}
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:    },
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:    "servicemap": {
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "epoch": 1,
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "modified": "2025-10-01T16:13:22.803916+0000",
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:        "services": {}
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:    },
Oct  1 12:13:31 np0005464891 strange_khorana[74700]:    "progress_events": {}
Oct  1 12:13:31 np0005464891 strange_khorana[74700]: }
Oct  1 12:13:31 np0005464891 systemd[1]: libpod-ba72b9258cd9fafd76a67c55d2421bd241f9534c912a3e1ab5e3ae60207ebd4d.scope: Deactivated successfully.
Oct  1 12:13:31 np0005464891 podman[74726]: 2025-10-01 16:13:31.246391063 +0000 UTC m=+0.023181927 container died ba72b9258cd9fafd76a67c55d2421bd241f9534c912a3e1ab5e3ae60207ebd4d (image=quay.io/ceph/ceph:v18, name=strange_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  1 12:13:31 np0005464891 systemd[1]: var-lib-containers-storage-overlay-75e2fa33ab3a9c51d8c8780af5a0b6c54f073e10d59607298e7974b799ffcb9a-merged.mount: Deactivated successfully.
Oct  1 12:13:31 np0005464891 podman[74726]: 2025-10-01 16:13:31.28452489 +0000 UTC m=+0.061315734 container remove ba72b9258cd9fafd76a67c55d2421bd241f9534c912a3e1ab5e3ae60207ebd4d (image=quay.io/ceph/ceph:v18, name=strange_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:13:31 np0005464891 systemd[1]: libpod-conmon-ba72b9258cd9fafd76a67c55d2421bd241f9534c912a3e1ab5e3ae60207ebd4d.scope: Deactivated successfully.
Oct  1 12:13:31 np0005464891 ceph-mgr[74592]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  1 12:13:31 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'dashboard'
Oct  1 12:13:31 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:31.391+0000 7fdd61c77140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  1 12:13:32 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'devicehealth'
Oct  1 12:13:33 np0005464891 ceph-mgr[74592]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  1 12:13:33 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'diskprediction_local'
Oct  1 12:13:33 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:33.132+0000 7fdd61c77140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  1 12:13:33 np0005464891 podman[74742]: 2025-10-01 16:13:33.373909754 +0000 UTC m=+0.050397101 container create 7f4cd53e7ad5f1fa7861c057d834886feb28fad259423f9cd4606275353549db (image=quay.io/ceph/ceph:v18, name=dazzling_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:13:33 np0005464891 systemd[1]: Started libpod-conmon-7f4cd53e7ad5f1fa7861c057d834886feb28fad259423f9cd4606275353549db.scope.
Oct  1 12:13:33 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:33 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1f078b003317e095287a84c3f1d62d6578b76dc35bd2c5ec2ca276a62cb9f7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:33 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1f078b003317e095287a84c3f1d62d6578b76dc35bd2c5ec2ca276a62cb9f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:33 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1f078b003317e095287a84c3f1d62d6578b76dc35bd2c5ec2ca276a62cb9f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:33 np0005464891 podman[74742]: 2025-10-01 16:13:33.347051038 +0000 UTC m=+0.023538375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:33 np0005464891 podman[74742]: 2025-10-01 16:13:33.466833998 +0000 UTC m=+0.143321445 container init 7f4cd53e7ad5f1fa7861c057d834886feb28fad259423f9cd4606275353549db (image=quay.io/ceph/ceph:v18, name=dazzling_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:13:33 np0005464891 podman[74742]: 2025-10-01 16:13:33.471746428 +0000 UTC m=+0.148233775 container start 7f4cd53e7ad5f1fa7861c057d834886feb28fad259423f9cd4606275353549db (image=quay.io/ceph/ceph:v18, name=dazzling_hertz, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:13:33 np0005464891 podman[74742]: 2025-10-01 16:13:33.476046974 +0000 UTC m=+0.152534321 container attach 7f4cd53e7ad5f1fa7861c057d834886feb28fad259423f9cd4606275353549db (image=quay.io/ceph/ceph:v18, name=dazzling_hertz, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:13:33 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  1 12:13:33 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  1 12:13:33 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]:  from numpy import show_config as show_numpy_config
Oct  1 12:13:33 np0005464891 ceph-mgr[74592]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  1 12:13:33 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'influx'
Oct  1 12:13:33 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:33.672+0000 7fdd61c77140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  1 12:13:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  1 12:13:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3385209632' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]: 
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]: {
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:    "fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:    "health": {
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "status": "HEALTH_OK",
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "checks": {},
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "mutes": []
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:    },
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:    "election_epoch": 5,
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:    "quorum": [
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        0
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:    ],
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:    "quorum_names": [
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "compute-0"
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:    ],
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:    "quorum_age": 8,
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:    "monmap": {
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "epoch": 1,
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "min_mon_release_name": "reef",
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "num_mons": 1
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:    },
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:    "osdmap": {
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "epoch": 1,
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "num_osds": 0,
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "num_up_osds": 0,
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "osd_up_since": 0,
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "num_in_osds": 0,
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "osd_in_since": 0,
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "num_remapped_pgs": 0
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:    },
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:    "pgmap": {
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "pgs_by_state": [],
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "num_pgs": 0,
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "num_pools": 0,
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "num_objects": 0,
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "data_bytes": 0,
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "bytes_used": 0,
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "bytes_avail": 0,
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "bytes_total": 0
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:    },
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:    "fsmap": {
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "epoch": 1,
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "by_rank": [],
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "up:standby": 0
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:    },
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:    "mgrmap": {
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "available": false,
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "num_standbys": 0,
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "modules": [
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:            "iostat",
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:            "nfs",
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:            "restful"
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        ],
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "services": {}
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:    },
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:    "servicemap": {
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "epoch": 1,
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "modified": "2025-10-01T16:13:22.803916+0000",
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:        "services": {}
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:    },
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]:    "progress_events": {}
Oct  1 12:13:33 np0005464891 dazzling_hertz[74760]: }
Oct  1 12:13:33 np0005464891 systemd[1]: libpod-7f4cd53e7ad5f1fa7861c057d834886feb28fad259423f9cd4606275353549db.scope: Deactivated successfully.
Oct  1 12:13:33 np0005464891 podman[74742]: 2025-10-01 16:13:33.906345345 +0000 UTC m=+0.582832662 container died 7f4cd53e7ad5f1fa7861c057d834886feb28fad259423f9cd4606275353549db (image=quay.io/ceph/ceph:v18, name=dazzling_hertz, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 12:13:33 np0005464891 systemd[1]: var-lib-containers-storage-overlay-9d1f078b003317e095287a84c3f1d62d6578b76dc35bd2c5ec2ca276a62cb9f7-merged.mount: Deactivated successfully.
Oct  1 12:13:33 np0005464891 ceph-mgr[74592]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  1 12:13:33 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'insights'
Oct  1 12:13:33 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:33.935+0000 7fdd61c77140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  1 12:13:33 np0005464891 podman[74742]: 2025-10-01 16:13:33.950991357 +0000 UTC m=+0.627478674 container remove 7f4cd53e7ad5f1fa7861c057d834886feb28fad259423f9cd4606275353549db (image=quay.io/ceph/ceph:v18, name=dazzling_hertz, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct  1 12:13:33 np0005464891 systemd[1]: libpod-conmon-7f4cd53e7ad5f1fa7861c057d834886feb28fad259423f9cd4606275353549db.scope: Deactivated successfully.
Oct  1 12:13:34 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'iostat'
Oct  1 12:13:34 np0005464891 ceph-mgr[74592]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  1 12:13:34 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'k8sevents'
Oct  1 12:13:34 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:34.400+0000 7fdd61c77140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  1 12:13:36 np0005464891 podman[74797]: 2025-10-01 16:13:36.024200041 +0000 UTC m=+0.050212866 container create 734149be2680337c4c21da6d9244935655e2e0ff9073c5b462179a78a782d08c (image=quay.io/ceph/ceph:v18, name=agitated_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:13:36 np0005464891 systemd[1]: Started libpod-conmon-734149be2680337c4c21da6d9244935655e2e0ff9073c5b462179a78a782d08c.scope.
Oct  1 12:13:36 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:36 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def1faf595b2c03772c6a452b034f70250c117289fdebbe085bce3788dd564eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:36 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def1faf595b2c03772c6a452b034f70250c117289fdebbe085bce3788dd564eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:36 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def1faf595b2c03772c6a452b034f70250c117289fdebbe085bce3788dd564eb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:36 np0005464891 podman[74797]: 2025-10-01 16:13:36.001549758 +0000 UTC m=+0.027562613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:36 np0005464891 podman[74797]: 2025-10-01 16:13:36.131028475 +0000 UTC m=+0.157041380 container init 734149be2680337c4c21da6d9244935655e2e0ff9073c5b462179a78a782d08c (image=quay.io/ceph/ceph:v18, name=agitated_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  1 12:13:36 np0005464891 podman[74797]: 2025-10-01 16:13:36.138737446 +0000 UTC m=+0.164750251 container start 734149be2680337c4c21da6d9244935655e2e0ff9073c5b462179a78a782d08c (image=quay.io/ceph/ceph:v18, name=agitated_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 12:13:36 np0005464891 podman[74797]: 2025-10-01 16:13:36.141444307 +0000 UTC m=+0.167457112 container attach 734149be2680337c4c21da6d9244935655e2e0ff9073c5b462179a78a782d08c (image=quay.io/ceph/ceph:v18, name=agitated_brattain, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 12:13:36 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'localpool'
Oct  1 12:13:36 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'mds_autoscaler'
Oct  1 12:13:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  1 12:13:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2789159136' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]: 
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]: {
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:    "fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:    "health": {
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "status": "HEALTH_OK",
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "checks": {},
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "mutes": []
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:    },
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:    "election_epoch": 5,
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:    "quorum": [
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        0
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:    ],
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:    "quorum_names": [
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "compute-0"
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:    ],
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:    "quorum_age": 10,
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:    "monmap": {
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "epoch": 1,
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "min_mon_release_name": "reef",
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "num_mons": 1
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:    },
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:    "osdmap": {
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "epoch": 1,
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "num_osds": 0,
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "num_up_osds": 0,
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "osd_up_since": 0,
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "num_in_osds": 0,
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "osd_in_since": 0,
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "num_remapped_pgs": 0
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:    },
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:    "pgmap": {
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "pgs_by_state": [],
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "num_pgs": 0,
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "num_pools": 0,
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "num_objects": 0,
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "data_bytes": 0,
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "bytes_used": 0,
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "bytes_avail": 0,
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "bytes_total": 0
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:    },
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:    "fsmap": {
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "epoch": 1,
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "by_rank": [],
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "up:standby": 0
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:    },
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:    "mgrmap": {
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "available": false,
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "num_standbys": 0,
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "modules": [
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:            "iostat",
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:            "nfs",
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:            "restful"
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        ],
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "services": {}
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:    },
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:    "servicemap": {
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "epoch": 1,
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "modified": "2025-10-01T16:13:22.803916+0000",
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:        "services": {}
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:    },
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]:    "progress_events": {}
Oct  1 12:13:36 np0005464891 agitated_brattain[74813]: }
Oct  1 12:13:36 np0005464891 systemd[1]: libpod-734149be2680337c4c21da6d9244935655e2e0ff9073c5b462179a78a782d08c.scope: Deactivated successfully.
Oct  1 12:13:36 np0005464891 podman[74797]: 2025-10-01 16:13:36.551885977 +0000 UTC m=+0.577898782 container died 734149be2680337c4c21da6d9244935655e2e0ff9073c5b462179a78a782d08c (image=quay.io/ceph/ceph:v18, name=agitated_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 12:13:36 np0005464891 systemd[1]: var-lib-containers-storage-overlay-def1faf595b2c03772c6a452b034f70250c117289fdebbe085bce3788dd564eb-merged.mount: Deactivated successfully.
Oct  1 12:13:36 np0005464891 podman[74797]: 2025-10-01 16:13:36.589713437 +0000 UTC m=+0.615726282 container remove 734149be2680337c4c21da6d9244935655e2e0ff9073c5b462179a78a782d08c (image=quay.io/ceph/ceph:v18, name=agitated_brattain, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 12:13:36 np0005464891 systemd[1]: libpod-conmon-734149be2680337c4c21da6d9244935655e2e0ff9073c5b462179a78a782d08c.scope: Deactivated successfully.
Oct  1 12:13:37 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'mirroring'
Oct  1 12:13:37 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'nfs'
Oct  1 12:13:38 np0005464891 ceph-mgr[74592]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  1 12:13:38 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'orchestrator'
Oct  1 12:13:38 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:38.056+0000 7fdd61c77140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  1 12:13:38 np0005464891 podman[74850]: 2025-10-01 16:13:38.668673359 +0000 UTC m=+0.048964589 container create 008a23948186f61c5744fcaa7df48b87b179a91f409deb12e726a11613d57274 (image=quay.io/ceph/ceph:v18, name=loving_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:13:38 np0005464891 systemd[1]: Started libpod-conmon-008a23948186f61c5744fcaa7df48b87b179a91f409deb12e726a11613d57274.scope.
Oct  1 12:13:38 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:38 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bf6e093bcc275e28e6452f3049af8f284c7fd16b29801e4aba7870b8d15f86/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:38 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bf6e093bcc275e28e6452f3049af8f284c7fd16b29801e4aba7870b8d15f86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:38 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bf6e093bcc275e28e6452f3049af8f284c7fd16b29801e4aba7870b8d15f86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:38 np0005464891 podman[74850]: 2025-10-01 16:13:38.739589415 +0000 UTC m=+0.119880565 container init 008a23948186f61c5744fcaa7df48b87b179a91f409deb12e726a11613d57274 (image=quay.io/ceph/ceph:v18, name=loving_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 12:13:38 np0005464891 podman[74850]: 2025-10-01 16:13:38.649679496 +0000 UTC m=+0.029970636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:38 np0005464891 podman[74850]: 2025-10-01 16:13:38.7461314 +0000 UTC m=+0.126422530 container start 008a23948186f61c5744fcaa7df48b87b179a91f409deb12e726a11613d57274 (image=quay.io/ceph/ceph:v18, name=loving_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:13:38 np0005464891 ceph-mgr[74592]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  1 12:13:38 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'osd_perf_query'
Oct  1 12:13:38 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:38.747+0000 7fdd61c77140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  1 12:13:38 np0005464891 podman[74850]: 2025-10-01 16:13:38.749680769 +0000 UTC m=+0.129971889 container attach 008a23948186f61c5744fcaa7df48b87b179a91f409deb12e726a11613d57274 (image=quay.io/ceph/ceph:v18, name=loving_noyce, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:13:39 np0005464891 ceph-mgr[74592]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  1 12:13:39 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'osd_support'
Oct  1 12:13:39 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:39.004+0000 7fdd61c77140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  1 12:13:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  1 12:13:39 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1460438384' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  1 12:13:39 np0005464891 loving_noyce[74866]: 
Oct  1 12:13:39 np0005464891 loving_noyce[74866]: {
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:    "fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:    "health": {
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "status": "HEALTH_OK",
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "checks": {},
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "mutes": []
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:    },
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:    "election_epoch": 5,
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:    "quorum": [
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        0
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:    ],
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:    "quorum_names": [
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "compute-0"
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:    ],
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:    "quorum_age": 13,
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:    "monmap": {
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "epoch": 1,
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "min_mon_release_name": "reef",
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "num_mons": 1
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:    },
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:    "osdmap": {
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "epoch": 1,
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "num_osds": 0,
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "num_up_osds": 0,
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "osd_up_since": 0,
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "num_in_osds": 0,
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "osd_in_since": 0,
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "num_remapped_pgs": 0
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:    },
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:    "pgmap": {
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "pgs_by_state": [],
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "num_pgs": 0,
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "num_pools": 0,
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "num_objects": 0,
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "data_bytes": 0,
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "bytes_used": 0,
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "bytes_avail": 0,
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "bytes_total": 0
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:    },
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:    "fsmap": {
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "epoch": 1,
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "by_rank": [],
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "up:standby": 0
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:    },
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:    "mgrmap": {
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "available": false,
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "num_standbys": 0,
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "modules": [
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:            "iostat",
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:            "nfs",
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:            "restful"
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        ],
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "services": {}
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:    },
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:    "servicemap": {
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "epoch": 1,
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "modified": "2025-10-01T16:13:22.803916+0000",
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:        "services": {}
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:    },
Oct  1 12:13:39 np0005464891 loving_noyce[74866]:    "progress_events": {}
Oct  1 12:13:39 np0005464891 loving_noyce[74866]: }
Oct  1 12:13:39 np0005464891 systemd[1]: libpod-008a23948186f61c5744fcaa7df48b87b179a91f409deb12e726a11613d57274.scope: Deactivated successfully.
Oct  1 12:13:39 np0005464891 podman[74850]: 2025-10-01 16:13:39.208422912 +0000 UTC m=+0.588714072 container died 008a23948186f61c5744fcaa7df48b87b179a91f409deb12e726a11613d57274 (image=quay.io/ceph/ceph:v18, name=loving_noyce, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:13:39 np0005464891 systemd[1]: var-lib-containers-storage-overlay-31bf6e093bcc275e28e6452f3049af8f284c7fd16b29801e4aba7870b8d15f86-merged.mount: Deactivated successfully.
Oct  1 12:13:39 np0005464891 ceph-mgr[74592]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  1 12:13:39 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'pg_autoscaler'
Oct  1 12:13:39 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:39.237+0000 7fdd61c77140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  1 12:13:39 np0005464891 podman[74850]: 2025-10-01 16:13:39.257304887 +0000 UTC m=+0.637596017 container remove 008a23948186f61c5744fcaa7df48b87b179a91f409deb12e726a11613d57274 (image=quay.io/ceph/ceph:v18, name=loving_noyce, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:13:39 np0005464891 systemd[1]: libpod-conmon-008a23948186f61c5744fcaa7df48b87b179a91f409deb12e726a11613d57274.scope: Deactivated successfully.
Oct  1 12:13:39 np0005464891 ceph-mgr[74592]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  1 12:13:39 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'progress'
Oct  1 12:13:39 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:39.550+0000 7fdd61c77140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  1 12:13:39 np0005464891 ceph-mgr[74592]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  1 12:13:39 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'prometheus'
Oct  1 12:13:39 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:39.783+0000 7fdd61c77140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  1 12:13:40 np0005464891 ceph-mgr[74592]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  1 12:13:40 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'rbd_support'
Oct  1 12:13:40 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:40.789+0000 7fdd61c77140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  1 12:13:41 np0005464891 ceph-mgr[74592]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  1 12:13:41 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'restful'
Oct  1 12:13:41 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:41.111+0000 7fdd61c77140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  1 12:13:41 np0005464891 podman[74904]: 2025-10-01 16:13:41.335417461 +0000 UTC m=+0.055453954 container create 1f7124d2e2819f16cc7b2a0dbd9bdef6f75e1418ee1d853bdbf71417f4e1371b (image=quay.io/ceph/ceph:v18, name=wonderful_gould, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:13:41 np0005464891 systemd[1]: Started libpod-conmon-1f7124d2e2819f16cc7b2a0dbd9bdef6f75e1418ee1d853bdbf71417f4e1371b.scope.
Oct  1 12:13:41 np0005464891 podman[74904]: 2025-10-01 16:13:41.306812475 +0000 UTC m=+0.026848998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:41 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:41 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7621f820f1e6b4253c4364f3c0b149346c87a613ddca87e6f75562ea5b1ee52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:41 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7621f820f1e6b4253c4364f3c0b149346c87a613ddca87e6f75562ea5b1ee52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:41 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7621f820f1e6b4253c4364f3c0b149346c87a613ddca87e6f75562ea5b1ee52/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:41 np0005464891 podman[74904]: 2025-10-01 16:13:41.441725103 +0000 UTC m=+0.161761566 container init 1f7124d2e2819f16cc7b2a0dbd9bdef6f75e1418ee1d853bdbf71417f4e1371b (image=quay.io/ceph/ceph:v18, name=wonderful_gould, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 12:13:41 np0005464891 podman[74904]: 2025-10-01 16:13:41.447543432 +0000 UTC m=+0.167579885 container start 1f7124d2e2819f16cc7b2a0dbd9bdef6f75e1418ee1d853bdbf71417f4e1371b (image=quay.io/ceph/ceph:v18, name=wonderful_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:13:41 np0005464891 podman[74904]: 2025-10-01 16:13:41.475110425 +0000 UTC m=+0.195146898 container attach 1f7124d2e2819f16cc7b2a0dbd9bdef6f75e1418ee1d853bdbf71417f4e1371b (image=quay.io/ceph/ceph:v18, name=wonderful_gould, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:13:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  1 12:13:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2544366021' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]: 
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]: {
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:    "fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:    "health": {
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "status": "HEALTH_OK",
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "checks": {},
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "mutes": []
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:    },
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:    "election_epoch": 5,
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:    "quorum": [
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        0
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:    ],
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:    "quorum_names": [
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "compute-0"
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:    ],
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:    "quorum_age": 16,
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:    "monmap": {
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "epoch": 1,
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "min_mon_release_name": "reef",
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "num_mons": 1
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:    },
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:    "osdmap": {
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "epoch": 1,
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "num_osds": 0,
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "num_up_osds": 0,
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "osd_up_since": 0,
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "num_in_osds": 0,
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "osd_in_since": 0,
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "num_remapped_pgs": 0
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:    },
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:    "pgmap": {
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "pgs_by_state": [],
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "num_pgs": 0,
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "num_pools": 0,
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "num_objects": 0,
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "data_bytes": 0,
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "bytes_used": 0,
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "bytes_avail": 0,
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "bytes_total": 0
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:    },
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:    "fsmap": {
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "epoch": 1,
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "by_rank": [],
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "up:standby": 0
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:    },
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:    "mgrmap": {
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "available": false,
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "num_standbys": 0,
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "modules": [
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:            "iostat",
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:            "nfs",
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:            "restful"
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        ],
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "services": {}
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:    },
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:    "servicemap": {
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "epoch": 1,
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "modified": "2025-10-01T16:13:22.803916+0000",
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:        "services": {}
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:    },
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]:    "progress_events": {}
Oct  1 12:13:41 np0005464891 wonderful_gould[74921]: }
Oct  1 12:13:41 np0005464891 podman[74904]: 2025-10-01 16:13:41.839895729 +0000 UTC m=+0.559932172 container died 1f7124d2e2819f16cc7b2a0dbd9bdef6f75e1418ee1d853bdbf71417f4e1371b (image=quay.io/ceph/ceph:v18, name=wonderful_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 12:13:41 np0005464891 systemd[1]: libpod-1f7124d2e2819f16cc7b2a0dbd9bdef6f75e1418ee1d853bdbf71417f4e1371b.scope: Deactivated successfully.
Oct  1 12:13:41 np0005464891 systemd[1]: var-lib-containers-storage-overlay-c7621f820f1e6b4253c4364f3c0b149346c87a613ddca87e6f75562ea5b1ee52-merged.mount: Deactivated successfully.
Oct  1 12:13:41 np0005464891 podman[74904]: 2025-10-01 16:13:41.883713263 +0000 UTC m=+0.603749716 container remove 1f7124d2e2819f16cc7b2a0dbd9bdef6f75e1418ee1d853bdbf71417f4e1371b (image=quay.io/ceph/ceph:v18, name=wonderful_gould, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 12:13:41 np0005464891 systemd[1]: libpod-conmon-1f7124d2e2819f16cc7b2a0dbd9bdef6f75e1418ee1d853bdbf71417f4e1371b.scope: Deactivated successfully.
Oct  1 12:13:41 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'rgw'
Oct  1 12:13:42 np0005464891 ceph-mgr[74592]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  1 12:13:42 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'rook'
Oct  1 12:13:42 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:42.597+0000 7fdd61c77140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  1 12:13:43 np0005464891 podman[74960]: 2025-10-01 16:13:43.957070461 +0000 UTC m=+0.047908615 container create 7e1a09660f3738368a08aa4111ae5a881b9a11a113d23283c5dbffc6bf80e211 (image=quay.io/ceph/ceph:v18, name=intelligent_dhawan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 12:13:44 np0005464891 systemd[1]: Started libpod-conmon-7e1a09660f3738368a08aa4111ae5a881b9a11a113d23283c5dbffc6bf80e211.scope.
Oct  1 12:13:44 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:44 np0005464891 podman[74960]: 2025-10-01 16:13:43.936874472 +0000 UTC m=+0.027712646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:44 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84794e9e453f176c78c15f89ee06b599bb65ec365de9489e370033637402908f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:44 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84794e9e453f176c78c15f89ee06b599bb65ec365de9489e370033637402908f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:44 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84794e9e453f176c78c15f89ee06b599bb65ec365de9489e370033637402908f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:44 np0005464891 podman[74960]: 2025-10-01 16:13:44.067815672 +0000 UTC m=+0.158653806 container init 7e1a09660f3738368a08aa4111ae5a881b9a11a113d23283c5dbffc6bf80e211 (image=quay.io/ceph/ceph:v18, name=intelligent_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 12:13:44 np0005464891 podman[74960]: 2025-10-01 16:13:44.077068217 +0000 UTC m=+0.167906381 container start 7e1a09660f3738368a08aa4111ae5a881b9a11a113d23283c5dbffc6bf80e211 (image=quay.io/ceph/ceph:v18, name=intelligent_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 12:13:44 np0005464891 podman[74960]: 2025-10-01 16:13:44.080259168 +0000 UTC m=+0.171097302 container attach 7e1a09660f3738368a08aa4111ae5a881b9a11a113d23283c5dbffc6bf80e211 (image=quay.io/ceph/ceph:v18, name=intelligent_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 12:13:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  1 12:13:44 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3242256239' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]: 
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]: {
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:    "fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:    "health": {
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "status": "HEALTH_OK",
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "checks": {},
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "mutes": []
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:    },
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:    "election_epoch": 5,
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:    "quorum": [
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        0
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:    ],
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:    "quorum_names": [
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "compute-0"
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:    ],
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:    "quorum_age": 18,
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:    "monmap": {
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "epoch": 1,
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "min_mon_release_name": "reef",
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "num_mons": 1
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:    },
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:    "osdmap": {
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "epoch": 1,
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "num_osds": 0,
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "num_up_osds": 0,
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "osd_up_since": 0,
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "num_in_osds": 0,
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "osd_in_since": 0,
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "num_remapped_pgs": 0
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:    },
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:    "pgmap": {
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "pgs_by_state": [],
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "num_pgs": 0,
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "num_pools": 0,
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "num_objects": 0,
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "data_bytes": 0,
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "bytes_used": 0,
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "bytes_avail": 0,
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "bytes_total": 0
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:    },
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:    "fsmap": {
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "epoch": 1,
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "by_rank": [],
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "up:standby": 0
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:    },
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:    "mgrmap": {
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "available": false,
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "num_standbys": 0,
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "modules": [
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:            "iostat",
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:            "nfs",
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:            "restful"
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        ],
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "services": {}
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:    },
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:    "servicemap": {
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "epoch": 1,
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "modified": "2025-10-01T16:13:22.803916+0000",
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:        "services": {}
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:    },
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]:    "progress_events": {}
Oct  1 12:13:44 np0005464891 intelligent_dhawan[74977]: }
Oct  1 12:13:44 np0005464891 systemd[1]: libpod-7e1a09660f3738368a08aa4111ae5a881b9a11a113d23283c5dbffc6bf80e211.scope: Deactivated successfully.
Oct  1 12:13:44 np0005464891 podman[74960]: 2025-10-01 16:13:44.492832185 +0000 UTC m=+0.583670299 container died 7e1a09660f3738368a08aa4111ae5a881b9a11a113d23283c5dbffc6bf80e211 (image=quay.io/ceph/ceph:v18, name=intelligent_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:13:44 np0005464891 systemd[1]: var-lib-containers-storage-overlay-84794e9e453f176c78c15f89ee06b599bb65ec365de9489e370033637402908f-merged.mount: Deactivated successfully.
Oct  1 12:13:44 np0005464891 podman[74960]: 2025-10-01 16:13:44.554167958 +0000 UTC m=+0.645006082 container remove 7e1a09660f3738368a08aa4111ae5a881b9a11a113d23283c5dbffc6bf80e211 (image=quay.io/ceph/ceph:v18, name=intelligent_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 12:13:44 np0005464891 systemd[1]: libpod-conmon-7e1a09660f3738368a08aa4111ae5a881b9a11a113d23283c5dbffc6bf80e211.scope: Deactivated successfully.
Oct  1 12:13:44 np0005464891 ceph-mgr[74592]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  1 12:13:44 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'selftest'
Oct  1 12:13:44 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:44.660+0000 7fdd61c77140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  1 12:13:44 np0005464891 ceph-mgr[74592]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  1 12:13:44 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'snap_schedule'
Oct  1 12:13:44 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:44.881+0000 7fdd61c77140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  1 12:13:45 np0005464891 ceph-mgr[74592]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  1 12:13:45 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'stats'
Oct  1 12:13:45 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:45.114+0000 7fdd61c77140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  1 12:13:45 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'status'
Oct  1 12:13:45 np0005464891 ceph-mgr[74592]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  1 12:13:45 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'telegraf'
Oct  1 12:13:45 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:45.606+0000 7fdd61c77140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  1 12:13:45 np0005464891 ceph-mgr[74592]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  1 12:13:45 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'telemetry'
Oct  1 12:13:45 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:45.843+0000 7fdd61c77140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  1 12:13:46 np0005464891 ceph-mgr[74592]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  1 12:13:46 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'test_orchestrator'
Oct  1 12:13:46 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:46.462+0000 7fdd61c77140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  1 12:13:46 np0005464891 podman[75017]: 2025-10-01 16:13:46.65064965 +0000 UTC m=+0.062812967 container create b44435e21f7f5eb26dc1caaa244062649cfc2758a2d74b22c5ddefef20b8e597 (image=quay.io/ceph/ceph:v18, name=heuristic_jackson, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:13:46 np0005464891 systemd[1]: Started libpod-conmon-b44435e21f7f5eb26dc1caaa244062649cfc2758a2d74b22c5ddefef20b8e597.scope.
Oct  1 12:13:46 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c7f15d9c0a225faec5926cde550297550c52fba3f380a232b4ed6799046d71e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c7f15d9c0a225faec5926cde550297550c52fba3f380a232b4ed6799046d71e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c7f15d9c0a225faec5926cde550297550c52fba3f380a232b4ed6799046d71e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:46 np0005464891 podman[75017]: 2025-10-01 16:13:46.629362576 +0000 UTC m=+0.041525923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:46 np0005464891 podman[75017]: 2025-10-01 16:13:46.721820051 +0000 UTC m=+0.133983348 container init b44435e21f7f5eb26dc1caaa244062649cfc2758a2d74b22c5ddefef20b8e597 (image=quay.io/ceph/ceph:v18, name=heuristic_jackson, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:13:46 np0005464891 podman[75017]: 2025-10-01 16:13:46.736880985 +0000 UTC m=+0.149044292 container start b44435e21f7f5eb26dc1caaa244062649cfc2758a2d74b22c5ddefef20b8e597 (image=quay.io/ceph/ceph:v18, name=heuristic_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:13:46 np0005464891 podman[75017]: 2025-10-01 16:13:46.740996607 +0000 UTC m=+0.153159904 container attach b44435e21f7f5eb26dc1caaa244062649cfc2758a2d74b22c5ddefef20b8e597 (image=quay.io/ceph/ceph:v18, name=heuristic_jackson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:13:47 np0005464891 ceph-mgr[74592]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  1 12:13:47 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'volumes'
Oct  1 12:13:47 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:47.123+0000 7fdd61c77140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  1 12:13:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  1 12:13:47 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1320255068' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]: 
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]: {
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:    "fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:    "health": {
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "status": "HEALTH_OK",
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "checks": {},
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "mutes": []
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:    },
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:    "election_epoch": 5,
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:    "quorum": [
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        0
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:    ],
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:    "quorum_names": [
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "compute-0"
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:    ],
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:    "quorum_age": 21,
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:    "monmap": {
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "epoch": 1,
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "min_mon_release_name": "reef",
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "num_mons": 1
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:    },
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:    "osdmap": {
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "epoch": 1,
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "num_osds": 0,
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "num_up_osds": 0,
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "osd_up_since": 0,
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "num_in_osds": 0,
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "osd_in_since": 0,
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "num_remapped_pgs": 0
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:    },
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:    "pgmap": {
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "pgs_by_state": [],
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "num_pgs": 0,
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "num_pools": 0,
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "num_objects": 0,
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "data_bytes": 0,
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "bytes_used": 0,
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "bytes_avail": 0,
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "bytes_total": 0
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:    },
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:    "fsmap": {
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "epoch": 1,
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "by_rank": [],
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "up:standby": 0
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:    },
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:    "mgrmap": {
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "available": false,
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "num_standbys": 0,
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "modules": [
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:            "iostat",
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:            "nfs",
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:            "restful"
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        ],
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "services": {}
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:    },
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:    "servicemap": {
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "epoch": 1,
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "modified": "2025-10-01T16:13:22.803916+0000",
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:        "services": {}
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:    },
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]:    "progress_events": {}
Oct  1 12:13:47 np0005464891 heuristic_jackson[75033]: }
Oct  1 12:13:47 np0005464891 systemd[1]: libpod-b44435e21f7f5eb26dc1caaa244062649cfc2758a2d74b22c5ddefef20b8e597.scope: Deactivated successfully.
Oct  1 12:13:47 np0005464891 podman[75017]: 2025-10-01 16:13:47.167409271 +0000 UTC m=+0.579572568 container died b44435e21f7f5eb26dc1caaa244062649cfc2758a2d74b22c5ddefef20b8e597 (image=quay.io/ceph/ceph:v18, name=heuristic_jackson, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 12:13:47 np0005464891 systemd[1]: var-lib-containers-storage-overlay-8c7f15d9c0a225faec5926cde550297550c52fba3f380a232b4ed6799046d71e-merged.mount: Deactivated successfully.
Oct  1 12:13:47 np0005464891 podman[75017]: 2025-10-01 16:13:47.210198811 +0000 UTC m=+0.622362128 container remove b44435e21f7f5eb26dc1caaa244062649cfc2758a2d74b22c5ddefef20b8e597 (image=quay.io/ceph/ceph:v18, name=heuristic_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 12:13:47 np0005464891 systemd[1]: libpod-conmon-b44435e21f7f5eb26dc1caaa244062649cfc2758a2d74b22c5ddefef20b8e597.scope: Deactivated successfully.
Oct  1 12:13:47 np0005464891 ceph-mgr[74592]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  1 12:13:47 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'zabbix'
Oct  1 12:13:47 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:47.882+0000 7fdd61c77140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  1 12:13:48 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:48.127+0000 7fdd61c77140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: ms_deliver_dispatch: unhandled message 0x5579bbc6d1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ieawdb
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: mgr handle_mgr_map Activating!
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: mgr handle_mgr_map I am now activating
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.ieawdb(active, starting, since 0.0176904s)
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1383988508' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).mds e1 all = 1
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1383988508' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1383988508' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1383988508' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ieawdb", "id": "compute-0.ieawdb"} v 0) v1
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1383988508' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ieawdb", "id": "compute-0.ieawdb"}]: dispatch
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: log_channel(cluster) log [INF] : Manager daemon compute-0.ieawdb is now available
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: balancer
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: crash
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [balancer INFO root] Starting
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:13:48
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [balancer INFO root] No pools available
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: devicehealth
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [devicehealth INFO root] Starting
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: iostat
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: nfs
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: orchestrator
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: pg_autoscaler
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: progress
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [progress INFO root] Loading...
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [progress INFO root] No stored events to load
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [progress INFO root] Loaded [] historic events
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [progress INFO root] Loaded OSDMap, ready.
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] recovery thread starting
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] starting setup
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: rbd_support
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: restful
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ieawdb/mirror_snapshot_schedule"} v 0) v1
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1383988508' entity='mgr.compute-0.ieawdb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ieawdb/mirror_snapshot_schedule"}]: dispatch
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [restful INFO root] server_addr: :: server_port: 8003
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: status
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [restful WARNING root] server not running: no certificate configured
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] PerfHandler: starting
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: telemetry
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TaskHandler: starting
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ieawdb/trash_purge_schedule"} v 0) v1
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1383988508' entity='mgr.compute-0.ieawdb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ieawdb/trash_purge_schedule"}]: dispatch
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] setup complete
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1383988508' entity='mgr.compute-0.ieawdb' 
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1383988508' entity='mgr.compute-0.ieawdb' 
Oct  1 12:13:48 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: volumes
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1383988508' entity='mgr.compute-0.ieawdb' 
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: Activating manager daemon compute-0.ieawdb
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: Manager daemon compute-0.ieawdb is now available
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: from='mgr.14102 192.168.122.100:0/1383988508' entity='mgr.compute-0.ieawdb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ieawdb/mirror_snapshot_schedule"}]: dispatch
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: from='mgr.14102 192.168.122.100:0/1383988508' entity='mgr.compute-0.ieawdb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ieawdb/trash_purge_schedule"}]: dispatch
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: from='mgr.14102 192.168.122.100:0/1383988508' entity='mgr.compute-0.ieawdb' 
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: from='mgr.14102 192.168.122.100:0/1383988508' entity='mgr.compute-0.ieawdb' 
Oct  1 12:13:48 np0005464891 ceph-mon[74303]: from='mgr.14102 192.168.122.100:0/1383988508' entity='mgr.compute-0.ieawdb' 
Oct  1 12:13:49 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.ieawdb(active, since 1.02812s)
Oct  1 12:13:49 np0005464891 podman[75150]: 2025-10-01 16:13:49.300364302 +0000 UTC m=+0.054752317 container create 3d88d0e5562ebabe7ec6118a7557088bf2ef90d13ee404acec2714fd44ec0795 (image=quay.io/ceph/ceph:v18, name=competent_chatterjee, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:13:49 np0005464891 systemd[1]: Started libpod-conmon-3d88d0e5562ebabe7ec6118a7557088bf2ef90d13ee404acec2714fd44ec0795.scope.
Oct  1 12:13:49 np0005464891 podman[75150]: 2025-10-01 16:13:49.275680305 +0000 UTC m=+0.030068370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:49 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:49 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9667ae005d1397c3e0cd2ee05f3060308e5970d3b91d9bd24dd4995878ac46e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:49 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9667ae005d1397c3e0cd2ee05f3060308e5970d3b91d9bd24dd4995878ac46e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:49 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9667ae005d1397c3e0cd2ee05f3060308e5970d3b91d9bd24dd4995878ac46e8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:49 np0005464891 podman[75150]: 2025-10-01 16:13:49.393120004 +0000 UTC m=+0.147508069 container init 3d88d0e5562ebabe7ec6118a7557088bf2ef90d13ee404acec2714fd44ec0795 (image=quay.io/ceph/ceph:v18, name=competent_chatterjee, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct  1 12:13:49 np0005464891 podman[75150]: 2025-10-01 16:13:49.400936697 +0000 UTC m=+0.155324722 container start 3d88d0e5562ebabe7ec6118a7557088bf2ef90d13ee404acec2714fd44ec0795 (image=quay.io/ceph/ceph:v18, name=competent_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 12:13:49 np0005464891 podman[75150]: 2025-10-01 16:13:49.405008248 +0000 UTC m=+0.159396283 container attach 3d88d0e5562ebabe7ec6118a7557088bf2ef90d13ee404acec2714fd44ec0795 (image=quay.io/ceph/ceph:v18, name=competent_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:13:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  1 12:13:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1070730391' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]: 
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]: {
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:    "fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:    "health": {
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "status": "HEALTH_OK",
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "checks": {},
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "mutes": []
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:    },
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:    "election_epoch": 5,
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:    "quorum": [
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        0
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:    ],
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:    "quorum_names": [
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "compute-0"
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:    ],
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:    "quorum_age": 24,
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:    "monmap": {
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "epoch": 1,
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "min_mon_release_name": "reef",
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "num_mons": 1
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:    },
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:    "osdmap": {
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "epoch": 1,
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "num_osds": 0,
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "num_up_osds": 0,
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "osd_up_since": 0,
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "num_in_osds": 0,
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "osd_in_since": 0,
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "num_remapped_pgs": 0
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:    },
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:    "pgmap": {
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "pgs_by_state": [],
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "num_pgs": 0,
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "num_pools": 0,
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "num_objects": 0,
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "data_bytes": 0,
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "bytes_used": 0,
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "bytes_avail": 0,
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "bytes_total": 0
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:    },
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:    "fsmap": {
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "epoch": 1,
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "by_rank": [],
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "up:standby": 0
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:    },
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:    "mgrmap": {
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "available": true,
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "num_standbys": 0,
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "modules": [
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:            "iostat",
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:            "nfs",
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:            "restful"
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        ],
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "services": {}
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:    },
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:    "servicemap": {
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "epoch": 1,
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "modified": "2025-10-01T16:13:22.803916+0000",
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:        "services": {}
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:    },
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]:    "progress_events": {}
Oct  1 12:13:49 np0005464891 competent_chatterjee[75166]: }
Oct  1 12:13:49 np0005464891 systemd[1]: libpod-3d88d0e5562ebabe7ec6118a7557088bf2ef90d13ee404acec2714fd44ec0795.scope: Deactivated successfully.
Oct  1 12:13:50 np0005464891 podman[75192]: 2025-10-01 16:13:50.049072589 +0000 UTC m=+0.035664574 container died 3d88d0e5562ebabe7ec6118a7557088bf2ef90d13ee404acec2714fd44ec0795 (image=quay.io/ceph/ceph:v18, name=competent_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:13:50 np0005464891 systemd[1]: var-lib-containers-storage-overlay-9667ae005d1397c3e0cd2ee05f3060308e5970d3b91d9bd24dd4995878ac46e8-merged.mount: Deactivated successfully.
Oct  1 12:13:50 np0005464891 podman[75192]: 2025-10-01 16:13:50.096354039 +0000 UTC m=+0.082945954 container remove 3d88d0e5562ebabe7ec6118a7557088bf2ef90d13ee404acec2714fd44ec0795 (image=quay.io/ceph/ceph:v18, name=competent_chatterjee, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:13:50 np0005464891 systemd[1]: libpod-conmon-3d88d0e5562ebabe7ec6118a7557088bf2ef90d13ee404acec2714fd44ec0795.scope: Deactivated successfully.
Oct  1 12:13:50 np0005464891 ceph-mgr[74592]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  1 12:13:50 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.ieawdb(active, since 2s)
Oct  1 12:13:50 np0005464891 podman[75207]: 2025-10-01 16:13:50.197597749 +0000 UTC m=+0.067112033 container create 4e2243c55d4d16c5356d2974864a71463895c42d9ed5faa9e1670867398e65f1 (image=quay.io/ceph/ceph:v18, name=jolly_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  1 12:13:50 np0005464891 systemd[1]: Started libpod-conmon-4e2243c55d4d16c5356d2974864a71463895c42d9ed5faa9e1670867398e65f1.scope.
Oct  1 12:13:50 np0005464891 podman[75207]: 2025-10-01 16:13:50.162849886 +0000 UTC m=+0.032364250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:50 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:50 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/606c576cf43d2d5eb5b9ff757fb26373a52687984160bf814c7674cdab3a6691/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:50 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/606c576cf43d2d5eb5b9ff757fb26373a52687984160bf814c7674cdab3a6691/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:50 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/606c576cf43d2d5eb5b9ff757fb26373a52687984160bf814c7674cdab3a6691/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:50 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/606c576cf43d2d5eb5b9ff757fb26373a52687984160bf814c7674cdab3a6691/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:50 np0005464891 podman[75207]: 2025-10-01 16:13:50.284704304 +0000 UTC m=+0.154218608 container init 4e2243c55d4d16c5356d2974864a71463895c42d9ed5faa9e1670867398e65f1 (image=quay.io/ceph/ceph:v18, name=jolly_banzai, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  1 12:13:50 np0005464891 podman[75207]: 2025-10-01 16:13:50.303735457 +0000 UTC m=+0.173249751 container start 4e2243c55d4d16c5356d2974864a71463895c42d9ed5faa9e1670867398e65f1 (image=quay.io/ceph/ceph:v18, name=jolly_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:13:50 np0005464891 podman[75207]: 2025-10-01 16:13:50.308044482 +0000 UTC m=+0.177558796 container attach 4e2243c55d4d16c5356d2974864a71463895c42d9ed5faa9e1670867398e65f1 (image=quay.io/ceph/ceph:v18, name=jolly_banzai, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 12:13:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct  1 12:13:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/944100122' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  1 12:13:50 np0005464891 systemd[1]: libpod-4e2243c55d4d16c5356d2974864a71463895c42d9ed5faa9e1670867398e65f1.scope: Deactivated successfully.
Oct  1 12:13:50 np0005464891 podman[75207]: 2025-10-01 16:13:50.825219074 +0000 UTC m=+0.694733328 container died 4e2243c55d4d16c5356d2974864a71463895c42d9ed5faa9e1670867398e65f1 (image=quay.io/ceph/ceph:v18, name=jolly_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:13:51 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/944100122' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  1 12:13:51 np0005464891 systemd[1]: var-lib-containers-storage-overlay-606c576cf43d2d5eb5b9ff757fb26373a52687984160bf814c7674cdab3a6691-merged.mount: Deactivated successfully.
Oct  1 12:13:51 np0005464891 podman[75207]: 2025-10-01 16:13:51.800245077 +0000 UTC m=+1.669759341 container remove 4e2243c55d4d16c5356d2974864a71463895c42d9ed5faa9e1670867398e65f1 (image=quay.io/ceph/ceph:v18, name=jolly_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 12:13:51 np0005464891 podman[75263]: 2025-10-01 16:13:51.89533983 +0000 UTC m=+0.063767208 container create 68daaf13714e1f4e8da7f9b10460846fbd81d6937818428fee84b973d2d915b3 (image=quay.io/ceph/ceph:v18, name=vigilant_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  1 12:13:51 np0005464891 systemd[1]: libpod-conmon-4e2243c55d4d16c5356d2974864a71463895c42d9ed5faa9e1670867398e65f1.scope: Deactivated successfully.
Oct  1 12:13:51 np0005464891 systemd[1]: Started libpod-conmon-68daaf13714e1f4e8da7f9b10460846fbd81d6937818428fee84b973d2d915b3.scope.
Oct  1 12:13:51 np0005464891 podman[75263]: 2025-10-01 16:13:51.872341339 +0000 UTC m=+0.040768697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:51 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:51 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cef86698b8532879e24fd45c229c2795eb7415e6c91897c7e21c84165712e86e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:51 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cef86698b8532879e24fd45c229c2795eb7415e6c91897c7e21c84165712e86e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:51 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cef86698b8532879e24fd45c229c2795eb7415e6c91897c7e21c84165712e86e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:51 np0005464891 podman[75263]: 2025-10-01 16:13:51.998348038 +0000 UTC m=+0.166775486 container init 68daaf13714e1f4e8da7f9b10460846fbd81d6937818428fee84b973d2d915b3 (image=quay.io/ceph/ceph:v18, name=vigilant_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  1 12:13:52 np0005464891 podman[75263]: 2025-10-01 16:13:52.007615735 +0000 UTC m=+0.176043113 container start 68daaf13714e1f4e8da7f9b10460846fbd81d6937818428fee84b973d2d915b3 (image=quay.io/ceph/ceph:v18, name=vigilant_wilson, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 12:13:52 np0005464891 podman[75263]: 2025-10-01 16:13:52.011042331 +0000 UTC m=+0.179469689 container attach 68daaf13714e1f4e8da7f9b10460846fbd81d6937818428fee84b973d2d915b3 (image=quay.io/ceph/ceph:v18, name=vigilant_wilson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:13:52 np0005464891 ceph-mgr[74592]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  1 12:13:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Oct  1 12:13:52 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/433118992' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct  1 12:13:52 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/433118992' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct  1 12:13:52 np0005464891 ceph-mgr[74592]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  1 12:13:52 np0005464891 ceph-mgr[74592]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  1 12:13:52 np0005464891 ceph-mgr[74592]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct  1 12:13:52 np0005464891 ceph-mgr[74592]: mgr respawn  1: '-n'
Oct  1 12:13:52 np0005464891 ceph-mgr[74592]: mgr respawn  2: 'mgr.compute-0.ieawdb'
Oct  1 12:13:52 np0005464891 ceph-mgr[74592]: mgr respawn  3: '-f'
Oct  1 12:13:52 np0005464891 ceph-mgr[74592]: mgr respawn  4: '--setuser'
Oct  1 12:13:52 np0005464891 ceph-mgr[74592]: mgr respawn  5: 'ceph'
Oct  1 12:13:52 np0005464891 ceph-mgr[74592]: mgr respawn  6: '--setgroup'
Oct  1 12:13:52 np0005464891 ceph-mgr[74592]: mgr respawn  7: 'ceph'
Oct  1 12:13:52 np0005464891 ceph-mgr[74592]: mgr respawn  8: '--default-log-to-file=false'
Oct  1 12:13:52 np0005464891 ceph-mgr[74592]: mgr respawn  9: '--default-log-to-journald=true'
Oct  1 12:13:52 np0005464891 ceph-mgr[74592]: mgr respawn  10: '--default-log-to-stderr=false'
Oct  1 12:13:52 np0005464891 ceph-mgr[74592]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct  1 12:13:52 np0005464891 ceph-mgr[74592]: mgr respawn  exe_path /proc/self/exe
Oct  1 12:13:52 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.ieawdb(active, since 4s)
Oct  1 12:13:52 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/433118992' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct  1 12:13:52 np0005464891 systemd[1]: libpod-68daaf13714e1f4e8da7f9b10460846fbd81d6937818428fee84b973d2d915b3.scope: Deactivated successfully.
Oct  1 12:13:52 np0005464891 podman[75263]: 2025-10-01 16:13:52.638037992 +0000 UTC m=+0.806465370 container died 68daaf13714e1f4e8da7f9b10460846fbd81d6937818428fee84b973d2d915b3 (image=quay.io/ceph/ceph:v18, name=vigilant_wilson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 12:13:52 np0005464891 systemd[1]: var-lib-containers-storage-overlay-cef86698b8532879e24fd45c229c2795eb7415e6c91897c7e21c84165712e86e-merged.mount: Deactivated successfully.
Oct  1 12:13:52 np0005464891 podman[75263]: 2025-10-01 16:13:52.691740105 +0000 UTC m=+0.860167453 container remove 68daaf13714e1f4e8da7f9b10460846fbd81d6937818428fee84b973d2d915b3 (image=quay.io/ceph/ceph:v18, name=vigilant_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 12:13:52 np0005464891 systemd[1]: libpod-conmon-68daaf13714e1f4e8da7f9b10460846fbd81d6937818428fee84b973d2d915b3.scope: Deactivated successfully.
Oct  1 12:13:52 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: ignoring --setuser ceph since I am not root
Oct  1 12:13:52 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: ignoring --setgroup ceph since I am not root
Oct  1 12:13:52 np0005464891 ceph-mgr[74592]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct  1 12:13:52 np0005464891 podman[75320]: 2025-10-01 16:13:52.751867311 +0000 UTC m=+0.039038038 container create c8eaa6436ef8069b0609d955abac29c6e68e0aa85d37d2f6fad85417abee54d7 (image=quay.io/ceph/ceph:v18, name=fervent_cerf, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 12:13:52 np0005464891 ceph-mgr[74592]: pidfile_write: ignore empty --pid-file
Oct  1 12:13:52 np0005464891 systemd[1]: Started libpod-conmon-c8eaa6436ef8069b0609d955abac29c6e68e0aa85d37d2f6fad85417abee54d7.scope.
Oct  1 12:13:52 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:52 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c650d283b1bf8c6f5df3d411521c1220e65dac3a553e91e4a464736d321c4d39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:52 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c650d283b1bf8c6f5df3d411521c1220e65dac3a553e91e4a464736d321c4d39/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:52 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c650d283b1bf8c6f5df3d411521c1220e65dac3a553e91e4a464736d321c4d39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:52 np0005464891 podman[75320]: 2025-10-01 16:13:52.735409206 +0000 UTC m=+0.022579953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:52 np0005464891 podman[75320]: 2025-10-01 16:13:52.840542091 +0000 UTC m=+0.127712858 container init c8eaa6436ef8069b0609d955abac29c6e68e0aa85d37d2f6fad85417abee54d7 (image=quay.io/ceph/ceph:v18, name=fervent_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 12:13:52 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'alerts'
Oct  1 12:13:52 np0005464891 podman[75320]: 2025-10-01 16:13:52.851252759 +0000 UTC m=+0.138423486 container start c8eaa6436ef8069b0609d955abac29c6e68e0aa85d37d2f6fad85417abee54d7 (image=quay.io/ceph/ceph:v18, name=fervent_cerf, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 12:13:52 np0005464891 podman[75320]: 2025-10-01 16:13:52.854726736 +0000 UTC m=+0.141897503 container attach c8eaa6436ef8069b0609d955abac29c6e68e0aa85d37d2f6fad85417abee54d7 (image=quay.io/ceph/ceph:v18, name=fervent_cerf, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 12:13:53 np0005464891 ceph-mgr[74592]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  1 12:13:53 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'balancer'
Oct  1 12:13:53 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:53.148+0000 7f1d83256140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  1 12:13:53 np0005464891 ceph-mgr[74592]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  1 12:13:53 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'cephadm'
Oct  1 12:13:53 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:53.406+0000 7f1d83256140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  1 12:13:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct  1 12:13:53 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1729875860' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  1 12:13:53 np0005464891 fervent_cerf[75361]: {
Oct  1 12:13:53 np0005464891 fervent_cerf[75361]:    "epoch": 5,
Oct  1 12:13:53 np0005464891 fervent_cerf[75361]:    "available": true,
Oct  1 12:13:53 np0005464891 fervent_cerf[75361]:    "active_name": "compute-0.ieawdb",
Oct  1 12:13:53 np0005464891 fervent_cerf[75361]:    "num_standby": 0
Oct  1 12:13:53 np0005464891 fervent_cerf[75361]: }
Oct  1 12:13:53 np0005464891 systemd[1]: libpod-c8eaa6436ef8069b0609d955abac29c6e68e0aa85d37d2f6fad85417abee54d7.scope: Deactivated successfully.
Oct  1 12:13:53 np0005464891 conmon[75361]: conmon c8eaa6436ef8069b0609 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c8eaa6436ef8069b0609d955abac29c6e68e0aa85d37d2f6fad85417abee54d7.scope/container/memory.events
Oct  1 12:13:53 np0005464891 podman[75320]: 2025-10-01 16:13:53.445178276 +0000 UTC m=+0.732349013 container died c8eaa6436ef8069b0609d955abac29c6e68e0aa85d37d2f6fad85417abee54d7 (image=quay.io/ceph/ceph:v18, name=fervent_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:13:53 np0005464891 systemd[1]: var-lib-containers-storage-overlay-c650d283b1bf8c6f5df3d411521c1220e65dac3a553e91e4a464736d321c4d39-merged.mount: Deactivated successfully.
Oct  1 12:13:53 np0005464891 podman[75320]: 2025-10-01 16:13:53.495903133 +0000 UTC m=+0.783073880 container remove c8eaa6436ef8069b0609d955abac29c6e68e0aa85d37d2f6fad85417abee54d7 (image=quay.io/ceph/ceph:v18, name=fervent_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:13:53 np0005464891 systemd[1]: libpod-conmon-c8eaa6436ef8069b0609d955abac29c6e68e0aa85d37d2f6fad85417abee54d7.scope: Deactivated successfully.
Oct  1 12:13:53 np0005464891 podman[75399]: 2025-10-01 16:13:53.56913989 +0000 UTC m=+0.052381214 container create 280004ccd9c6d3f606d2b8c44c1f5d7657db3c045e1dba001169eb4bdef79889 (image=quay.io/ceph/ceph:v18, name=hardcore_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:13:53 np0005464891 systemd[1]: Started libpod-conmon-280004ccd9c6d3f606d2b8c44c1f5d7657db3c045e1dba001169eb4bdef79889.scope.
Oct  1 12:13:53 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:13:53 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63bb4f82e412744c93556aa5d515f993c1e777995d25472436676c70ef9e6896/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:53 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63bb4f82e412744c93556aa5d515f993c1e777995d25472436676c70ef9e6896/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:53 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63bb4f82e412744c93556aa5d515f993c1e777995d25472436676c70ef9e6896/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:13:53 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/433118992' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct  1 12:13:53 np0005464891 podman[75399]: 2025-10-01 16:13:53.541145128 +0000 UTC m=+0.024386502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:13:53 np0005464891 podman[75399]: 2025-10-01 16:13:53.639601775 +0000 UTC m=+0.122843109 container init 280004ccd9c6d3f606d2b8c44c1f5d7657db3c045e1dba001169eb4bdef79889 (image=quay.io/ceph/ceph:v18, name=hardcore_raman, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 12:13:53 np0005464891 podman[75399]: 2025-10-01 16:13:53.650949357 +0000 UTC m=+0.134190681 container start 280004ccd9c6d3f606d2b8c44c1f5d7657db3c045e1dba001169eb4bdef79889 (image=quay.io/ceph/ceph:v18, name=hardcore_raman, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  1 12:13:53 np0005464891 podman[75399]: 2025-10-01 16:13:53.654722772 +0000 UTC m=+0.137964066 container attach 280004ccd9c6d3f606d2b8c44c1f5d7657db3c045e1dba001169eb4bdef79889 (image=quay.io/ceph/ceph:v18, name=hardcore_raman, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:13:55 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'crash'
Oct  1 12:13:55 np0005464891 ceph-mgr[74592]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  1 12:13:55 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'dashboard'
Oct  1 12:13:55 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:55.490+0000 7f1d83256140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  1 12:13:56 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'devicehealth'
Oct  1 12:13:57 np0005464891 ceph-mgr[74592]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  1 12:13:57 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'diskprediction_local'
Oct  1 12:13:57 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:57.207+0000 7f1d83256140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  1 12:13:57 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  1 12:13:57 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  1 12:13:57 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]:  from numpy import show_config as show_numpy_config
Oct  1 12:13:57 np0005464891 ceph-mgr[74592]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  1 12:13:57 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:57.754+0000 7f1d83256140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  1 12:13:57 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'influx'
Oct  1 12:13:58 np0005464891 ceph-mgr[74592]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  1 12:13:58 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:58.004+0000 7f1d83256140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  1 12:13:58 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'insights'
Oct  1 12:13:58 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'iostat'
Oct  1 12:13:58 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:13:58.485+0000 7f1d83256140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  1 12:13:58 np0005464891 ceph-mgr[74592]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  1 12:13:58 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'k8sevents'
Oct  1 12:14:00 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'localpool'
Oct  1 12:14:00 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'mds_autoscaler'
Oct  1 12:14:00 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'mirroring'
Oct  1 12:14:01 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'nfs'
Oct  1 12:14:01 np0005464891 ceph-mgr[74592]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  1 12:14:01 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'orchestrator'
Oct  1 12:14:01 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:14:01.917+0000 7f1d83256140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  1 12:14:02 np0005464891 ceph-mgr[74592]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  1 12:14:02 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'osd_perf_query'
Oct  1 12:14:02 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:14:02.653+0000 7f1d83256140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  1 12:14:02 np0005464891 ceph-mgr[74592]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  1 12:14:02 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'osd_support'
Oct  1 12:14:02 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:14:02.927+0000 7f1d83256140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  1 12:14:03 np0005464891 ceph-mgr[74592]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  1 12:14:03 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'pg_autoscaler'
Oct  1 12:14:03 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:14:03.162+0000 7f1d83256140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  1 12:14:03 np0005464891 ceph-mgr[74592]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  1 12:14:03 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'progress'
Oct  1 12:14:03 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:14:03.439+0000 7f1d83256140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  1 12:14:03 np0005464891 ceph-mgr[74592]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  1 12:14:03 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'prometheus'
Oct  1 12:14:03 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:14:03.670+0000 7f1d83256140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  1 12:14:04 np0005464891 ceph-mgr[74592]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  1 12:14:04 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'rbd_support'
Oct  1 12:14:04 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:14:04.675+0000 7f1d83256140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  1 12:14:05 np0005464891 ceph-mgr[74592]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  1 12:14:05 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'restful'
Oct  1 12:14:05 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:14:04.999+0000 7f1d83256140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  1 12:14:05 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'rgw'
Oct  1 12:14:06 np0005464891 ceph-mgr[74592]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  1 12:14:06 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'rook'
Oct  1 12:14:06 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:14:06.409+0000 7f1d83256140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  1 12:14:08 np0005464891 ceph-mgr[74592]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  1 12:14:08 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'selftest'
Oct  1 12:14:08 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:14:08.507+0000 7f1d83256140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  1 12:14:08 np0005464891 ceph-mgr[74592]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  1 12:14:08 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'snap_schedule'
Oct  1 12:14:08 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:14:08.748+0000 7f1d83256140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  1 12:14:09 np0005464891 ceph-mgr[74592]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  1 12:14:09 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'stats'
Oct  1 12:14:09 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:14:09.020+0000 7f1d83256140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  1 12:14:09 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'status'
Oct  1 12:14:09 np0005464891 ceph-mgr[74592]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  1 12:14:09 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'telegraf'
Oct  1 12:14:09 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:14:09.542+0000 7f1d83256140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  1 12:14:09 np0005464891 ceph-mgr[74592]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  1 12:14:09 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'telemetry'
Oct  1 12:14:09 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:14:09.780+0000 7f1d83256140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  1 12:14:10 np0005464891 ceph-mgr[74592]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  1 12:14:10 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'test_orchestrator'
Oct  1 12:14:10 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:14:10.332+0000 7f1d83256140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  1 12:14:10 np0005464891 ceph-mgr[74592]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  1 12:14:10 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'volumes'
Oct  1 12:14:10 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:14:10.995+0000 7f1d83256140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  1 12:14:11 np0005464891 ceph-mgr[74592]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  1 12:14:11 np0005464891 ceph-mgr[74592]: mgr[py] Loading python module 'zabbix'
Oct  1 12:14:11 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:14:11.664+0000 7f1d83256140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  1 12:14:11 np0005464891 ceph-mgr[74592]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  1 12:14:11 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T16:14:11.896+0000 7f1d83256140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  1 12:14:11 np0005464891 ceph-mgr[74592]: ms_deliver_dispatch: unhandled message 0x556a27db71e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ieawdb restarted
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ieawdb
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Oct  1 12:14:11 np0005464891 ceph-mgr[74592]: mgr handle_mgr_map Activating!
Oct  1 12:14:11 np0005464891 ceph-mgr[74592]: mgr handle_mgr_map I am now activating
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.ieawdb(active, starting, since 0.0155621s)
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ieawdb", "id": "compute-0.ieawdb"} v 0) v1
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ieawdb", "id": "compute-0.ieawdb"}]: dispatch
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).mds e1 all = 1
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  1 12:14:11 np0005464891 ceph-mgr[74592]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:14:11 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: balancer
Oct  1 12:14:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Starting
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: log_channel(cluster) log [INF] : Manager daemon compute-0.ieawdb is now available
Oct  1 12:14:11 np0005464891 ceph-mgr[74592]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:14:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:14:11
Oct  1 12:14:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:14:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:14:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] No pools available
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: Active manager daemon compute-0.ieawdb restarted
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: Activating manager daemon compute-0.ieawdb
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: Manager daemon compute-0.ieawdb is now available
Oct  1 12:14:11 np0005464891 ceph-mgr[74592]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Oct  1 12:14:11 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:11 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: cephadm
Oct  1 12:14:11 np0005464891 ceph-mgr[74592]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:14:11 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: crash
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  1 12:14:11 np0005464891 ceph-mgr[74592]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:14:11 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: devicehealth
Oct  1 12:14:11 np0005464891 ceph-mgr[74592]: [devicehealth INFO root] Starting
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  1 12:14:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  1 12:14:11 np0005464891 ceph-mgr[74592]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: iostat
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: nfs
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: orchestrator
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: pg_autoscaler
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: progress
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [progress INFO root] Loading...
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [progress INFO root] No stored events to load
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [progress INFO root] Loaded [] historic events
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [progress INFO root] Loaded OSDMap, ready.
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] recovery thread starting
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] starting setup
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: rbd_support
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: restful
Oct  1 12:14:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ieawdb/mirror_snapshot_schedule"} v 0) v1
Oct  1 12:14:12 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ieawdb/mirror_snapshot_schedule"}]: dispatch
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: status
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [restful INFO root] server_addr: :: server_port: 8003
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: telemetry
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [restful WARNING root] server not running: no certificate configured
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] PerfHandler: starting
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TaskHandler: starting
Oct  1 12:14:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ieawdb/trash_purge_schedule"} v 0) v1
Oct  1 12:14:12 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ieawdb/trash_purge_schedule"}]: dispatch
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] setup complete
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: mgr load Constructed class from module: volumes
Oct  1 12:14:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Oct  1 12:14:12 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Oct  1 12:14:12 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:12 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.ieawdb(active, since 1.02465s)
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct  1 12:14:12 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct  1 12:14:12 np0005464891 hardcore_raman[75416]: {
Oct  1 12:14:12 np0005464891 hardcore_raman[75416]:    "mgrmap_epoch": 7,
Oct  1 12:14:12 np0005464891 hardcore_raman[75416]:    "initialized": true
Oct  1 12:14:12 np0005464891 hardcore_raman[75416]: }
Oct  1 12:14:12 np0005464891 systemd[1]: libpod-280004ccd9c6d3f606d2b8c44c1f5d7657db3c045e1dba001169eb4bdef79889.scope: Deactivated successfully.
Oct  1 12:14:12 np0005464891 podman[75399]: 2025-10-01 16:14:12.96131833 +0000 UTC m=+19.444559694 container died 280004ccd9c6d3f606d2b8c44c1f5d7657db3c045e1dba001169eb4bdef79889 (image=quay.io/ceph/ceph:v18, name=hardcore_raman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:14:12 np0005464891 ceph-mon[74303]: Found migration_current of "None". Setting to last migration.
Oct  1 12:14:12 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:12 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:12 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ieawdb/mirror_snapshot_schedule"}]: dispatch
Oct  1 12:14:12 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ieawdb/trash_purge_schedule"}]: dispatch
Oct  1 12:14:12 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:12 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:12 np0005464891 systemd[1]: var-lib-containers-storage-overlay-63bb4f82e412744c93556aa5d515f993c1e777995d25472436676c70ef9e6896-merged.mount: Deactivated successfully.
Oct  1 12:14:13 np0005464891 podman[75399]: 2025-10-01 16:14:13.015392086 +0000 UTC m=+19.498633380 container remove 280004ccd9c6d3f606d2b8c44c1f5d7657db3c045e1dba001169eb4bdef79889 (image=quay.io/ceph/ceph:v18, name=hardcore_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct  1 12:14:13 np0005464891 systemd[1]: libpod-conmon-280004ccd9c6d3f606d2b8c44c1f5d7657db3c045e1dba001169eb4bdef79889.scope: Deactivated successfully.
Oct  1 12:14:13 np0005464891 podman[75574]: 2025-10-01 16:14:13.102975856 +0000 UTC m=+0.064277712 container create 6e7db8258f943d34658fee7a0216211d870ffc4d2622a8cd4b2b9d86d1a5b90e (image=quay.io/ceph/ceph:v18, name=amazing_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  1 12:14:13 np0005464891 systemd[1]: Started libpod-conmon-6e7db8258f943d34658fee7a0216211d870ffc4d2622a8cd4b2b9d86d1a5b90e.scope.
Oct  1 12:14:13 np0005464891 podman[75574]: 2025-10-01 16:14:13.07365602 +0000 UTC m=+0.034957926 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:14:13 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:13 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ca413e70cdbe8fab8051204c873e952a680fbd5af466364793e3a85c74c097/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:13 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ca413e70cdbe8fab8051204c873e952a680fbd5af466364793e3a85c74c097/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:13 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ca413e70cdbe8fab8051204c873e952a680fbd5af466364793e3a85c74c097/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:13 np0005464891 podman[75574]: 2025-10-01 16:14:13.21582628 +0000 UTC m=+0.177128166 container init 6e7db8258f943d34658fee7a0216211d870ffc4d2622a8cd4b2b9d86d1a5b90e (image=quay.io/ceph/ceph:v18, name=amazing_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 12:14:13 np0005464891 podman[75574]: 2025-10-01 16:14:13.226512928 +0000 UTC m=+0.187814754 container start 6e7db8258f943d34658fee7a0216211d870ffc4d2622a8cd4b2b9d86d1a5b90e (image=quay.io/ceph/ceph:v18, name=amazing_bhaskara, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 12:14:13 np0005464891 podman[75574]: 2025-10-01 16:14:13.240518018 +0000 UTC m=+0.201819924 container attach 6e7db8258f943d34658fee7a0216211d870ffc4d2622a8cd4b2b9d86d1a5b90e (image=quay.io/ceph/ceph:v18, name=amazing_bhaskara, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:14:13 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 12:14:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Oct  1 12:14:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  1 12:14:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  1 12:14:13 np0005464891 systemd[1]: libpod-6e7db8258f943d34658fee7a0216211d870ffc4d2622a8cd4b2b9d86d1a5b90e.scope: Deactivated successfully.
Oct  1 12:14:13 np0005464891 podman[75574]: 2025-10-01 16:14:13.816669672 +0000 UTC m=+0.777971558 container died 6e7db8258f943d34658fee7a0216211d870ffc4d2622a8cd4b2b9d86d1a5b90e (image=quay.io/ceph/ceph:v18, name=amazing_bhaskara, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:14:13 np0005464891 systemd[1]: var-lib-containers-storage-overlay-16ca413e70cdbe8fab8051204c873e952a680fbd5af466364793e3a85c74c097-merged.mount: Deactivated successfully.
Oct  1 12:14:13 np0005464891 podman[75574]: 2025-10-01 16:14:13.861403228 +0000 UTC m=+0.822705034 container remove 6e7db8258f943d34658fee7a0216211d870ffc4d2622a8cd4b2b9d86d1a5b90e (image=quay.io/ceph/ceph:v18, name=amazing_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:14:13 np0005464891 systemd[1]: libpod-conmon-6e7db8258f943d34658fee7a0216211d870ffc4d2622a8cd4b2b9d86d1a5b90e.scope: Deactivated successfully.
Oct  1 12:14:13 np0005464891 ceph-mgr[74592]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  1 12:14:13 np0005464891 podman[75629]: 2025-10-01 16:14:13.921966135 +0000 UTC m=+0.040170080 container create 993a735fbf020616c4e518b92a9a077e530916ef998164f2e2d6e02f36ad9bc9 (image=quay.io/ceph/ceph:v18, name=wonderful_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:14:13 np0005464891 systemd[1]: Started libpod-conmon-993a735fbf020616c4e518b92a9a077e530916ef998164f2e2d6e02f36ad9bc9.scope.
Oct  1 12:14:13 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:13 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:13 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9776a0f8c2df16f01e5220693377fa34be9553bb0faae5879d27a0d5e1d529a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:13 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9776a0f8c2df16f01e5220693377fa34be9553bb0faae5879d27a0d5e1d529a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:13 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9776a0f8c2df16f01e5220693377fa34be9553bb0faae5879d27a0d5e1d529a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:13 np0005464891 podman[75629]: 2025-10-01 16:14:13.904171979 +0000 UTC m=+0.022375974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:14:14 np0005464891 podman[75629]: 2025-10-01 16:14:14.004121684 +0000 UTC m=+0.122325649 container init 993a735fbf020616c4e518b92a9a077e530916ef998164f2e2d6e02f36ad9bc9 (image=quay.io/ceph/ceph:v18, name=wonderful_kowalevski, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 12:14:14 np0005464891 podman[75629]: 2025-10-01 16:14:14.014358709 +0000 UTC m=+0.132562674 container start 993a735fbf020616c4e518b92a9a077e530916ef998164f2e2d6e02f36ad9bc9 (image=quay.io/ceph/ceph:v18, name=wonderful_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:14:14 np0005464891 podman[75629]: 2025-10-01 16:14:14.017113156 +0000 UTC m=+0.135317201 container attach 993a735fbf020616c4e518b92a9a077e530916ef998164f2e2d6e02f36ad9bc9 (image=quay.io/ceph/ceph:v18, name=wonderful_kowalevski, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:14:14 np0005464891 ceph-mgr[74592]: [cephadm INFO cherrypy.error] [01/Oct/2025:16:14:14] ENGINE Bus STARTING
Oct  1 12:14:14 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : [01/Oct/2025:16:14:14] ENGINE Bus STARTING
Oct  1 12:14:14 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 12:14:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Oct  1 12:14:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:14 np0005464891 ceph-mgr[74592]: [cephadm INFO root] Set ssh ssh_user
Oct  1 12:14:14 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Oct  1 12:14:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Oct  1 12:14:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:14 np0005464891 ceph-mgr[74592]: [cephadm INFO root] Set ssh ssh_config
Oct  1 12:14:14 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Oct  1 12:14:14 np0005464891 ceph-mgr[74592]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Oct  1 12:14:14 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Oct  1 12:14:14 np0005464891 wonderful_kowalevski[75645]: ssh user set to ceph-admin. sudo will be used
Oct  1 12:14:14 np0005464891 systemd[1]: libpod-993a735fbf020616c4e518b92a9a077e530916ef998164f2e2d6e02f36ad9bc9.scope: Deactivated successfully.
Oct  1 12:14:14 np0005464891 podman[75683]: 2025-10-01 16:14:14.574650791 +0000 UTC m=+0.033269388 container died 993a735fbf020616c4e518b92a9a077e530916ef998164f2e2d6e02f36ad9bc9 (image=quay.io/ceph/ceph:v18, name=wonderful_kowalevski, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:14:14 np0005464891 ceph-mgr[74592]: [cephadm INFO cherrypy.error] [01/Oct/2025:16:14:14] ENGINE Client ('192.168.122.100', 56556) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  1 12:14:14 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : [01/Oct/2025:16:14:14] ENGINE Client ('192.168.122.100', 56556) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  1 12:14:14 np0005464891 ceph-mgr[74592]: [cephadm INFO cherrypy.error] [01/Oct/2025:16:14:14] ENGINE Serving on https://192.168.122.100:7150
Oct  1 12:14:14 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : [01/Oct/2025:16:14:14] ENGINE Serving on https://192.168.122.100:7150
Oct  1 12:14:14 np0005464891 systemd[1]: var-lib-containers-storage-overlay-f9776a0f8c2df16f01e5220693377fa34be9553bb0faae5879d27a0d5e1d529a-merged.mount: Deactivated successfully.
Oct  1 12:14:14 np0005464891 ceph-mgr[74592]: [cephadm INFO cherrypy.error] [01/Oct/2025:16:14:14] ENGINE Serving on http://192.168.122.100:8765
Oct  1 12:14:14 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : [01/Oct/2025:16:14:14] ENGINE Serving on http://192.168.122.100:8765
Oct  1 12:14:14 np0005464891 ceph-mgr[74592]: [cephadm INFO cherrypy.error] [01/Oct/2025:16:14:14] ENGINE Bus STARTED
Oct  1 12:14:14 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : [01/Oct/2025:16:14:14] ENGINE Bus STARTED
Oct  1 12:14:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  1 12:14:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  1 12:14:14 np0005464891 podman[75683]: 2025-10-01 16:14:14.722153921 +0000 UTC m=+0.180772498 container remove 993a735fbf020616c4e518b92a9a077e530916ef998164f2e2d6e02f36ad9bc9 (image=quay.io/ceph/ceph:v18, name=wonderful_kowalevski, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:14:14 np0005464891 systemd[1]: libpod-conmon-993a735fbf020616c4e518b92a9a077e530916ef998164f2e2d6e02f36ad9bc9.scope: Deactivated successfully.
Oct  1 12:14:14 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.ieawdb(active, since 2s)
Oct  1 12:14:14 np0005464891 podman[75709]: 2025-10-01 16:14:14.803850957 +0000 UTC m=+0.052263557 container create 923914c0d37b04fa0914b012fa8895703646a08f1f232d120c5eef7df9ee75a9 (image=quay.io/ceph/ceph:v18, name=zealous_tharp, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  1 12:14:14 np0005464891 systemd[1]: Started libpod-conmon-923914c0d37b04fa0914b012fa8895703646a08f1f232d120c5eef7df9ee75a9.scope.
Oct  1 12:14:14 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:14 np0005464891 podman[75709]: 2025-10-01 16:14:14.775383164 +0000 UTC m=+0.023795784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:14:14 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef277e1dc3e5d427e7da71fd8a099649f4ffc8c1e56dcddd17be675d7f539d81/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:14 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef277e1dc3e5d427e7da71fd8a099649f4ffc8c1e56dcddd17be675d7f539d81/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:14 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef277e1dc3e5d427e7da71fd8a099649f4ffc8c1e56dcddd17be675d7f539d81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:14 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef277e1dc3e5d427e7da71fd8a099649f4ffc8c1e56dcddd17be675d7f539d81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:14 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef277e1dc3e5d427e7da71fd8a099649f4ffc8c1e56dcddd17be675d7f539d81/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:14 np0005464891 podman[75709]: 2025-10-01 16:14:14.906037264 +0000 UTC m=+0.154449914 container init 923914c0d37b04fa0914b012fa8895703646a08f1f232d120c5eef7df9ee75a9 (image=quay.io/ceph/ceph:v18, name=zealous_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:14:14 np0005464891 podman[75709]: 2025-10-01 16:14:14.912142024 +0000 UTC m=+0.160554624 container start 923914c0d37b04fa0914b012fa8895703646a08f1f232d120c5eef7df9ee75a9 (image=quay.io/ceph/ceph:v18, name=zealous_tharp, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:14:14 np0005464891 podman[75709]: 2025-10-01 16:14:14.920295402 +0000 UTC m=+0.168708002 container attach 923914c0d37b04fa0914b012fa8895703646a08f1f232d120c5eef7df9ee75a9 (image=quay.io/ceph/ceph:v18, name=zealous_tharp, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 12:14:15 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 12:14:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Oct  1 12:14:15 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:15 np0005464891 ceph-mgr[74592]: [cephadm INFO root] Set ssh ssh_identity_key
Oct  1 12:14:15 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Oct  1 12:14:15 np0005464891 ceph-mgr[74592]: [cephadm INFO root] Set ssh private key
Oct  1 12:14:15 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Set ssh private key
Oct  1 12:14:15 np0005464891 systemd[1]: libpod-923914c0d37b04fa0914b012fa8895703646a08f1f232d120c5eef7df9ee75a9.scope: Deactivated successfully.
Oct  1 12:14:15 np0005464891 podman[75709]: 2025-10-01 16:14:15.421166408 +0000 UTC m=+0.669578968 container died 923914c0d37b04fa0914b012fa8895703646a08f1f232d120c5eef7df9ee75a9 (image=quay.io/ceph/ceph:v18, name=zealous_tharp, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 12:14:15 np0005464891 ceph-mon[74303]: [01/Oct/2025:16:14:14] ENGINE Bus STARTING
Oct  1 12:14:15 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:15 np0005464891 ceph-mon[74303]: Set ssh ssh_user
Oct  1 12:14:15 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:15 np0005464891 ceph-mon[74303]: Set ssh ssh_config
Oct  1 12:14:15 np0005464891 ceph-mon[74303]: ssh user set to ceph-admin. sudo will be used
Oct  1 12:14:15 np0005464891 ceph-mon[74303]: [01/Oct/2025:16:14:14] ENGINE Client ('192.168.122.100', 56556) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  1 12:14:15 np0005464891 ceph-mon[74303]: [01/Oct/2025:16:14:14] ENGINE Serving on https://192.168.122.100:7150
Oct  1 12:14:15 np0005464891 ceph-mon[74303]: [01/Oct/2025:16:14:14] ENGINE Serving on http://192.168.122.100:8765
Oct  1 12:14:15 np0005464891 ceph-mon[74303]: [01/Oct/2025:16:14:14] ENGINE Bus STARTED
Oct  1 12:14:15 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:15 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ef277e1dc3e5d427e7da71fd8a099649f4ffc8c1e56dcddd17be675d7f539d81-merged.mount: Deactivated successfully.
Oct  1 12:14:15 np0005464891 podman[75709]: 2025-10-01 16:14:15.563334958 +0000 UTC m=+0.811747518 container remove 923914c0d37b04fa0914b012fa8895703646a08f1f232d120c5eef7df9ee75a9 (image=quay.io/ceph/ceph:v18, name=zealous_tharp, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 12:14:15 np0005464891 systemd[1]: libpod-conmon-923914c0d37b04fa0914b012fa8895703646a08f1f232d120c5eef7df9ee75a9.scope: Deactivated successfully.
Oct  1 12:14:15 np0005464891 podman[75766]: 2025-10-01 16:14:15.647075151 +0000 UTC m=+0.055893918 container create 873f5a2f2fe36737635ead90024c3dcaa979dfacc30caa87aa32c4b064b851f4 (image=quay.io/ceph/ceph:v18, name=awesome_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  1 12:14:15 np0005464891 systemd[1]: Started libpod-conmon-873f5a2f2fe36737635ead90024c3dcaa979dfacc30caa87aa32c4b064b851f4.scope.
Oct  1 12:14:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019921204 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:14:15 np0005464891 podman[75766]: 2025-10-01 16:14:15.629693627 +0000 UTC m=+0.038512384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:14:15 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d14386ee789609bb7d27f64acd80f2d3bdf2773fbed4bb4cdd2df92f251000a/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d14386ee789609bb7d27f64acd80f2d3bdf2773fbed4bb4cdd2df92f251000a/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d14386ee789609bb7d27f64acd80f2d3bdf2773fbed4bb4cdd2df92f251000a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d14386ee789609bb7d27f64acd80f2d3bdf2773fbed4bb4cdd2df92f251000a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d14386ee789609bb7d27f64acd80f2d3bdf2773fbed4bb4cdd2df92f251000a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:15 np0005464891 podman[75766]: 2025-10-01 16:14:15.755668397 +0000 UTC m=+0.164487174 container init 873f5a2f2fe36737635ead90024c3dcaa979dfacc30caa87aa32c4b064b851f4 (image=quay.io/ceph/ceph:v18, name=awesome_wilson, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 12:14:15 np0005464891 podman[75766]: 2025-10-01 16:14:15.765281074 +0000 UTC m=+0.174099851 container start 873f5a2f2fe36737635ead90024c3dcaa979dfacc30caa87aa32c4b064b851f4 (image=quay.io/ceph/ceph:v18, name=awesome_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:14:15 np0005464891 podman[75766]: 2025-10-01 16:14:15.769091551 +0000 UTC m=+0.177910308 container attach 873f5a2f2fe36737635ead90024c3dcaa979dfacc30caa87aa32c4b064b851f4 (image=quay.io/ceph/ceph:v18, name=awesome_wilson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:14:15 np0005464891 ceph-mgr[74592]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  1 12:14:16 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 12:14:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Oct  1 12:14:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:16 np0005464891 ceph-mgr[74592]: [cephadm INFO root] Set ssh ssh_identity_pub
Oct  1 12:14:16 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Oct  1 12:14:16 np0005464891 systemd[1]: libpod-873f5a2f2fe36737635ead90024c3dcaa979dfacc30caa87aa32c4b064b851f4.scope: Deactivated successfully.
Oct  1 12:14:16 np0005464891 podman[75766]: 2025-10-01 16:14:16.315721521 +0000 UTC m=+0.724540298 container died 873f5a2f2fe36737635ead90024c3dcaa979dfacc30caa87aa32c4b064b851f4 (image=quay.io/ceph/ceph:v18, name=awesome_wilson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 12:14:16 np0005464891 systemd[1]: var-lib-containers-storage-overlay-8d14386ee789609bb7d27f64acd80f2d3bdf2773fbed4bb4cdd2df92f251000a-merged.mount: Deactivated successfully.
Oct  1 12:14:16 np0005464891 podman[75766]: 2025-10-01 16:14:16.360096398 +0000 UTC m=+0.768915145 container remove 873f5a2f2fe36737635ead90024c3dcaa979dfacc30caa87aa32c4b064b851f4 (image=quay.io/ceph/ceph:v18, name=awesome_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  1 12:14:16 np0005464891 systemd[1]: libpod-conmon-873f5a2f2fe36737635ead90024c3dcaa979dfacc30caa87aa32c4b064b851f4.scope: Deactivated successfully.
Oct  1 12:14:16 np0005464891 podman[75822]: 2025-10-01 16:14:16.436582409 +0000 UTC m=+0.052964567 container create 570947b837bb9d59b65e04451ffbb585ba9321d8cd0759e713b09d90aa9f14e0 (image=quay.io/ceph/ceph:v18, name=competent_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:14:16 np0005464891 systemd[1]: Started libpod-conmon-570947b837bb9d59b65e04451ffbb585ba9321d8cd0759e713b09d90aa9f14e0.scope.
Oct  1 12:14:16 np0005464891 podman[75822]: 2025-10-01 16:14:16.409933847 +0000 UTC m=+0.026316095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:14:16 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:16 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec310ca75866e09b1c6053fd2b878efd93d9db1f54b1f811afe9ff5de6f3ca09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:16 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec310ca75866e09b1c6053fd2b878efd93d9db1f54b1f811afe9ff5de6f3ca09/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:16 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec310ca75866e09b1c6053fd2b878efd93d9db1f54b1f811afe9ff5de6f3ca09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:16 np0005464891 podman[75822]: 2025-10-01 16:14:16.535109064 +0000 UTC m=+0.151491332 container init 570947b837bb9d59b65e04451ffbb585ba9321d8cd0759e713b09d90aa9f14e0 (image=quay.io/ceph/ceph:v18, name=competent_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct  1 12:14:16 np0005464891 ceph-mon[74303]: Set ssh ssh_identity_key
Oct  1 12:14:16 np0005464891 ceph-mon[74303]: Set ssh private key
Oct  1 12:14:16 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:16 np0005464891 podman[75822]: 2025-10-01 16:14:16.55252795 +0000 UTC m=+0.168910138 container start 570947b837bb9d59b65e04451ffbb585ba9321d8cd0759e713b09d90aa9f14e0 (image=quay.io/ceph/ceph:v18, name=competent_mccarthy, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 12:14:16 np0005464891 podman[75822]: 2025-10-01 16:14:16.556643884 +0000 UTC m=+0.173026142 container attach 570947b837bb9d59b65e04451ffbb585ba9321d8cd0759e713b09d90aa9f14e0 (image=quay.io/ceph/ceph:v18, name=competent_mccarthy, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  1 12:14:17 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 12:14:17 np0005464891 competent_mccarthy[75838]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEuC9L6xYlGY8yd00KsKBkmYEnmaGf79LUwrax2iIDtHt6IIOcG8rAPu9HJITpuXWWIlA/r74rLNcEYzdO82ZSWASM/RMyYrltuw5Xn0q+vEs2IglP4zYxzWMD2XPCgcoMjunWP+p7F2VslgVXsj/uB6v7rXDdIfo6UdQCbKC4p3g9PkSRZg0uoXOQI/HKIWoodS8QKKuGCGzssh7JEfFA2J8oVi3rLFnU59eTfM/qk2yoPeSj4af0R54B+knTrH9E2aDEKCHFfAUTXlHLPCQrb15SXThyy0KUX+6RIJAe9jX2gtFcr74YvSp3JcNRpSUDIt+nB/5RUC6GisYS9s63QdvJ4IVCmLw8eX+8H5t2IJI0plJZF8rGMAMHNBBcvewaOe/6HaUsOJOIsnCMqLdSzifSTNV3waX6gZrcBSnX57gL6Rz6szr0SOemy3Ymql7k1QnWeZMMxIUSD9FqJNlnNfD8ep4ZI1T7fZN8b/Lw1gEzi1suDKiV0ecwiaQoO1E= zuul@controller
Oct  1 12:14:17 np0005464891 systemd[1]: libpod-570947b837bb9d59b65e04451ffbb585ba9321d8cd0759e713b09d90aa9f14e0.scope: Deactivated successfully.
Oct  1 12:14:17 np0005464891 podman[75822]: 2025-10-01 16:14:17.095926319 +0000 UTC m=+0.712308557 container died 570947b837bb9d59b65e04451ffbb585ba9321d8cd0759e713b09d90aa9f14e0 (image=quay.io/ceph/ceph:v18, name=competent_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:14:17 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ec310ca75866e09b1c6053fd2b878efd93d9db1f54b1f811afe9ff5de6f3ca09-merged.mount: Deactivated successfully.
Oct  1 12:14:17 np0005464891 podman[75822]: 2025-10-01 16:14:17.132621322 +0000 UTC m=+0.749003470 container remove 570947b837bb9d59b65e04451ffbb585ba9321d8cd0759e713b09d90aa9f14e0 (image=quay.io/ceph/ceph:v18, name=competent_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:14:17 np0005464891 systemd[1]: libpod-conmon-570947b837bb9d59b65e04451ffbb585ba9321d8cd0759e713b09d90aa9f14e0.scope: Deactivated successfully.
Oct  1 12:14:17 np0005464891 podman[75876]: 2025-10-01 16:14:17.193985911 +0000 UTC m=+0.044238593 container create 5035dc9bb8bfcc71bd1e43f42c27ad5ab8b8c3ccb12328c21409514da593d6d9 (image=quay.io/ceph/ceph:v18, name=romantic_joliot, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 12:14:17 np0005464891 systemd[1]: Started libpod-conmon-5035dc9bb8bfcc71bd1e43f42c27ad5ab8b8c3ccb12328c21409514da593d6d9.scope.
Oct  1 12:14:17 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:17 np0005464891 podman[75876]: 2025-10-01 16:14:17.175034814 +0000 UTC m=+0.025287486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:14:17 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22df097cf013c2bdc9622eea9a28e39f18b5616dabbe3e7520a431e9fa854789/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:17 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22df097cf013c2bdc9622eea9a28e39f18b5616dabbe3e7520a431e9fa854789/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:17 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22df097cf013c2bdc9622eea9a28e39f18b5616dabbe3e7520a431e9fa854789/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:17 np0005464891 podman[75876]: 2025-10-01 16:14:17.283644339 +0000 UTC m=+0.133896991 container init 5035dc9bb8bfcc71bd1e43f42c27ad5ab8b8c3ccb12328c21409514da593d6d9 (image=quay.io/ceph/ceph:v18, name=romantic_joliot, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:14:17 np0005464891 podman[75876]: 2025-10-01 16:14:17.293416432 +0000 UTC m=+0.143669114 container start 5035dc9bb8bfcc71bd1e43f42c27ad5ab8b8c3ccb12328c21409514da593d6d9 (image=quay.io/ceph/ceph:v18, name=romantic_joliot, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:14:17 np0005464891 podman[75876]: 2025-10-01 16:14:17.29694811 +0000 UTC m=+0.147200792 container attach 5035dc9bb8bfcc71bd1e43f42c27ad5ab8b8c3ccb12328c21409514da593d6d9 (image=quay.io/ceph/ceph:v18, name=romantic_joliot, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 12:14:17 np0005464891 ceph-mon[74303]: Set ssh ssh_identity_pub
Oct  1 12:14:17 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 12:14:17 np0005464891 ceph-mgr[74592]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  1 12:14:18 np0005464891 systemd[1]: Created slice User Slice of UID 42477.
Oct  1 12:14:18 np0005464891 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  1 12:14:18 np0005464891 systemd-logind[801]: New session 22 of user ceph-admin.
Oct  1 12:14:18 np0005464891 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  1 12:14:18 np0005464891 systemd[1]: Starting User Manager for UID 42477...
Oct  1 12:14:18 np0005464891 systemd-logind[801]: New session 24 of user ceph-admin.
Oct  1 12:14:18 np0005464891 systemd[75922]: Queued start job for default target Main User Target.
Oct  1 12:14:18 np0005464891 systemd[75922]: Created slice User Application Slice.
Oct  1 12:14:18 np0005464891 systemd[75922]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  1 12:14:18 np0005464891 systemd[75922]: Started Daily Cleanup of User's Temporary Directories.
Oct  1 12:14:18 np0005464891 systemd[75922]: Reached target Paths.
Oct  1 12:14:18 np0005464891 systemd[75922]: Reached target Timers.
Oct  1 12:14:18 np0005464891 systemd[75922]: Starting D-Bus User Message Bus Socket...
Oct  1 12:14:18 np0005464891 systemd[75922]: Starting Create User's Volatile Files and Directories...
Oct  1 12:14:18 np0005464891 systemd[75922]: Listening on D-Bus User Message Bus Socket.
Oct  1 12:14:18 np0005464891 systemd[75922]: Reached target Sockets.
Oct  1 12:14:18 np0005464891 systemd[75922]: Finished Create User's Volatile Files and Directories.
Oct  1 12:14:18 np0005464891 systemd[75922]: Reached target Basic System.
Oct  1 12:14:18 np0005464891 systemd[1]: Started User Manager for UID 42477.
Oct  1 12:14:18 np0005464891 systemd[75922]: Reached target Main User Target.
Oct  1 12:14:18 np0005464891 systemd[75922]: Startup finished in 177ms.
Oct  1 12:14:18 np0005464891 systemd[1]: Started Session 22 of User ceph-admin.
Oct  1 12:14:18 np0005464891 systemd[1]: Started Session 24 of User ceph-admin.
Oct  1 12:14:18 np0005464891 systemd-logind[801]: New session 25 of user ceph-admin.
Oct  1 12:14:18 np0005464891 systemd[1]: Started Session 25 of User ceph-admin.
Oct  1 12:14:19 np0005464891 systemd-logind[801]: New session 26 of user ceph-admin.
Oct  1 12:14:19 np0005464891 systemd[1]: Started Session 26 of User ceph-admin.
Oct  1 12:14:19 np0005464891 ceph-mgr[74592]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Oct  1 12:14:19 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Oct  1 12:14:19 np0005464891 systemd-logind[801]: New session 27 of user ceph-admin.
Oct  1 12:14:19 np0005464891 systemd[1]: Started Session 27 of User ceph-admin.
Oct  1 12:14:19 np0005464891 ceph-mgr[74592]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  1 12:14:20 np0005464891 systemd-logind[801]: New session 28 of user ceph-admin.
Oct  1 12:14:20 np0005464891 systemd[1]: Started Session 28 of User ceph-admin.
Oct  1 12:14:20 np0005464891 ceph-mon[74303]: Deploying cephadm binary to compute-0
Oct  1 12:14:20 np0005464891 systemd-logind[801]: New session 29 of user ceph-admin.
Oct  1 12:14:20 np0005464891 systemd[1]: Started Session 29 of User ceph-admin.
Oct  1 12:14:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053013 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:14:21 np0005464891 systemd-logind[801]: New session 30 of user ceph-admin.
Oct  1 12:14:21 np0005464891 systemd[1]: Started Session 30 of User ceph-admin.
Oct  1 12:14:21 np0005464891 systemd-logind[801]: New session 31 of user ceph-admin.
Oct  1 12:14:21 np0005464891 systemd[1]: Started Session 31 of User ceph-admin.
Oct  1 12:14:21 np0005464891 ceph-mgr[74592]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  1 12:14:22 np0005464891 systemd-logind[801]: New session 32 of user ceph-admin.
Oct  1 12:14:22 np0005464891 systemd[1]: Started Session 32 of User ceph-admin.
Oct  1 12:14:22 np0005464891 systemd-logind[801]: New session 33 of user ceph-admin.
Oct  1 12:14:22 np0005464891 systemd[1]: Started Session 33 of User ceph-admin.
Oct  1 12:14:23 np0005464891 systemd-logind[801]: New session 34 of user ceph-admin.
Oct  1 12:14:23 np0005464891 systemd[1]: Started Session 34 of User ceph-admin.
Oct  1 12:14:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  1 12:14:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:23 np0005464891 ceph-mgr[74592]: [cephadm INFO root] Added host compute-0
Oct  1 12:14:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  1 12:14:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  1 12:14:23 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Added host compute-0
Oct  1 12:14:23 np0005464891 romantic_joliot[75892]: Added host 'compute-0' with addr '192.168.122.100'
Oct  1 12:14:23 np0005464891 systemd[1]: libpod-5035dc9bb8bfcc71bd1e43f42c27ad5ab8b8c3ccb12328c21409514da593d6d9.scope: Deactivated successfully.
Oct  1 12:14:23 np0005464891 podman[75876]: 2025-10-01 16:14:23.705384275 +0000 UTC m=+6.555636967 container died 5035dc9bb8bfcc71bd1e43f42c27ad5ab8b8c3ccb12328c21409514da593d6d9 (image=quay.io/ceph/ceph:v18, name=romantic_joliot, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 12:14:23 np0005464891 systemd[1]: var-lib-containers-storage-overlay-22df097cf013c2bdc9622eea9a28e39f18b5616dabbe3e7520a431e9fa854789-merged.mount: Deactivated successfully.
Oct  1 12:14:23 np0005464891 podman[75876]: 2025-10-01 16:14:23.763189395 +0000 UTC m=+6.613442087 container remove 5035dc9bb8bfcc71bd1e43f42c27ad5ab8b8c3ccb12328c21409514da593d6d9 (image=quay.io/ceph/ceph:v18, name=romantic_joliot, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 12:14:23 np0005464891 systemd[1]: libpod-conmon-5035dc9bb8bfcc71bd1e43f42c27ad5ab8b8c3ccb12328c21409514da593d6d9.scope: Deactivated successfully.
Oct  1 12:14:23 np0005464891 podman[76562]: 2025-10-01 16:14:23.862560094 +0000 UTC m=+0.067509452 container create e1c6496ddf7c8241cb118c1bb10a0a282be2c16714ad732c3e3633f395851b9f (image=quay.io/ceph/ceph:v18, name=vigorous_torvalds, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 12:14:23 np0005464891 systemd[1]: Started libpod-conmon-e1c6496ddf7c8241cb118c1bb10a0a282be2c16714ad732c3e3633f395851b9f.scope.
Oct  1 12:14:23 np0005464891 ceph-mgr[74592]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  1 12:14:23 np0005464891 podman[76562]: 2025-10-01 16:14:23.834580245 +0000 UTC m=+0.039529653 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:14:23 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:23 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1b7dd74b73f152531915ae579b4b1e9a30c321f0d33d31dabc6bc763f7b4f52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:23 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1b7dd74b73f152531915ae579b4b1e9a30c321f0d33d31dabc6bc763f7b4f52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:23 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1b7dd74b73f152531915ae579b4b1e9a30c321f0d33d31dabc6bc763f7b4f52/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:23 np0005464891 podman[76562]: 2025-10-01 16:14:23.966254183 +0000 UTC m=+0.171203591 container init e1c6496ddf7c8241cb118c1bb10a0a282be2c16714ad732c3e3633f395851b9f (image=quay.io/ceph/ceph:v18, name=vigorous_torvalds, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 12:14:23 np0005464891 podman[76562]: 2025-10-01 16:14:23.981389475 +0000 UTC m=+0.186338803 container start e1c6496ddf7c8241cb118c1bb10a0a282be2c16714ad732c3e3633f395851b9f (image=quay.io/ceph/ceph:v18, name=vigorous_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:14:23 np0005464891 podman[76562]: 2025-10-01 16:14:23.993628946 +0000 UTC m=+0.198578364 container attach e1c6496ddf7c8241cb118c1bb10a0a282be2c16714ad732c3e3633f395851b9f (image=quay.io/ceph/ceph:v18, name=vigorous_torvalds, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:14:24 np0005464891 podman[76704]: 2025-10-01 16:14:24.408357152 +0000 UTC m=+0.061453904 container create 608a821c340917234badcf5f6a9360a9f653794376604d42925f3ea6c68cad52 (image=quay.io/ceph/ceph:v18, name=zen_feistel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:14:24 np0005464891 systemd[1]: Started libpod-conmon-608a821c340917234badcf5f6a9360a9f653794376604d42925f3ea6c68cad52.scope.
Oct  1 12:14:24 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:24 np0005464891 podman[76704]: 2025-10-01 16:14:24.383988762 +0000 UTC m=+0.037085494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:14:24 np0005464891 podman[76704]: 2025-10-01 16:14:24.494829651 +0000 UTC m=+0.147926393 container init 608a821c340917234badcf5f6a9360a9f653794376604d42925f3ea6c68cad52 (image=quay.io/ceph/ceph:v18, name=zen_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 12:14:24 np0005464891 podman[76704]: 2025-10-01 16:14:24.505350093 +0000 UTC m=+0.158446845 container start 608a821c340917234badcf5f6a9360a9f653794376604d42925f3ea6c68cad52 (image=quay.io/ceph/ceph:v18, name=zen_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:14:24 np0005464891 podman[76704]: 2025-10-01 16:14:24.509730916 +0000 UTC m=+0.162827678 container attach 608a821c340917234badcf5f6a9360a9f653794376604d42925f3ea6c68cad52 (image=quay.io/ceph/ceph:v18, name=zen_feistel, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:14:24 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 12:14:24 np0005464891 ceph-mgr[74592]: [cephadm INFO root] Saving service mon spec with placement count:5
Oct  1 12:14:24 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Oct  1 12:14:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct  1 12:14:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:24 np0005464891 vigorous_torvalds[76605]: Scheduled mon update...
Oct  1 12:14:24 np0005464891 systemd[1]: libpod-e1c6496ddf7c8241cb118c1bb10a0a282be2c16714ad732c3e3633f395851b9f.scope: Deactivated successfully.
Oct  1 12:14:24 np0005464891 podman[76562]: 2025-10-01 16:14:24.566507838 +0000 UTC m=+0.771457196 container died e1c6496ddf7c8241cb118c1bb10a0a282be2c16714ad732c3e3633f395851b9f (image=quay.io/ceph/ceph:v18, name=vigorous_torvalds, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 12:14:24 np0005464891 systemd[1]: var-lib-containers-storage-overlay-e1b7dd74b73f152531915ae579b4b1e9a30c321f0d33d31dabc6bc763f7b4f52-merged.mount: Deactivated successfully.
Oct  1 12:14:24 np0005464891 podman[76562]: 2025-10-01 16:14:24.630154451 +0000 UTC m=+0.835103799 container remove e1c6496ddf7c8241cb118c1bb10a0a282be2c16714ad732c3e3633f395851b9f (image=quay.io/ceph/ceph:v18, name=vigorous_torvalds, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  1 12:14:24 np0005464891 systemd[1]: libpod-conmon-e1c6496ddf7c8241cb118c1bb10a0a282be2c16714ad732c3e3633f395851b9f.scope: Deactivated successfully.
Oct  1 12:14:24 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:24 np0005464891 ceph-mon[74303]: Added host compute-0
Oct  1 12:14:24 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:24 np0005464891 podman[76741]: 2025-10-01 16:14:24.723627155 +0000 UTC m=+0.062976935 container create 00ac8791bef8b86058d87fbb4223ab8834222cd62cbf05e2a9b16aae91424592 (image=quay.io/ceph/ceph:v18, name=suspicious_dubinsky, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  1 12:14:24 np0005464891 systemd[1]: Started libpod-conmon-00ac8791bef8b86058d87fbb4223ab8834222cd62cbf05e2a9b16aae91424592.scope.
Oct  1 12:14:24 np0005464891 podman[76741]: 2025-10-01 16:14:24.696783348 +0000 UTC m=+0.036133178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:14:24 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:24 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef841014325dd6293fc1a41dc938bfa5c61ecd412e3b10d87fab9c876a9e645a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:24 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef841014325dd6293fc1a41dc938bfa5c61ecd412e3b10d87fab9c876a9e645a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:24 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef841014325dd6293fc1a41dc938bfa5c61ecd412e3b10d87fab9c876a9e645a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:24 np0005464891 podman[76741]: 2025-10-01 16:14:24.835576704 +0000 UTC m=+0.174926544 container init 00ac8791bef8b86058d87fbb4223ab8834222cd62cbf05e2a9b16aae91424592 (image=quay.io/ceph/ceph:v18, name=suspicious_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 12:14:24 np0005464891 zen_feistel[76720]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct  1 12:14:24 np0005464891 podman[76741]: 2025-10-01 16:14:24.846032906 +0000 UTC m=+0.185382676 container start 00ac8791bef8b86058d87fbb4223ab8834222cd62cbf05e2a9b16aae91424592 (image=quay.io/ceph/ceph:v18, name=suspicious_dubinsky, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:14:24 np0005464891 podman[76741]: 2025-10-01 16:14:24.850376398 +0000 UTC m=+0.189726228 container attach 00ac8791bef8b86058d87fbb4223ab8834222cd62cbf05e2a9b16aae91424592 (image=quay.io/ceph/ceph:v18, name=suspicious_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  1 12:14:24 np0005464891 systemd[1]: libpod-608a821c340917234badcf5f6a9360a9f653794376604d42925f3ea6c68cad52.scope: Deactivated successfully.
Oct  1 12:14:24 np0005464891 podman[76704]: 2025-10-01 16:14:24.86196219 +0000 UTC m=+0.515058922 container died 608a821c340917234badcf5f6a9360a9f653794376604d42925f3ea6c68cad52 (image=quay.io/ceph/ceph:v18, name=zen_feistel, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:14:24 np0005464891 systemd[1]: var-lib-containers-storage-overlay-c47f915bcd02dac227abc566e411aa45752a27985a72ce8fad026aada3936446-merged.mount: Deactivated successfully.
Oct  1 12:14:24 np0005464891 podman[76704]: 2025-10-01 16:14:24.907110168 +0000 UTC m=+0.560206890 container remove 608a821c340917234badcf5f6a9360a9f653794376604d42925f3ea6c68cad52 (image=quay.io/ceph/ceph:v18, name=zen_feistel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:14:24 np0005464891 systemd[1]: libpod-conmon-608a821c340917234badcf5f6a9360a9f653794376604d42925f3ea6c68cad52.scope: Deactivated successfully.
Oct  1 12:14:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Oct  1 12:14:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:25 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 12:14:25 np0005464891 ceph-mgr[74592]: [cephadm INFO root] Saving service mgr spec with placement count:2
Oct  1 12:14:25 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Oct  1 12:14:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  1 12:14:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:25 np0005464891 suspicious_dubinsky[76757]: Scheduled mgr update...
Oct  1 12:14:25 np0005464891 systemd[1]: libpod-00ac8791bef8b86058d87fbb4223ab8834222cd62cbf05e2a9b16aae91424592.scope: Deactivated successfully.
Oct  1 12:14:25 np0005464891 podman[76741]: 2025-10-01 16:14:25.434117362 +0000 UTC m=+0.773467102 container died 00ac8791bef8b86058d87fbb4223ab8834222cd62cbf05e2a9b16aae91424592 (image=quay.io/ceph/ceph:v18, name=suspicious_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:14:25 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ef841014325dd6293fc1a41dc938bfa5c61ecd412e3b10d87fab9c876a9e645a-merged.mount: Deactivated successfully.
Oct  1 12:14:25 np0005464891 podman[76741]: 2025-10-01 16:14:25.495660246 +0000 UTC m=+0.835010026 container remove 00ac8791bef8b86058d87fbb4223ab8834222cd62cbf05e2a9b16aae91424592 (image=quay.io/ceph/ceph:v18, name=suspicious_dubinsky, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 12:14:25 np0005464891 systemd[1]: libpod-conmon-00ac8791bef8b86058d87fbb4223ab8834222cd62cbf05e2a9b16aae91424592.scope: Deactivated successfully.
Oct  1 12:14:25 np0005464891 podman[76914]: 2025-10-01 16:14:25.559874766 +0000 UTC m=+0.043799632 container create 141aa1512052c04b24e5744ab15de33e595386078f633ae8d86ccef1c2a64dff (image=quay.io/ceph/ceph:v18, name=eager_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:14:25 np0005464891 systemd[1]: Started libpod-conmon-141aa1512052c04b24e5744ab15de33e595386078f633ae8d86ccef1c2a64dff.scope.
Oct  1 12:14:25 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:25 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10b27100f40687f08c267e6b5eb689c7b29202e75fbf4cf33cc2b9085bffcc7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:25 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10b27100f40687f08c267e6b5eb689c7b29202e75fbf4cf33cc2b9085bffcc7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:25 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10b27100f40687f08c267e6b5eb689c7b29202e75fbf4cf33cc2b9085bffcc7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:25 np0005464891 podman[76914]: 2025-10-01 16:14:25.538569432 +0000 UTC m=+0.022494318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:14:25 np0005464891 podman[76914]: 2025-10-01 16:14:25.636911521 +0000 UTC m=+0.120836397 container init 141aa1512052c04b24e5744ab15de33e595386078f633ae8d86ccef1c2a64dff (image=quay.io/ceph/ceph:v18, name=eager_almeida, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 12:14:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:14:25 np0005464891 podman[76914]: 2025-10-01 16:14:25.642502787 +0000 UTC m=+0.126427653 container start 141aa1512052c04b24e5744ab15de33e595386078f633ae8d86ccef1c2a64dff (image=quay.io/ceph/ceph:v18, name=eager_almeida, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:14:25 np0005464891 podman[76914]: 2025-10-01 16:14:25.646212671 +0000 UTC m=+0.130137527 container attach 141aa1512052c04b24e5744ab15de33e595386078f633ae8d86ccef1c2a64dff (image=quay.io/ceph/ceph:v18, name=eager_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 12:14:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:25 np0005464891 ceph-mon[74303]: Saving service mon spec with placement count:5
Oct  1 12:14:25 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:25 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:25 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:14:25 np0005464891 ceph-mgr[74592]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  1 12:14:26 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 12:14:26 np0005464891 ceph-mgr[74592]: [cephadm INFO root] Saving service crash spec with placement *
Oct  1 12:14:26 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Oct  1 12:14:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct  1 12:14:26 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:26 np0005464891 eager_almeida[76941]: Scheduled crash update...
Oct  1 12:14:26 np0005464891 systemd[1]: libpod-141aa1512052c04b24e5744ab15de33e595386078f633ae8d86ccef1c2a64dff.scope: Deactivated successfully.
Oct  1 12:14:26 np0005464891 podman[76914]: 2025-10-01 16:14:26.201192864 +0000 UTC m=+0.685117750 container died 141aa1512052c04b24e5744ab15de33e595386078f633ae8d86ccef1c2a64dff (image=quay.io/ceph/ceph:v18, name=eager_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:14:26 np0005464891 systemd[1]: var-lib-containers-storage-overlay-d10b27100f40687f08c267e6b5eb689c7b29202e75fbf4cf33cc2b9085bffcc7-merged.mount: Deactivated successfully.
Oct  1 12:14:26 np0005464891 podman[76914]: 2025-10-01 16:14:26.265483616 +0000 UTC m=+0.749408462 container remove 141aa1512052c04b24e5744ab15de33e595386078f633ae8d86ccef1c2a64dff (image=quay.io/ceph/ceph:v18, name=eager_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  1 12:14:26 np0005464891 systemd[1]: libpod-conmon-141aa1512052c04b24e5744ab15de33e595386078f633ae8d86ccef1c2a64dff.scope: Deactivated successfully.
Oct  1 12:14:26 np0005464891 podman[77122]: 2025-10-01 16:14:26.32987204 +0000 UTC m=+0.040707556 container create 6af8713a9c3d62515e994f2913e75ea961ab30a277931dc5d3b18aa8d9a6ec9a (image=quay.io/ceph/ceph:v18, name=confident_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 12:14:26 np0005464891 systemd[1]: Started libpod-conmon-6af8713a9c3d62515e994f2913e75ea961ab30a277931dc5d3b18aa8d9a6ec9a.scope.
Oct  1 12:14:26 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf8a38c1bdec8033a43dff933b5742c75b6ec80f828fe508aaaa04d481dacdcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf8a38c1bdec8033a43dff933b5742c75b6ec80f828fe508aaaa04d481dacdcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf8a38c1bdec8033a43dff933b5742c75b6ec80f828fe508aaaa04d481dacdcd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:26 np0005464891 podman[77122]: 2025-10-01 16:14:26.314660395 +0000 UTC m=+0.025495931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:14:26 np0005464891 podman[77122]: 2025-10-01 16:14:26.430347469 +0000 UTC m=+0.141183015 container init 6af8713a9c3d62515e994f2913e75ea961ab30a277931dc5d3b18aa8d9a6ec9a (image=quay.io/ceph/ceph:v18, name=confident_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 12:14:26 np0005464891 podman[77122]: 2025-10-01 16:14:26.441002375 +0000 UTC m=+0.151837891 container start 6af8713a9c3d62515e994f2913e75ea961ab30a277931dc5d3b18aa8d9a6ec9a (image=quay.io/ceph/ceph:v18, name=confident_elgamal, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 12:14:26 np0005464891 podman[77122]: 2025-10-01 16:14:26.445587423 +0000 UTC m=+0.156422969 container attach 6af8713a9c3d62515e994f2913e75ea961ab30a277931dc5d3b18aa8d9a6ec9a (image=quay.io/ceph/ceph:v18, name=confident_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 12:14:26 np0005464891 podman[77170]: 2025-10-01 16:14:26.555076854 +0000 UTC m=+0.079308930 container exec 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 12:14:26 np0005464891 ceph-mon[74303]: Saving service mgr spec with placement count:2
Oct  1 12:14:26 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:26 np0005464891 podman[77170]: 2025-10-01 16:14:26.866907152 +0000 UTC m=+0.391139218 container exec_died 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 12:14:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:14:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Oct  1 12:14:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2998757177' entity='client.admin' 
Oct  1 12:14:27 np0005464891 systemd[1]: libpod-6af8713a9c3d62515e994f2913e75ea961ab30a277931dc5d3b18aa8d9a6ec9a.scope: Deactivated successfully.
Oct  1 12:14:27 np0005464891 podman[77122]: 2025-10-01 16:14:27.092819737 +0000 UTC m=+0.803655293 container died 6af8713a9c3d62515e994f2913e75ea961ab30a277931dc5d3b18aa8d9a6ec9a (image=quay.io/ceph/ceph:v18, name=confident_elgamal, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 12:14:27 np0005464891 systemd[1]: var-lib-containers-storage-overlay-bf8a38c1bdec8033a43dff933b5742c75b6ec80f828fe508aaaa04d481dacdcd-merged.mount: Deactivated successfully.
Oct  1 12:14:27 np0005464891 podman[77122]: 2025-10-01 16:14:27.156972364 +0000 UTC m=+0.867807880 container remove 6af8713a9c3d62515e994f2913e75ea961ab30a277931dc5d3b18aa8d9a6ec9a (image=quay.io/ceph/ceph:v18, name=confident_elgamal, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:14:27 np0005464891 systemd[1]: libpod-conmon-6af8713a9c3d62515e994f2913e75ea961ab30a277931dc5d3b18aa8d9a6ec9a.scope: Deactivated successfully.
Oct  1 12:14:27 np0005464891 podman[77289]: 2025-10-01 16:14:27.22177637 +0000 UTC m=+0.045333594 container create 79356e768aae936cf2aa5ccca43a3cca8ec79390c7970b82eb94dfaf8574f3d5 (image=quay.io/ceph/ceph:v18, name=crazy_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 12:14:27 np0005464891 systemd[1]: Started libpod-conmon-79356e768aae936cf2aa5ccca43a3cca8ec79390c7970b82eb94dfaf8574f3d5.scope.
Oct  1 12:14:27 np0005464891 podman[77289]: 2025-10-01 16:14:27.198111931 +0000 UTC m=+0.021669165 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:14:27 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:27 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a79398c9100c156a8e5ca803ac30915479814184f6ac120916c1c8e9866c7d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:27 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a79398c9100c156a8e5ca803ac30915479814184f6ac120916c1c8e9866c7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:27 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a79398c9100c156a8e5ca803ac30915479814184f6ac120916c1c8e9866c7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:27 np0005464891 podman[77289]: 2025-10-01 16:14:27.318233328 +0000 UTC m=+0.141790552 container init 79356e768aae936cf2aa5ccca43a3cca8ec79390c7970b82eb94dfaf8574f3d5 (image=quay.io/ceph/ceph:v18, name=crazy_solomon, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:14:27 np0005464891 podman[77289]: 2025-10-01 16:14:27.329401998 +0000 UTC m=+0.152959192 container start 79356e768aae936cf2aa5ccca43a3cca8ec79390c7970b82eb94dfaf8574f3d5 (image=quay.io/ceph/ceph:v18, name=crazy_solomon, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:14:27 np0005464891 podman[77289]: 2025-10-01 16:14:27.336290301 +0000 UTC m=+0.159847585 container attach 79356e768aae936cf2aa5ccca43a3cca8ec79390c7970b82eb94dfaf8574f3d5 (image=quay.io/ceph/ceph:v18, name=crazy_solomon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 12:14:27 np0005464891 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77391 (sysctl)
Oct  1 12:14:27 np0005464891 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct  1 12:14:27 np0005464891 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct  1 12:14:27 np0005464891 ceph-mon[74303]: Saving service crash spec with placement *
Oct  1 12:14:27 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:27 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/2998757177' entity='client.admin' 
Oct  1 12:14:27 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 12:14:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Oct  1 12:14:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:27 np0005464891 systemd[1]: libpod-79356e768aae936cf2aa5ccca43a3cca8ec79390c7970b82eb94dfaf8574f3d5.scope: Deactivated successfully.
Oct  1 12:14:27 np0005464891 podman[77289]: 2025-10-01 16:14:27.885991296 +0000 UTC m=+0.709548490 container died 79356e768aae936cf2aa5ccca43a3cca8ec79390c7970b82eb94dfaf8574f3d5 (image=quay.io/ceph/ceph:v18, name=crazy_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:14:27 np0005464891 systemd[1]: var-lib-containers-storage-overlay-95a79398c9100c156a8e5ca803ac30915479814184f6ac120916c1c8e9866c7d-merged.mount: Deactivated successfully.
Oct  1 12:14:27 np0005464891 ceph-mgr[74592]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  1 12:14:27 np0005464891 podman[77289]: 2025-10-01 16:14:27.934593031 +0000 UTC m=+0.758150225 container remove 79356e768aae936cf2aa5ccca43a3cca8ec79390c7970b82eb94dfaf8574f3d5 (image=quay.io/ceph/ceph:v18, name=crazy_solomon, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:14:27 np0005464891 systemd[1]: libpod-conmon-79356e768aae936cf2aa5ccca43a3cca8ec79390c7970b82eb94dfaf8574f3d5.scope: Deactivated successfully.
Oct  1 12:14:28 np0005464891 podman[77445]: 2025-10-01 16:14:28.004808448 +0000 UTC m=+0.044332417 container create 7103dd73bf02c716df9191ddcc3a4110be97ca20cd211b94e2cf505be2bbad65 (image=quay.io/ceph/ceph:v18, name=suspicious_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 12:14:28 np0005464891 systemd[1]: Started libpod-conmon-7103dd73bf02c716df9191ddcc3a4110be97ca20cd211b94e2cf505be2bbad65.scope.
Oct  1 12:14:28 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:28 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/665e2519b9c2af12bb074eb68558094949d7ba1c6166ebd12ecf4f475e284ee6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:28 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/665e2519b9c2af12bb074eb68558094949d7ba1c6166ebd12ecf4f475e284ee6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:28 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/665e2519b9c2af12bb074eb68558094949d7ba1c6166ebd12ecf4f475e284ee6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:28 np0005464891 podman[77445]: 2025-10-01 16:14:27.986761624 +0000 UTC m=+0.026285573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:14:28 np0005464891 podman[77445]: 2025-10-01 16:14:28.106661555 +0000 UTC m=+0.146185534 container init 7103dd73bf02c716df9191ddcc3a4110be97ca20cd211b94e2cf505be2bbad65 (image=quay.io/ceph/ceph:v18, name=suspicious_taussig, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:14:28 np0005464891 podman[77445]: 2025-10-01 16:14:28.121056776 +0000 UTC m=+0.160580765 container start 7103dd73bf02c716df9191ddcc3a4110be97ca20cd211b94e2cf505be2bbad65 (image=quay.io/ceph/ceph:v18, name=suspicious_taussig, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:14:28 np0005464891 podman[77445]: 2025-10-01 16:14:28.12513791 +0000 UTC m=+0.164661949 container attach 7103dd73bf02c716df9191ddcc3a4110be97ca20cd211b94e2cf505be2bbad65 (image=quay.io/ceph/ceph:v18, name=suspicious_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:14:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:14:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:28 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 12:14:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  1 12:14:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:28 np0005464891 ceph-mgr[74592]: [cephadm INFO root] Added label _admin to host compute-0
Oct  1 12:14:28 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Oct  1 12:14:28 np0005464891 suspicious_taussig[77484]: Added label _admin to host compute-0
Oct  1 12:14:28 np0005464891 systemd[1]: libpod-7103dd73bf02c716df9191ddcc3a4110be97ca20cd211b94e2cf505be2bbad65.scope: Deactivated successfully.
Oct  1 12:14:28 np0005464891 podman[77445]: 2025-10-01 16:14:28.727804242 +0000 UTC m=+0.767328191 container died 7103dd73bf02c716df9191ddcc3a4110be97ca20cd211b94e2cf505be2bbad65 (image=quay.io/ceph/ceph:v18, name=suspicious_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:14:28 np0005464891 systemd[1]: var-lib-containers-storage-overlay-665e2519b9c2af12bb074eb68558094949d7ba1c6166ebd12ecf4f475e284ee6-merged.mount: Deactivated successfully.
Oct  1 12:14:28 np0005464891 podman[77445]: 2025-10-01 16:14:28.76904649 +0000 UTC m=+0.808570439 container remove 7103dd73bf02c716df9191ddcc3a4110be97ca20cd211b94e2cf505be2bbad65 (image=quay.io/ceph/ceph:v18, name=suspicious_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 12:14:28 np0005464891 systemd[1]: libpod-conmon-7103dd73bf02c716df9191ddcc3a4110be97ca20cd211b94e2cf505be2bbad65.scope: Deactivated successfully.
Oct  1 12:14:28 np0005464891 podman[77690]: 2025-10-01 16:14:28.840162012 +0000 UTC m=+0.042979859 container create 7febe076f77ac8191d97a32bdacbdf24900250c383b5d1760f022274bf41a6dd (image=quay.io/ceph/ceph:v18, name=elastic_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  1 12:14:28 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:28 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:28 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:28 np0005464891 ceph-mon[74303]: Added label _admin to host compute-0
Oct  1 12:14:28 np0005464891 systemd[1]: Started libpod-conmon-7febe076f77ac8191d97a32bdacbdf24900250c383b5d1760f022274bf41a6dd.scope.
Oct  1 12:14:28 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:28 np0005464891 podman[77690]: 2025-10-01 16:14:28.825210765 +0000 UTC m=+0.028028642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:14:28 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d31dfd7eb1dcd5fb161e126fcdda2ec11cb12d64258ee0aa6c8990255c34273/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:28 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d31dfd7eb1dcd5fb161e126fcdda2ec11cb12d64258ee0aa6c8990255c34273/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:28 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d31dfd7eb1dcd5fb161e126fcdda2ec11cb12d64258ee0aa6c8990255c34273/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:28 np0005464891 podman[77690]: 2025-10-01 16:14:28.938544713 +0000 UTC m=+0.141362590 container init 7febe076f77ac8191d97a32bdacbdf24900250c383b5d1760f022274bf41a6dd (image=quay.io/ceph/ceph:v18, name=elastic_boyd, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:14:28 np0005464891 podman[77690]: 2025-10-01 16:14:28.950148347 +0000 UTC m=+0.152966194 container start 7febe076f77ac8191d97a32bdacbdf24900250c383b5d1760f022274bf41a6dd (image=quay.io/ceph/ceph:v18, name=elastic_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 12:14:28 np0005464891 podman[77690]: 2025-10-01 16:14:28.953575502 +0000 UTC m=+0.156393369 container attach 7febe076f77ac8191d97a32bdacbdf24900250c383b5d1760f022274bf41a6dd (image=quay.io/ceph/ceph:v18, name=elastic_boyd, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 12:14:29 np0005464891 podman[77776]: 2025-10-01 16:14:29.248560841 +0000 UTC m=+0.067603975 container create 41148b31b2d2aa14fef26c411f2c8fe321e08f028ade4ef1ad365f6cd1bdb7ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_spence, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:14:29 np0005464891 systemd[1]: Started libpod-conmon-41148b31b2d2aa14fef26c411f2c8fe321e08f028ade4ef1ad365f6cd1bdb7ac.scope.
Oct  1 12:14:29 np0005464891 podman[77776]: 2025-10-01 16:14:29.222421103 +0000 UTC m=+0.041464267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:14:29 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:29 np0005464891 podman[77776]: 2025-10-01 16:14:29.320797383 +0000 UTC m=+0.139840527 container init 41148b31b2d2aa14fef26c411f2c8fe321e08f028ade4ef1ad365f6cd1bdb7ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_spence, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:14:29 np0005464891 podman[77776]: 2025-10-01 16:14:29.32567496 +0000 UTC m=+0.144718084 container start 41148b31b2d2aa14fef26c411f2c8fe321e08f028ade4ef1ad365f6cd1bdb7ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_spence, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 12:14:29 np0005464891 podman[77776]: 2025-10-01 16:14:29.328388365 +0000 UTC m=+0.147431539 container attach 41148b31b2d2aa14fef26c411f2c8fe321e08f028ade4ef1ad365f6cd1bdb7ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_spence, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:14:29 np0005464891 nice_spence[77808]: 167 167
Oct  1 12:14:29 np0005464891 systemd[1]: libpod-41148b31b2d2aa14fef26c411f2c8fe321e08f028ade4ef1ad365f6cd1bdb7ac.scope: Deactivated successfully.
Oct  1 12:14:29 np0005464891 podman[77776]: 2025-10-01 16:14:29.3307176 +0000 UTC m=+0.149760764 container died 41148b31b2d2aa14fef26c411f2c8fe321e08f028ade4ef1ad365f6cd1bdb7ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_spence, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 12:14:29 np0005464891 systemd[1]: var-lib-containers-storage-overlay-dbf9b144834d5bd7cbcea7143607f90e717e602e9241a9234472015c0abf564e-merged.mount: Deactivated successfully.
Oct  1 12:14:29 np0005464891 podman[77776]: 2025-10-01 16:14:29.360572222 +0000 UTC m=+0.179615376 container remove 41148b31b2d2aa14fef26c411f2c8fe321e08f028ade4ef1ad365f6cd1bdb7ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 12:14:29 np0005464891 systemd[1]: libpod-conmon-41148b31b2d2aa14fef26c411f2c8fe321e08f028ade4ef1ad365f6cd1bdb7ac.scope: Deactivated successfully.
Oct  1 12:14:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Oct  1 12:14:29 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1253444748' entity='client.admin' 
Oct  1 12:14:29 np0005464891 systemd[1]: libpod-7febe076f77ac8191d97a32bdacbdf24900250c383b5d1760f022274bf41a6dd.scope: Deactivated successfully.
Oct  1 12:14:29 np0005464891 podman[77690]: 2025-10-01 16:14:29.513176194 +0000 UTC m=+0.715994141 container died 7febe076f77ac8191d97a32bdacbdf24900250c383b5d1760f022274bf41a6dd (image=quay.io/ceph/ceph:v18, name=elastic_boyd, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:14:29 np0005464891 systemd[1]: var-lib-containers-storage-overlay-1d31dfd7eb1dcd5fb161e126fcdda2ec11cb12d64258ee0aa6c8990255c34273-merged.mount: Deactivated successfully.
Oct  1 12:14:29 np0005464891 podman[77690]: 2025-10-01 16:14:29.566035027 +0000 UTC m=+0.768852914 container remove 7febe076f77ac8191d97a32bdacbdf24900250c383b5d1760f022274bf41a6dd (image=quay.io/ceph/ceph:v18, name=elastic_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  1 12:14:29 np0005464891 systemd[1]: libpod-conmon-7febe076f77ac8191d97a32bdacbdf24900250c383b5d1760f022274bf41a6dd.scope: Deactivated successfully.
Oct  1 12:14:29 np0005464891 podman[77843]: 2025-10-01 16:14:29.641936031 +0000 UTC m=+0.051391592 container create 1c99ea421a439faa0adb9e4e15bccb87ae180f8165b10294cb284d8472a1ce0a (image=quay.io/ceph/ceph:v18, name=loving_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:14:29 np0005464891 systemd[1]: Started libpod-conmon-1c99ea421a439faa0adb9e4e15bccb87ae180f8165b10294cb284d8472a1ce0a.scope.
Oct  1 12:14:29 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:29 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21393c783cec92c7b1ad18c97a7dab9a7f67f4082bef6de762eb422ecbc9b01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:29 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21393c783cec92c7b1ad18c97a7dab9a7f67f4082bef6de762eb422ecbc9b01/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:29 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21393c783cec92c7b1ad18c97a7dab9a7f67f4082bef6de762eb422ecbc9b01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:29 np0005464891 podman[77843]: 2025-10-01 16:14:29.61675887 +0000 UTC m=+0.026214481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:14:29 np0005464891 podman[77843]: 2025-10-01 16:14:29.727955538 +0000 UTC m=+0.137411069 container init 1c99ea421a439faa0adb9e4e15bccb87ae180f8165b10294cb284d8472a1ce0a (image=quay.io/ceph/ceph:v18, name=loving_darwin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  1 12:14:29 np0005464891 podman[77843]: 2025-10-01 16:14:29.741024032 +0000 UTC m=+0.150479583 container start 1c99ea421a439faa0adb9e4e15bccb87ae180f8165b10294cb284d8472a1ce0a (image=quay.io/ceph/ceph:v18, name=loving_darwin, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  1 12:14:29 np0005464891 podman[77843]: 2025-10-01 16:14:29.744709925 +0000 UTC m=+0.154165446 container attach 1c99ea421a439faa0adb9e4e15bccb87ae180f8165b10294cb284d8472a1ce0a (image=quay.io/ceph/ceph:v18, name=loving_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 12:14:29 np0005464891 ceph-mgr[74592]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  1 12:14:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Oct  1 12:14:30 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4294619916' entity='client.admin' 
Oct  1 12:14:30 np0005464891 loving_darwin[77861]: set mgr/dashboard/cluster/status
Oct  1 12:14:30 np0005464891 systemd[1]: libpod-1c99ea421a439faa0adb9e4e15bccb87ae180f8165b10294cb284d8472a1ce0a.scope: Deactivated successfully.
Oct  1 12:14:30 np0005464891 podman[77887]: 2025-10-01 16:14:30.418002854 +0000 UTC m=+0.026949912 container died 1c99ea421a439faa0adb9e4e15bccb87ae180f8165b10294cb284d8472a1ce0a (image=quay.io/ceph/ceph:v18, name=loving_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 12:14:30 np0005464891 systemd[1]: var-lib-containers-storage-overlay-d21393c783cec92c7b1ad18c97a7dab9a7f67f4082bef6de762eb422ecbc9b01-merged.mount: Deactivated successfully.
Oct  1 12:14:30 np0005464891 podman[77887]: 2025-10-01 16:14:30.472639207 +0000 UTC m=+0.081586195 container remove 1c99ea421a439faa0adb9e4e15bccb87ae180f8165b10294cb284d8472a1ce0a (image=quay.io/ceph/ceph:v18, name=loving_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 12:14:30 np0005464891 systemd[1]: libpod-conmon-1c99ea421a439faa0adb9e4e15bccb87ae180f8165b10294cb284d8472a1ce0a.scope: Deactivated successfully.
Oct  1 12:14:30 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/1253444748' entity='client.admin' 
Oct  1 12:14:30 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/4294619916' entity='client.admin' 
Oct  1 12:14:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:14:30 np0005464891 podman[77909]: 2025-10-01 16:14:30.753605855 +0000 UTC m=+0.061789392 container create 8d31ebb162dc03869cc3be6672030eccb94647d88f871acba6f5e9c98b170cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:14:30 np0005464891 systemd[1]: Started libpod-conmon-8d31ebb162dc03869cc3be6672030eccb94647d88f871acba6f5e9c98b170cdf.scope.
Oct  1 12:14:30 np0005464891 podman[77909]: 2025-10-01 16:14:30.729418161 +0000 UTC m=+0.037601768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:14:30 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:30 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b28b7412d735ee3c63373b146cfec3a7a361603a2b1d1a6cc55278bf8f8956/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:30 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b28b7412d735ee3c63373b146cfec3a7a361603a2b1d1a6cc55278bf8f8956/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:30 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b28b7412d735ee3c63373b146cfec3a7a361603a2b1d1a6cc55278bf8f8956/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:30 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b28b7412d735ee3c63373b146cfec3a7a361603a2b1d1a6cc55278bf8f8956/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:30 np0005464891 podman[77909]: 2025-10-01 16:14:30.868199858 +0000 UTC m=+0.176383475 container init 8d31ebb162dc03869cc3be6672030eccb94647d88f871acba6f5e9c98b170cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_elbakyan, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:14:30 np0005464891 podman[77909]: 2025-10-01 16:14:30.880046728 +0000 UTC m=+0.188230295 container start 8d31ebb162dc03869cc3be6672030eccb94647d88f871acba6f5e9c98b170cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_elbakyan, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:14:30 np0005464891 podman[77909]: 2025-10-01 16:14:30.885103399 +0000 UTC m=+0.193287006 container attach 8d31ebb162dc03869cc3be6672030eccb94647d88f871acba6f5e9c98b170cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  1 12:14:31 np0005464891 python3[77955]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:14:31 np0005464891 podman[77956]: 2025-10-01 16:14:31.120661502 +0000 UTC m=+0.070456803 container create 7d8ca8e209bbd1da828c3ea45f2a48b8787a9a94d9c3a41166918344891d8a56 (image=quay.io/ceph/ceph:v18, name=hopeful_hodgkin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 12:14:31 np0005464891 systemd[1]: Started libpod-conmon-7d8ca8e209bbd1da828c3ea45f2a48b8787a9a94d9c3a41166918344891d8a56.scope.
Oct  1 12:14:31 np0005464891 podman[77956]: 2025-10-01 16:14:31.091808779 +0000 UTC m=+0.041604140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:14:31 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:31 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd433f0f14a86460bf74be5ada6a66566dc6403850888399c02d639a5de4d88f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:31 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd433f0f14a86460bf74be5ada6a66566dc6403850888399c02d639a5de4d88f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:31 np0005464891 podman[77956]: 2025-10-01 16:14:31.215826034 +0000 UTC m=+0.165621375 container init 7d8ca8e209bbd1da828c3ea45f2a48b8787a9a94d9c3a41166918344891d8a56 (image=quay.io/ceph/ceph:v18, name=hopeful_hodgkin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:14:31 np0005464891 podman[77956]: 2025-10-01 16:14:31.228542478 +0000 UTC m=+0.178337789 container start 7d8ca8e209bbd1da828c3ea45f2a48b8787a9a94d9c3a41166918344891d8a56 (image=quay.io/ceph/ceph:v18, name=hopeful_hodgkin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  1 12:14:31 np0005464891 podman[77956]: 2025-10-01 16:14:31.232777436 +0000 UTC m=+0.182572777 container attach 7d8ca8e209bbd1da828c3ea45f2a48b8787a9a94d9c3a41166918344891d8a56 (image=quay.io/ceph/ceph:v18, name=hopeful_hodgkin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct  1 12:14:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Oct  1 12:14:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2455376311' entity='client.admin' 
Oct  1 12:14:31 np0005464891 systemd[1]: libpod-7d8ca8e209bbd1da828c3ea45f2a48b8787a9a94d9c3a41166918344891d8a56.scope: Deactivated successfully.
Oct  1 12:14:31 np0005464891 podman[77956]: 2025-10-01 16:14:31.777143523 +0000 UTC m=+0.726938834 container died 7d8ca8e209bbd1da828c3ea45f2a48b8787a9a94d9c3a41166918344891d8a56 (image=quay.io/ceph/ceph:v18, name=hopeful_hodgkin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 12:14:31 np0005464891 systemd[1]: var-lib-containers-storage-overlay-fd433f0f14a86460bf74be5ada6a66566dc6403850888399c02d639a5de4d88f-merged.mount: Deactivated successfully.
Oct  1 12:14:31 np0005464891 podman[77956]: 2025-10-01 16:14:31.822303361 +0000 UTC m=+0.772098642 container remove 7d8ca8e209bbd1da828c3ea45f2a48b8787a9a94d9c3a41166918344891d8a56 (image=quay.io/ceph/ceph:v18, name=hopeful_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:14:31 np0005464891 systemd[1]: libpod-conmon-7d8ca8e209bbd1da828c3ea45f2a48b8787a9a94d9c3a41166918344891d8a56.scope: Deactivated successfully.
Oct  1 12:14:31 np0005464891 ceph-mgr[74592]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Oct  1 12:14:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:14:31 np0005464891 ceph-mon[74303]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]: [
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:    {
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:        "available": false,
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:        "ceph_device": false,
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:        "lsm_data": {},
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:        "lvs": [],
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:        "path": "/dev/sr0",
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:        "rejected_reasons": [
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "Insufficient space (<5GB)",
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "Has a FileSystem"
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:        ],
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:        "sys_api": {
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "actuators": null,
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "device_nodes": "sr0",
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "devname": "sr0",
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "human_readable_size": "482.00 KB",
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "id_bus": "ata",
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "model": "QEMU DVD-ROM",
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "nr_requests": "2",
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "parent": "/dev/sr0",
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "partitions": {},
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "path": "/dev/sr0",
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "removable": "1",
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "rev": "2.5+",
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "ro": "0",
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "rotational": "0",
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "sas_address": "",
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "sas_device_handle": "",
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "scheduler_mode": "mq-deadline",
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "sectors": 0,
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "sectorsize": "2048",
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "size": 493568.0,
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "support_discard": "2048",
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "type": "disk",
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:            "vendor": "QEMU"
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:        }
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]:    }
Oct  1 12:14:32 np0005464891 quizzical_elbakyan[77928]: ]
Oct  1 12:14:32 np0005464891 systemd[1]: libpod-8d31ebb162dc03869cc3be6672030eccb94647d88f871acba6f5e9c98b170cdf.scope: Deactivated successfully.
Oct  1 12:14:32 np0005464891 podman[77909]: 2025-10-01 16:14:32.38753796 +0000 UTC m=+1.695721557 container died 8d31ebb162dc03869cc3be6672030eccb94647d88f871acba6f5e9c98b170cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:14:32 np0005464891 systemd[1]: libpod-8d31ebb162dc03869cc3be6672030eccb94647d88f871acba6f5e9c98b170cdf.scope: Consumed 1.536s CPU time.
Oct  1 12:14:32 np0005464891 systemd[1]: var-lib-containers-storage-overlay-10b28b7412d735ee3c63373b146cfec3a7a361603a2b1d1a6cc55278bf8f8956-merged.mount: Deactivated successfully.
Oct  1 12:14:32 np0005464891 podman[77909]: 2025-10-01 16:14:32.443091188 +0000 UTC m=+1.751274735 container remove 8d31ebb162dc03869cc3be6672030eccb94647d88f871acba6f5e9c98b170cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 12:14:32 np0005464891 systemd[1]: libpod-conmon-8d31ebb162dc03869cc3be6672030eccb94647d88f871acba6f5e9c98b170cdf.scope: Deactivated successfully.
Oct  1 12:14:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:14:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:14:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:14:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:14:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  1 12:14:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 12:14:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:14:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:14:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:14:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:14:32 np0005464891 ceph-mgr[74592]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct  1 12:14:32 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct  1 12:14:32 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/2455376311' entity='client.admin' 
Oct  1 12:14:32 np0005464891 ceph-mon[74303]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct  1 12:14:32 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:32 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:32 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:32 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:32 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 12:14:32 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:14:32 np0005464891 ansible-async_wrapper.py[80106]: Invoked with j282760470739 30 /home/zuul/.ansible/tmp/ansible-tmp-1759335272.1719272-33198-115846474384044/AnsiballZ_command.py _
Oct  1 12:14:32 np0005464891 ansible-async_wrapper.py[80154]: Starting module and watcher
Oct  1 12:14:32 np0005464891 ansible-async_wrapper.py[80154]: Start watching 80157 (30)
Oct  1 12:14:32 np0005464891 ansible-async_wrapper.py[80157]: Start module (80157)
Oct  1 12:14:32 np0005464891 ansible-async_wrapper.py[80106]: Return async_wrapper task started.
Oct  1 12:14:33 np0005464891 python3[80161]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:14:33 np0005464891 podman[80195]: 2025-10-01 16:14:33.078233145 +0000 UTC m=+0.057374900 container create 47cee332659f8e8f647fb341ea61838b68e8998da5ab14d7df55043492f62ef3 (image=quay.io/ceph/ceph:v18, name=infallible_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:14:33 np0005464891 systemd[1]: Started libpod-conmon-47cee332659f8e8f647fb341ea61838b68e8998da5ab14d7df55043492f62ef3.scope.
Oct  1 12:14:33 np0005464891 podman[80195]: 2025-10-01 16:14:33.050736609 +0000 UTC m=+0.029878414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:14:33 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:33 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde58b9cb55e95d6394f2fa10911f134ebd4a952f055775364dd4b16ed02c98c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:33 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde58b9cb55e95d6394f2fa10911f134ebd4a952f055775364dd4b16ed02c98c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:33 np0005464891 podman[80195]: 2025-10-01 16:14:33.192001805 +0000 UTC m=+0.171143610 container init 47cee332659f8e8f647fb341ea61838b68e8998da5ab14d7df55043492f62ef3 (image=quay.io/ceph/ceph:v18, name=infallible_nobel, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:14:33 np0005464891 podman[80195]: 2025-10-01 16:14:33.198682771 +0000 UTC m=+0.177824526 container start 47cee332659f8e8f647fb341ea61838b68e8998da5ab14d7df55043492f62ef3 (image=quay.io/ceph/ceph:v18, name=infallible_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 12:14:33 np0005464891 podman[80195]: 2025-10-01 16:14:33.202668622 +0000 UTC m=+0.181810387 container attach 47cee332659f8e8f647fb341ea61838b68e8998da5ab14d7df55043492f62ef3 (image=quay.io/ceph/ceph:v18, name=infallible_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 12:14:33 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  1 12:14:33 np0005464891 infallible_nobel[80252]: 
Oct  1 12:14:33 np0005464891 infallible_nobel[80252]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  1 12:14:33 np0005464891 ceph-mon[74303]: Updating compute-0:/etc/ceph/ceph.conf
Oct  1 12:14:33 np0005464891 systemd[1]: libpod-47cee332659f8e8f647fb341ea61838b68e8998da5ab14d7df55043492f62ef3.scope: Deactivated successfully.
Oct  1 12:14:33 np0005464891 podman[80195]: 2025-10-01 16:14:33.770541345 +0000 UTC m=+0.749683100 container died 47cee332659f8e8f647fb341ea61838b68e8998da5ab14d7df55043492f62ef3 (image=quay.io/ceph/ceph:v18, name=infallible_nobel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 12:14:33 np0005464891 systemd[1]: var-lib-containers-storage-overlay-bde58b9cb55e95d6394f2fa10911f134ebd4a952f055775364dd4b16ed02c98c-merged.mount: Deactivated successfully.
Oct  1 12:14:33 np0005464891 podman[80195]: 2025-10-01 16:14:33.827596344 +0000 UTC m=+0.806738079 container remove 47cee332659f8e8f647fb341ea61838b68e8998da5ab14d7df55043492f62ef3 (image=quay.io/ceph/ceph:v18, name=infallible_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 12:14:33 np0005464891 systemd[1]: libpod-conmon-47cee332659f8e8f647fb341ea61838b68e8998da5ab14d7df55043492f62ef3.scope: Deactivated successfully.
Oct  1 12:14:33 np0005464891 ansible-async_wrapper.py[80157]: Module complete (80157)
Oct  1 12:14:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:14:34 np0005464891 ceph-mgr[74592]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5/config/ceph.conf
Oct  1 12:14:34 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5/config/ceph.conf
Oct  1 12:14:34 np0005464891 python3[80634]: ansible-ansible.legacy.async_status Invoked with jid=j282760470739.80106 mode=status _async_dir=/root/.ansible_async
Oct  1 12:14:34 np0005464891 python3[80788]: ansible-ansible.legacy.async_status Invoked with jid=j282760470739.80106 mode=cleanup _async_dir=/root/.ansible_async
Oct  1 12:14:35 np0005464891 python3[80960]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:14:35 np0005464891 ceph-mgr[74592]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  1 12:14:35 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  1 12:14:35 np0005464891 python3[81164]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:14:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:14:35 np0005464891 podman[81221]: 2025-10-01 16:14:35.762855195 +0000 UTC m=+0.053587504 container create fcbcf444a1ad65de7834c95fee004f804a1f696f0acb3c3a39cf691d936a5b48 (image=quay.io/ceph/ceph:v18, name=gallant_williams, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:14:35 np0005464891 ceph-mon[74303]: Updating compute-0:/var/lib/ceph/6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5/config/ceph.conf
Oct  1 12:14:35 np0005464891 systemd[1]: Started libpod-conmon-fcbcf444a1ad65de7834c95fee004f804a1f696f0acb3c3a39cf691d936a5b48.scope.
Oct  1 12:14:35 np0005464891 podman[81221]: 2025-10-01 16:14:35.740732548 +0000 UTC m=+0.031464867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:14:35 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:35 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a849f60cd8df9536f7ac59ed8deb6016cbacbaefd657d66eb80c63b65bd2868/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:35 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a849f60cd8df9536f7ac59ed8deb6016cbacbaefd657d66eb80c63b65bd2868/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:35 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a849f60cd8df9536f7ac59ed8deb6016cbacbaefd657d66eb80c63b65bd2868/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:35 np0005464891 podman[81221]: 2025-10-01 16:14:35.863216101 +0000 UTC m=+0.153948410 container init fcbcf444a1ad65de7834c95fee004f804a1f696f0acb3c3a39cf691d936a5b48 (image=quay.io/ceph/ceph:v18, name=gallant_williams, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 12:14:35 np0005464891 podman[81221]: 2025-10-01 16:14:35.87142995 +0000 UTC m=+0.162162259 container start fcbcf444a1ad65de7834c95fee004f804a1f696f0acb3c3a39cf691d936a5b48 (image=quay.io/ceph/ceph:v18, name=gallant_williams, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:14:35 np0005464891 podman[81221]: 2025-10-01 16:14:35.875334429 +0000 UTC m=+0.166066758 container attach fcbcf444a1ad65de7834c95fee004f804a1f696f0acb3c3a39cf691d936a5b48 (image=quay.io/ceph/ceph:v18, name=gallant_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 12:14:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:14:36 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  1 12:14:36 np0005464891 gallant_williams[81280]: 
Oct  1 12:14:36 np0005464891 gallant_williams[81280]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  1 12:14:36 np0005464891 systemd[1]: libpod-fcbcf444a1ad65de7834c95fee004f804a1f696f0acb3c3a39cf691d936a5b48.scope: Deactivated successfully.
Oct  1 12:14:36 np0005464891 podman[81221]: 2025-10-01 16:14:36.404696198 +0000 UTC m=+0.695428517 container died fcbcf444a1ad65de7834c95fee004f804a1f696f0acb3c3a39cf691d936a5b48 (image=quay.io/ceph/ceph:v18, name=gallant_williams, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 12:14:36 np0005464891 systemd[1]: var-lib-containers-storage-overlay-0a849f60cd8df9536f7ac59ed8deb6016cbacbaefd657d66eb80c63b65bd2868-merged.mount: Deactivated successfully.
Oct  1 12:14:36 np0005464891 podman[81221]: 2025-10-01 16:14:36.454772293 +0000 UTC m=+0.745504572 container remove fcbcf444a1ad65de7834c95fee004f804a1f696f0acb3c3a39cf691d936a5b48 (image=quay.io/ceph/ceph:v18, name=gallant_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:14:36 np0005464891 systemd[1]: libpod-conmon-fcbcf444a1ad65de7834c95fee004f804a1f696f0acb3c3a39cf691d936a5b48.scope: Deactivated successfully.
Oct  1 12:14:36 np0005464891 ceph-mgr[74592]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5/config/ceph.client.admin.keyring
Oct  1 12:14:36 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5/config/ceph.client.admin.keyring
Oct  1 12:14:36 np0005464891 ceph-mon[74303]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  1 12:14:36 np0005464891 python3[81638]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:14:37 np0005464891 podman[81689]: 2025-10-01 16:14:37.045643587 +0000 UTC m=+0.051673162 container create d37bff35908397cf396d89227034d1070063d9c8394f20025e517dd854733d76 (image=quay.io/ceph/ceph:v18, name=suspicious_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:14:37 np0005464891 systemd[1]: Started libpod-conmon-d37bff35908397cf396d89227034d1070063d9c8394f20025e517dd854733d76.scope.
Oct  1 12:14:37 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:37 np0005464891 podman[81689]: 2025-10-01 16:14:37.020577008 +0000 UTC m=+0.026606673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:14:37 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6c5edbb1c3b2f15adba0e95fd30ae5fb54b39dfe4fb6023005b9456b87a373a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:37 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6c5edbb1c3b2f15adba0e95fd30ae5fb54b39dfe4fb6023005b9456b87a373a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:37 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6c5edbb1c3b2f15adba0e95fd30ae5fb54b39dfe4fb6023005b9456b87a373a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:37 np0005464891 podman[81689]: 2025-10-01 16:14:37.14087408 +0000 UTC m=+0.146903725 container init d37bff35908397cf396d89227034d1070063d9c8394f20025e517dd854733d76 (image=quay.io/ceph/ceph:v18, name=suspicious_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 12:14:37 np0005464891 podman[81689]: 2025-10-01 16:14:37.146905178 +0000 UTC m=+0.152934783 container start d37bff35908397cf396d89227034d1070063d9c8394f20025e517dd854733d76 (image=quay.io/ceph/ceph:v18, name=suspicious_khayyam, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 12:14:37 np0005464891 podman[81689]: 2025-10-01 16:14:37.150254241 +0000 UTC m=+0.156283816 container attach d37bff35908397cf396d89227034d1070063d9c8394f20025e517dd854733d76 (image=quay.io/ceph/ceph:v18, name=suspicious_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 12:14:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Oct  1 12:14:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2804495680' entity='client.admin' 
Oct  1 12:14:37 np0005464891 systemd[1]: libpod-d37bff35908397cf396d89227034d1070063d9c8394f20025e517dd854733d76.scope: Deactivated successfully.
Oct  1 12:14:37 np0005464891 podman[81689]: 2025-10-01 16:14:37.669111967 +0000 UTC m=+0.675141542 container died d37bff35908397cf396d89227034d1070063d9c8394f20025e517dd854733d76 (image=quay.io/ceph/ceph:v18, name=suspicious_khayyam, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:14:37 np0005464891 systemd[1]: var-lib-containers-storage-overlay-b6c5edbb1c3b2f15adba0e95fd30ae5fb54b39dfe4fb6023005b9456b87a373a-merged.mount: Deactivated successfully.
Oct  1 12:14:37 np0005464891 podman[81689]: 2025-10-01 16:14:37.718531525 +0000 UTC m=+0.724561140 container remove d37bff35908397cf396d89227034d1070063d9c8394f20025e517dd854733d76 (image=quay.io/ceph/ceph:v18, name=suspicious_khayyam, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 12:14:37 np0005464891 systemd[1]: libpod-conmon-d37bff35908397cf396d89227034d1070063d9c8394f20025e517dd854733d76.scope: Deactivated successfully.
Oct  1 12:14:37 np0005464891 ceph-mon[74303]: Updating compute-0:/var/lib/ceph/6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5/config/ceph.client.admin.keyring
Oct  1 12:14:37 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/2804495680' entity='client.admin' 
Oct  1 12:14:37 np0005464891 ansible-async_wrapper.py[80154]: Done in kid B.
Oct  1 12:14:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:14:38 np0005464891 python3[82062]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:14:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:14:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:14:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:14:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:38 np0005464891 ceph-mgr[74592]: [progress INFO root] update: starting ev 399520d6-2742-4b0d-965a-41e44e9da76d (Updating crash deployment (+1 -> 1))
Oct  1 12:14:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Oct  1 12:14:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  1 12:14:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  1 12:14:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:14:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:14:38 np0005464891 ceph-mgr[74592]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Oct  1 12:14:38 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Oct  1 12:14:38 np0005464891 podman[82113]: 2025-10-01 16:14:38.149819982 +0000 UTC m=+0.068864481 container create 82e82517b24aead4f8b53aa7b0b75e165056fd066551c8f5f70152ceb091d5d1 (image=quay.io/ceph/ceph:v18, name=upbeat_engelbart, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:14:38 np0005464891 systemd[1]: Started libpod-conmon-82e82517b24aead4f8b53aa7b0b75e165056fd066551c8f5f70152ceb091d5d1.scope.
Oct  1 12:14:38 np0005464891 podman[82113]: 2025-10-01 16:14:38.119431045 +0000 UTC m=+0.038475634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:14:38 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:38 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab58e8cc556bb61d36b1da396c6669eb964062c9f8e1124a529aa5624fcc732b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:38 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab58e8cc556bb61d36b1da396c6669eb964062c9f8e1124a529aa5624fcc732b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:38 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab58e8cc556bb61d36b1da396c6669eb964062c9f8e1124a529aa5624fcc732b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:38 np0005464891 podman[82113]: 2025-10-01 16:14:38.235819618 +0000 UTC m=+0.154864227 container init 82e82517b24aead4f8b53aa7b0b75e165056fd066551c8f5f70152ceb091d5d1 (image=quay.io/ceph/ceph:v18, name=upbeat_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:14:38 np0005464891 podman[82113]: 2025-10-01 16:14:38.245786995 +0000 UTC m=+0.164831524 container start 82e82517b24aead4f8b53aa7b0b75e165056fd066551c8f5f70152ceb091d5d1 (image=quay.io/ceph/ceph:v18, name=upbeat_engelbart, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 12:14:38 np0005464891 podman[82113]: 2025-10-01 16:14:38.251754481 +0000 UTC m=+0.170799060 container attach 82e82517b24aead4f8b53aa7b0b75e165056fd066551c8f5f70152ceb091d5d1 (image=quay.io/ceph/ceph:v18, name=upbeat_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 12:14:38 np0005464891 podman[82296]: 2025-10-01 16:14:38.762930994 +0000 UTC m=+0.044942053 container create ca4a7c86350ef25d9194aaeb2e71beb95a224251f61dc2376496589d93c3b260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:14:38 np0005464891 systemd[1]: Started libpod-conmon-ca4a7c86350ef25d9194aaeb2e71beb95a224251f61dc2376496589d93c3b260.scope.
Oct  1 12:14:38 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:38 np0005464891 podman[82296]: 2025-10-01 16:14:38.737950138 +0000 UTC m=+0.019961297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:14:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Oct  1 12:14:38 np0005464891 podman[82296]: 2025-10-01 16:14:38.838548121 +0000 UTC m=+0.120559230 container init ca4a7c86350ef25d9194aaeb2e71beb95a224251f61dc2376496589d93c3b260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:14:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1716092765' entity='client.admin' 
Oct  1 12:14:38 np0005464891 podman[82296]: 2025-10-01 16:14:38.848801737 +0000 UTC m=+0.130812796 container start ca4a7c86350ef25d9194aaeb2e71beb95a224251f61dc2376496589d93c3b260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_franklin, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:14:38 np0005464891 podman[82296]: 2025-10-01 16:14:38.852819939 +0000 UTC m=+0.134831078 container attach ca4a7c86350ef25d9194aaeb2e71beb95a224251f61dc2376496589d93c3b260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  1 12:14:38 np0005464891 reverent_franklin[82313]: 167 167
Oct  1 12:14:38 np0005464891 systemd[1]: libpod-ca4a7c86350ef25d9194aaeb2e71beb95a224251f61dc2376496589d93c3b260.scope: Deactivated successfully.
Oct  1 12:14:38 np0005464891 podman[82296]: 2025-10-01 16:14:38.856001697 +0000 UTC m=+0.138012786 container died ca4a7c86350ef25d9194aaeb2e71beb95a224251f61dc2376496589d93c3b260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_franklin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:14:38 np0005464891 systemd[1]: libpod-82e82517b24aead4f8b53aa7b0b75e165056fd066551c8f5f70152ceb091d5d1.scope: Deactivated successfully.
Oct  1 12:14:38 np0005464891 systemd[1]: var-lib-containers-storage-overlay-4a10f75df1ff3c3a0ca81052c22f30d70161e04272ecb2c2c262a6d61ffdfbf7-merged.mount: Deactivated successfully.
Oct  1 12:14:38 np0005464891 podman[82296]: 2025-10-01 16:14:38.902265587 +0000 UTC m=+0.184276676 container remove ca4a7c86350ef25d9194aaeb2e71beb95a224251f61dc2376496589d93c3b260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 12:14:38 np0005464891 podman[82322]: 2025-10-01 16:14:38.909635342 +0000 UTC m=+0.025035709 container died 82e82517b24aead4f8b53aa7b0b75e165056fd066551c8f5f70152ceb091d5d1 (image=quay.io/ceph/ceph:v18, name=upbeat_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 12:14:38 np0005464891 systemd[1]: libpod-conmon-ca4a7c86350ef25d9194aaeb2e71beb95a224251f61dc2376496589d93c3b260.scope: Deactivated successfully.
Oct  1 12:14:38 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ab58e8cc556bb61d36b1da396c6669eb964062c9f8e1124a529aa5624fcc732b-merged.mount: Deactivated successfully.
Oct  1 12:14:38 np0005464891 podman[82322]: 2025-10-01 16:14:38.951826157 +0000 UTC m=+0.067226514 container remove 82e82517b24aead4f8b53aa7b0b75e165056fd066551c8f5f70152ceb091d5d1 (image=quay.io/ceph/ceph:v18, name=upbeat_engelbart, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  1 12:14:38 np0005464891 systemd[1]: libpod-conmon-82e82517b24aead4f8b53aa7b0b75e165056fd066551c8f5f70152ceb091d5d1.scope: Deactivated successfully.
Oct  1 12:14:38 np0005464891 systemd[1]: Reloading.
Oct  1 12:14:39 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:14:39 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:14:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  1 12:14:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  1 12:14:39 np0005464891 ceph-mon[74303]: Deploying daemon crash.compute-0 on compute-0
Oct  1 12:14:39 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/1716092765' entity='client.admin' 
Oct  1 12:14:39 np0005464891 systemd[1]: Reloading.
Oct  1 12:14:39 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:14:39 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:14:39 np0005464891 python3[82412]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:14:39 np0005464891 podman[82450]: 2025-10-01 16:14:39.523132255 +0000 UTC m=+0.060726742 container create 582c7029ca3d38a1121d1f93706e5d7d76140ccb55cad803240171cb1779585b (image=quay.io/ceph/ceph:v18, name=recursing_goodall, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:14:39 np0005464891 podman[82450]: 2025-10-01 16:14:39.494977211 +0000 UTC m=+0.032571758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:14:39 np0005464891 systemd[1]: Started libpod-conmon-582c7029ca3d38a1121d1f93706e5d7d76140ccb55cad803240171cb1779585b.scope.
Oct  1 12:14:39 np0005464891 systemd[1]: Starting Ceph crash.compute-0 for 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5...
Oct  1 12:14:39 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6af0c0c976fb3c5b432eba91a793960575ffb6b7084d311801e00a116260ed46/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6af0c0c976fb3c5b432eba91a793960575ffb6b7084d311801e00a116260ed46/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6af0c0c976fb3c5b432eba91a793960575ffb6b7084d311801e00a116260ed46/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:39 np0005464891 podman[82450]: 2025-10-01 16:14:39.670743468 +0000 UTC m=+0.208337975 container init 582c7029ca3d38a1121d1f93706e5d7d76140ccb55cad803240171cb1779585b (image=quay.io/ceph/ceph:v18, name=recursing_goodall, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:14:39 np0005464891 podman[82450]: 2025-10-01 16:14:39.682168816 +0000 UTC m=+0.219763313 container start 582c7029ca3d38a1121d1f93706e5d7d76140ccb55cad803240171cb1779585b (image=quay.io/ceph/ceph:v18, name=recursing_goodall, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  1 12:14:39 np0005464891 podman[82450]: 2025-10-01 16:14:39.685818878 +0000 UTC m=+0.223413325 container attach 582c7029ca3d38a1121d1f93706e5d7d76140ccb55cad803240171cb1779585b (image=quay.io/ceph/ceph:v18, name=recursing_goodall, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 12:14:39 np0005464891 podman[82519]: 2025-10-01 16:14:39.908081081 +0000 UTC m=+0.066546885 container create 61e502216e8362607b2bae6778b88d5d3cf924f4c121c8723c22364309588b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 12:14:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:14:39 np0005464891 podman[82519]: 2025-10-01 16:14:39.877811428 +0000 UTC m=+0.036277242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:14:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcb2307326224fff8bef4886c434fa70e1cba5a0e04038b92eee0f2287bd0bbd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcb2307326224fff8bef4886c434fa70e1cba5a0e04038b92eee0f2287bd0bbd/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcb2307326224fff8bef4886c434fa70e1cba5a0e04038b92eee0f2287bd0bbd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcb2307326224fff8bef4886c434fa70e1cba5a0e04038b92eee0f2287bd0bbd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:40 np0005464891 podman[82519]: 2025-10-01 16:14:40.00424387 +0000 UTC m=+0.162709734 container init 61e502216e8362607b2bae6778b88d5d3cf924f4c121c8723c22364309588b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-crash-compute-0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:14:40 np0005464891 podman[82519]: 2025-10-01 16:14:40.014400153 +0000 UTC m=+0.172865937 container start 61e502216e8362607b2bae6778b88d5d3cf924f4c121c8723c22364309588b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-crash-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:14:40 np0005464891 bash[82519]: 61e502216e8362607b2bae6778b88d5d3cf924f4c121c8723c22364309588b73
Oct  1 12:14:40 np0005464891 systemd[1]: Started Ceph crash.compute-0 for 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5.
Oct  1 12:14:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:14:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:14:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct  1 12:14:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:40 np0005464891 ceph-mgr[74592]: [progress INFO root] complete: finished ev 399520d6-2742-4b0d-965a-41e44e9da76d (Updating crash deployment (+1 -> 1))
Oct  1 12:14:40 np0005464891 ceph-mgr[74592]: [progress INFO root] Completed event 399520d6-2742-4b0d-965a-41e44e9da76d (Updating crash deployment (+1 -> 1)) in 2 seconds
Oct  1 12:14:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct  1 12:14:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:40 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev e6b10687-6476-4e4b-96db-c68c2d8e2568 does not exist
Oct  1 12:14:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct  1 12:14:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:40 np0005464891 ceph-mgr[74592]: [progress INFO root] update: starting ev 0e004e7d-a982-45c4-8674-5617e85578cc (Updating mgr deployment (+1 -> 2))
Oct  1 12:14:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.vouobe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct  1 12:14:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.vouobe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  1 12:14:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.vouobe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  1 12:14:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  1 12:14:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  1 12:14:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:14:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:14:40 np0005464891 ceph-mgr[74592]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.vouobe on compute-0
Oct  1 12:14:40 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.vouobe on compute-0
Oct  1 12:14:40 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-crash-compute-0[82534]: INFO:ceph-crash:pinging cluster to exercise our key
Oct  1 12:14:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Oct  1 12:14:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/801541470' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct  1 12:14:40 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-crash-compute-0[82534]: 2025-10-01T16:14:40.426+0000 7fb77b1d6640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  1 12:14:40 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-crash-compute-0[82534]: 2025-10-01T16:14:40.426+0000 7fb77b1d6640 -1 AuthRegistry(0x7fb774066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  1 12:14:40 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-crash-compute-0[82534]: 2025-10-01T16:14:40.427+0000 7fb77b1d6640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  1 12:14:40 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-crash-compute-0[82534]: 2025-10-01T16:14:40.427+0000 7fb77b1d6640 -1 AuthRegistry(0x7fb77b1d5000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  1 12:14:40 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-crash-compute-0[82534]: 2025-10-01T16:14:40.428+0000 7fb778f4b640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct  1 12:14:40 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-crash-compute-0[82534]: 2025-10-01T16:14:40.428+0000 7fb77b1d6640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct  1 12:14:40 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-crash-compute-0[82534]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct  1 12:14:40 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-crash-compute-0[82534]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct  1 12:14:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:14:40 np0005464891 podman[82710]: 2025-10-01 16:14:40.869007305 +0000 UTC m=+0.066015331 container create 2dc97b663de19d2f0858c5ccb7464ba1982bbe752459e9be86ffdf70e6d57a96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_allen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:14:40 np0005464891 podman[82710]: 2025-10-01 16:14:40.841034525 +0000 UTC m=+0.038042601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:14:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Oct  1 12:14:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  1 12:14:41 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:41 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:41 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:41 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:41 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:41 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.vouobe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  1 12:14:41 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.vouobe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  1 12:14:41 np0005464891 ceph-mon[74303]: Deploying daemon mgr.compute-0.vouobe on compute-0
Oct  1 12:14:41 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/801541470' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct  1 12:14:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/801541470' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct  1 12:14:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Oct  1 12:14:41 np0005464891 recursing_goodall[82467]: set require_min_compat_client to mimic
Oct  1 12:14:41 np0005464891 systemd[1]: Started libpod-conmon-2dc97b663de19d2f0858c5ccb7464ba1982bbe752459e9be86ffdf70e6d57a96.scope.
Oct  1 12:14:41 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Oct  1 12:14:41 np0005464891 podman[82450]: 2025-10-01 16:14:41.186959883 +0000 UTC m=+1.724554340 container died 582c7029ca3d38a1121d1f93706e5d7d76140ccb55cad803240171cb1779585b (image=quay.io/ceph/ceph:v18, name=recursing_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:14:41 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:41 np0005464891 systemd[1]: libpod-582c7029ca3d38a1121d1f93706e5d7d76140ccb55cad803240171cb1779585b.scope: Deactivated successfully.
Oct  1 12:14:41 np0005464891 podman[82710]: 2025-10-01 16:14:41.207379792 +0000 UTC m=+0.404387868 container init 2dc97b663de19d2f0858c5ccb7464ba1982bbe752459e9be86ffdf70e6d57a96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 12:14:41 np0005464891 systemd[1]: var-lib-containers-storage-overlay-6af0c0c976fb3c5b432eba91a793960575ffb6b7084d311801e00a116260ed46-merged.mount: Deactivated successfully.
Oct  1 12:14:41 np0005464891 podman[82710]: 2025-10-01 16:14:41.21875827 +0000 UTC m=+0.415766286 container start 2dc97b663de19d2f0858c5ccb7464ba1982bbe752459e9be86ffdf70e6d57a96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_allen, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:14:41 np0005464891 podman[82710]: 2025-10-01 16:14:41.22381117 +0000 UTC m=+0.420819246 container attach 2dc97b663de19d2f0858c5ccb7464ba1982bbe752459e9be86ffdf70e6d57a96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_allen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:14:41 np0005464891 inspiring_allen[82728]: 167 167
Oct  1 12:14:41 np0005464891 systemd[1]: libpod-2dc97b663de19d2f0858c5ccb7464ba1982bbe752459e9be86ffdf70e6d57a96.scope: Deactivated successfully.
Oct  1 12:14:41 np0005464891 conmon[82728]: conmon 2dc97b663de19d2f0858 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2dc97b663de19d2f0858c5ccb7464ba1982bbe752459e9be86ffdf70e6d57a96.scope/container/memory.events
Oct  1 12:14:41 np0005464891 podman[82450]: 2025-10-01 16:14:41.238204841 +0000 UTC m=+1.775799298 container remove 582c7029ca3d38a1121d1f93706e5d7d76140ccb55cad803240171cb1779585b (image=quay.io/ceph/ceph:v18, name=recursing_goodall, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:14:41 np0005464891 podman[82710]: 2025-10-01 16:14:41.24068017 +0000 UTC m=+0.437688166 container died 2dc97b663de19d2f0858c5ccb7464ba1982bbe752459e9be86ffdf70e6d57a96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_allen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:14:41 np0005464891 systemd[1]: libpod-conmon-582c7029ca3d38a1121d1f93706e5d7d76140ccb55cad803240171cb1779585b.scope: Deactivated successfully.
Oct  1 12:14:41 np0005464891 systemd[1]: var-lib-containers-storage-overlay-a2b65ed6f11c025fca3b0f97243a803a4062601e00ff6edd932605110a990286-merged.mount: Deactivated successfully.
Oct  1 12:14:41 np0005464891 podman[82710]: 2025-10-01 16:14:41.281351733 +0000 UTC m=+0.478359739 container remove 2dc97b663de19d2f0858c5ccb7464ba1982bbe752459e9be86ffdf70e6d57a96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_allen, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:14:41 np0005464891 systemd[1]: libpod-conmon-2dc97b663de19d2f0858c5ccb7464ba1982bbe752459e9be86ffdf70e6d57a96.scope: Deactivated successfully.
Oct  1 12:14:41 np0005464891 systemd[1]: Reloading.
Oct  1 12:14:41 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:14:41 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:14:41 np0005464891 systemd[1]: Reloading.
Oct  1 12:14:41 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:14:41 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:14:41 np0005464891 systemd[1]: Starting Ceph mgr.compute-0.vouobe for 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5...
Oct  1 12:14:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:14:42 np0005464891 ceph-mgr[74592]: [progress INFO root] Writing back 1 completed events
Oct  1 12:14:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  1 12:14:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:14:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:14:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:14:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:14:42 np0005464891 python3[82861]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:14:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:14:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:14:42 np0005464891 podman[82899]: 2025-10-01 16:14:42.136437378 +0000 UTC m=+0.062851652 container create 4f4ea13dd760efda4361d07f0515d9cc07fb3eeaa316345ff94f41fe27789cb4 (image=quay.io/ceph/ceph:v18, name=awesome_fermat, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:14:42 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/801541470' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct  1 12:14:42 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:42 np0005464891 systemd[1]: Started libpod-conmon-4f4ea13dd760efda4361d07f0515d9cc07fb3eeaa316345ff94f41fe27789cb4.scope.
Oct  1 12:14:42 np0005464891 podman[82920]: 2025-10-01 16:14:42.181121813 +0000 UTC m=+0.052472103 container create 9e8deabfe98a4e34219d5a878c60308be7afc50621ac782394406cbf91042ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-vouobe, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:14:42 np0005464891 podman[82899]: 2025-10-01 16:14:42.117063518 +0000 UTC m=+0.043477832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:14:42 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:42 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5160b4da84883a62cf11e7df8a73fa8b2f04404257ffc1fa18a52b7299133646/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:42 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5160b4da84883a62cf11e7df8a73fa8b2f04404257ffc1fa18a52b7299133646/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:42 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5160b4da84883a62cf11e7df8a73fa8b2f04404257ffc1fa18a52b7299133646/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:42 np0005464891 podman[82899]: 2025-10-01 16:14:42.236768604 +0000 UTC m=+0.163182908 container init 4f4ea13dd760efda4361d07f0515d9cc07fb3eeaa316345ff94f41fe27789cb4 (image=quay.io/ceph/ceph:v18, name=awesome_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  1 12:14:42 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e05a3f4129dd5134688dff97e963ef27cf4820a1faf9d8dba5e7fd5e57ecedd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:42 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e05a3f4129dd5134688dff97e963ef27cf4820a1faf9d8dba5e7fd5e57ecedd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:42 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e05a3f4129dd5134688dff97e963ef27cf4820a1faf9d8dba5e7fd5e57ecedd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:42 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e05a3f4129dd5134688dff97e963ef27cf4820a1faf9d8dba5e7fd5e57ecedd/merged/var/lib/ceph/mgr/ceph-compute-0.vouobe supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:42 np0005464891 podman[82899]: 2025-10-01 16:14:42.248536091 +0000 UTC m=+0.174950365 container start 4f4ea13dd760efda4361d07f0515d9cc07fb3eeaa316345ff94f41fe27789cb4 (image=quay.io/ceph/ceph:v18, name=awesome_fermat, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 12:14:42 np0005464891 podman[82899]: 2025-10-01 16:14:42.252577874 +0000 UTC m=+0.178992218 container attach 4f4ea13dd760efda4361d07f0515d9cc07fb3eeaa316345ff94f41fe27789cb4 (image=quay.io/ceph/ceph:v18, name=awesome_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:14:42 np0005464891 podman[82920]: 2025-10-01 16:14:42.166540227 +0000 UTC m=+0.037890537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:14:42 np0005464891 podman[82920]: 2025-10-01 16:14:42.261738269 +0000 UTC m=+0.133088639 container init 9e8deabfe98a4e34219d5a878c60308be7afc50621ac782394406cbf91042ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-vouobe, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 12:14:42 np0005464891 podman[82920]: 2025-10-01 16:14:42.267388057 +0000 UTC m=+0.138738387 container start 9e8deabfe98a4e34219d5a878c60308be7afc50621ac782394406cbf91042ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-vouobe, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:14:42 np0005464891 bash[82920]: 9e8deabfe98a4e34219d5a878c60308be7afc50621ac782394406cbf91042ce0
Oct  1 12:14:42 np0005464891 systemd[1]: Started Ceph mgr.compute-0.vouobe for 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5.
Oct  1 12:14:42 np0005464891 ceph-mgr[82949]: set uid:gid to 167:167 (ceph:ceph)
Oct  1 12:14:42 np0005464891 ceph-mgr[82949]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct  1 12:14:42 np0005464891 ceph-mgr[82949]: pidfile_write: ignore empty --pid-file
Oct  1 12:14:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:14:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:14:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  1 12:14:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:42 np0005464891 ceph-mgr[74592]: [progress INFO root] complete: finished ev 0e004e7d-a982-45c4-8674-5617e85578cc (Updating mgr deployment (+1 -> 2))
Oct  1 12:14:42 np0005464891 ceph-mgr[74592]: [progress INFO root] Completed event 0e004e7d-a982-45c4-8674-5617e85578cc (Updating mgr deployment (+1 -> 2)) in 2 seconds
Oct  1 12:14:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  1 12:14:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:42 np0005464891 ceph-mgr[82949]: mgr[py] Loading python module 'alerts'
Oct  1 12:14:42 np0005464891 ceph-mgr[82949]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  1 12:14:42 np0005464891 ceph-mgr[82949]: mgr[py] Loading python module 'balancer'
Oct  1 12:14:42 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-vouobe[82944]: 2025-10-01T16:14:42.719+0000 7f60848de140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  1 12:14:42 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 12:14:42 np0005464891 ceph-mgr[82949]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  1 12:14:42 np0005464891 ceph-mgr[82949]: mgr[py] Loading python module 'cephadm'
Oct  1 12:14:42 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-vouobe[82944]: 2025-10-01T16:14:42.981+0000 7f60848de140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  1 12:14:43 np0005464891 podman[83309]: 2025-10-01 16:14:43.269190219 +0000 UTC m=+0.072835211 container exec 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:43 np0005464891 podman[83309]: 2025-10-01 16:14:43.362320114 +0000 UTC m=+0.165965066 container exec_died 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:43 np0005464891 ceph-mgr[74592]: [cephadm INFO root] Added host compute-0
Oct  1 12:14:43 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Added host compute-0
Oct  1 12:14:43 np0005464891 ceph-mgr[74592]: [cephadm INFO root] Saving service mon spec with placement compute-0
Oct  1 12:14:43 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:43 np0005464891 ceph-mgr[74592]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Oct  1 12:14:43 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:43 np0005464891 ceph-mgr[74592]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Oct  1 12:14:43 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Oct  1 12:14:43 np0005464891 ceph-mgr[74592]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Oct  1 12:14:43 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:43 np0005464891 awesome_fermat[82939]: Added host 'compute-0' with addr '192.168.122.100'
Oct  1 12:14:43 np0005464891 awesome_fermat[82939]: Scheduled mon update...
Oct  1 12:14:43 np0005464891 awesome_fermat[82939]: Scheduled mgr update...
Oct  1 12:14:43 np0005464891 awesome_fermat[82939]: Scheduled osd.default_drive_group update...
Oct  1 12:14:43 np0005464891 systemd[1]: libpod-4f4ea13dd760efda4361d07f0515d9cc07fb3eeaa316345ff94f41fe27789cb4.scope: Deactivated successfully.
Oct  1 12:14:43 np0005464891 podman[83378]: 2025-10-01 16:14:43.500547885 +0000 UTC m=+0.030916082 container died 4f4ea13dd760efda4361d07f0515d9cc07fb3eeaa316345ff94f41fe27789cb4 (image=quay.io/ceph/ceph:v18, name=awesome_fermat, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  1 12:14:43 np0005464891 systemd[1]: var-lib-containers-storage-overlay-5160b4da84883a62cf11e7df8a73fa8b2f04404257ffc1fa18a52b7299133646-merged.mount: Deactivated successfully.
Oct  1 12:14:43 np0005464891 podman[83378]: 2025-10-01 16:14:43.558908462 +0000 UTC m=+0.089276649 container remove 4f4ea13dd760efda4361d07f0515d9cc07fb3eeaa316345ff94f41fe27789cb4 (image=quay.io/ceph/ceph:v18, name=awesome_fermat, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 12:14:43 np0005464891 systemd[1]: libpod-conmon-4f4ea13dd760efda4361d07f0515d9cc07fb3eeaa316345ff94f41fe27789cb4.scope: Deactivated successfully.
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:43 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 3fb8aeeb-6d19-4c2f-bfbe-30955c8ac03f does not exist
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct  1 12:14:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:43 np0005464891 ceph-mgr[74592]: [progress INFO root] update: starting ev 5877b7b8-217b-4995-acbe-b8f1682101d3 (Updating mgr deployment (-1 -> 1))
Oct  1 12:14:43 np0005464891 ceph-mgr[74592]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.vouobe from compute-0 -- ports [8765]
Oct  1 12:14:43 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.vouobe from compute-0 -- ports [8765]
Oct  1 12:14:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:14:44 np0005464891 python3[83535]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:14:44 np0005464891 podman[83556]: 2025-10-01 16:14:44.123466111 +0000 UTC m=+0.050305763 container create 24e78a2fa5f67f30ff460905e359401fde8a38be533c3fda102d7a37b97670bd (image=quay.io/ceph/ceph:v18, name=sleepy_galileo, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 12:14:44 np0005464891 systemd[1]: Started libpod-conmon-24e78a2fa5f67f30ff460905e359401fde8a38be533c3fda102d7a37b97670bd.scope.
Oct  1 12:14:44 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:44 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33ca47caeb7fe68ea7af98b686ddb7c92f1b2da625175303e6b821c2b47726ce/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:44 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33ca47caeb7fe68ea7af98b686ddb7c92f1b2da625175303e6b821c2b47726ce/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:44 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33ca47caeb7fe68ea7af98b686ddb7c92f1b2da625175303e6b821c2b47726ce/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:44 np0005464891 podman[83556]: 2025-10-01 16:14:44.1051133 +0000 UTC m=+0.031953002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:14:44 np0005464891 podman[83556]: 2025-10-01 16:14:44.210162177 +0000 UTC m=+0.137001849 container init 24e78a2fa5f67f30ff460905e359401fde8a38be533c3fda102d7a37b97670bd (image=quay.io/ceph/ceph:v18, name=sleepy_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:14:44 np0005464891 podman[83556]: 2025-10-01 16:14:44.216092242 +0000 UTC m=+0.142931894 container start 24e78a2fa5f67f30ff460905e359401fde8a38be533c3fda102d7a37b97670bd (image=quay.io/ceph/ceph:v18, name=sleepy_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  1 12:14:44 np0005464891 podman[83556]: 2025-10-01 16:14:44.219155978 +0000 UTC m=+0.145995670 container attach 24e78a2fa5f67f30ff460905e359401fde8a38be533c3fda102d7a37b97670bd (image=quay.io/ceph/ceph:v18, name=sleepy_galileo, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 12:14:44 np0005464891 systemd[1]: Stopping Ceph mgr.compute-0.vouobe for 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5...
Oct  1 12:14:44 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:44 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:44 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:44 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:44 np0005464891 ceph-mon[74303]: Added host compute-0
Oct  1 12:14:44 np0005464891 ceph-mon[74303]: Saving service mon spec with placement compute-0
Oct  1 12:14:44 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:44 np0005464891 ceph-mon[74303]: Saving service mgr spec with placement compute-0
Oct  1 12:14:44 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:44 np0005464891 ceph-mon[74303]: Marking host: compute-0 for OSDSpec preview refresh.
Oct  1 12:14:44 np0005464891 ceph-mon[74303]: Saving service osd.default_drive_group spec with placement compute-0
Oct  1 12:14:44 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:44 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:44 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:44 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:44 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:44 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:14:44 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:44 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:44 np0005464891 ceph-mon[74303]: Removing daemon mgr.compute-0.vouobe from compute-0 -- ports [8765]
Oct  1 12:14:44 np0005464891 podman[83654]: 2025-10-01 16:14:44.488281136 +0000 UTC m=+0.059362515 container died 9e8deabfe98a4e34219d5a878c60308be7afc50621ac782394406cbf91042ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-vouobe, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:14:44 np0005464891 systemd[1]: var-lib-containers-storage-overlay-4e05a3f4129dd5134688dff97e963ef27cf4820a1faf9d8dba5e7fd5e57ecedd-merged.mount: Deactivated successfully.
Oct  1 12:14:44 np0005464891 podman[83654]: 2025-10-01 16:14:44.537362263 +0000 UTC m=+0.108443632 container remove 9e8deabfe98a4e34219d5a878c60308be7afc50621ac782394406cbf91042ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-vouobe, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 12:14:44 np0005464891 bash[83654]: ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-vouobe
Oct  1 12:14:44 np0005464891 systemd[1]: ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5@mgr.compute-0.vouobe.service: Main process exited, code=exited, status=143/n/a
Oct  1 12:14:44 np0005464891 systemd[1]: ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5@mgr.compute-0.vouobe.service: Failed with result 'exit-code'.
Oct  1 12:14:44 np0005464891 systemd[1]: Stopped Ceph mgr.compute-0.vouobe for 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5.
Oct  1 12:14:44 np0005464891 systemd[1]: ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5@mgr.compute-0.vouobe.service: Consumed 3.056s CPU time.
Oct  1 12:14:44 np0005464891 systemd[1]: Reloading.
Oct  1 12:14:44 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:14:44 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:14:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct  1 12:14:44 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3724355499' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  1 12:14:44 np0005464891 sleepy_galileo[83598]: 
Oct  1 12:14:44 np0005464891 sleepy_galileo[83598]: {"fsid":"6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":79,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-01T16:13:22.803916+0000","services":{}},"progress_events":{"5877b7b8-217b-4995-acbe-b8f1682101d3":{"message":"Updating mgr deployment (-1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Oct  1 12:14:44 np0005464891 podman[83771]: 2025-10-01 16:14:44.899116853 +0000 UTC m=+0.022374135 container died 24e78a2fa5f67f30ff460905e359401fde8a38be533c3fda102d7a37b97670bd (image=quay.io/ceph/ceph:v18, name=sleepy_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 12:14:44 np0005464891 systemd[1]: libpod-24e78a2fa5f67f30ff460905e359401fde8a38be533c3fda102d7a37b97670bd.scope: Deactivated successfully.
Oct  1 12:14:44 np0005464891 systemd[1]: var-lib-containers-storage-overlay-33ca47caeb7fe68ea7af98b686ddb7c92f1b2da625175303e6b821c2b47726ce-merged.mount: Deactivated successfully.
Oct  1 12:14:45 np0005464891 podman[83771]: 2025-10-01 16:14:45.006131094 +0000 UTC m=+0.129388396 container remove 24e78a2fa5f67f30ff460905e359401fde8a38be533c3fda102d7a37b97670bd (image=quay.io/ceph/ceph:v18, name=sleepy_galileo, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:14:45 np0005464891 ceph-mgr[74592]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.vouobe
Oct  1 12:14:45 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.vouobe
Oct  1 12:14:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.vouobe"} v 0) v1
Oct  1 12:14:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.vouobe"}]: dispatch
Oct  1 12:14:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.vouobe"}]': finished
Oct  1 12:14:45 np0005464891 systemd[1]: libpod-conmon-24e78a2fa5f67f30ff460905e359401fde8a38be533c3fda102d7a37b97670bd.scope: Deactivated successfully.
Oct  1 12:14:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  1 12:14:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:45 np0005464891 ceph-mgr[74592]: [progress INFO root] complete: finished ev 5877b7b8-217b-4995-acbe-b8f1682101d3 (Updating mgr deployment (-1 -> 1))
Oct  1 12:14:45 np0005464891 ceph-mgr[74592]: [progress INFO root] Completed event 5877b7b8-217b-4995-acbe-b8f1682101d3 (Updating mgr deployment (-1 -> 1)) in 1 seconds
Oct  1 12:14:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  1 12:14:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:45 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 9e2519c4-55b8-423c-b4d3-9160c9014bb9 does not exist
Oct  1 12:14:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:14:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:14:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:14:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:14:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:14:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:14:45 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.vouobe"}]: dispatch
Oct  1 12:14:45 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.vouobe"}]': finished
Oct  1 12:14:45 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:45 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:45 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:14:45 np0005464891 podman[83927]: 2025-10-01 16:14:45.723346988 +0000 UTC m=+0.051850916 container create 8fdf90ba87fbf1da8f0bc8a41ddcee39635f2a0d548ecf88032aaa907cd155c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_franklin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 12:14:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:14:45 np0005464891 systemd[1]: Started libpod-conmon-8fdf90ba87fbf1da8f0bc8a41ddcee39635f2a0d548ecf88032aaa907cd155c5.scope.
Oct  1 12:14:45 np0005464891 podman[83927]: 2025-10-01 16:14:45.695974055 +0000 UTC m=+0.024478053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:14:45 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:45 np0005464891 podman[83927]: 2025-10-01 16:14:45.829395272 +0000 UTC m=+0.157899260 container init 8fdf90ba87fbf1da8f0bc8a41ddcee39635f2a0d548ecf88032aaa907cd155c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:14:45 np0005464891 podman[83927]: 2025-10-01 16:14:45.841653104 +0000 UTC m=+0.170156992 container start 8fdf90ba87fbf1da8f0bc8a41ddcee39635f2a0d548ecf88032aaa907cd155c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_franklin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 12:14:45 np0005464891 podman[83927]: 2025-10-01 16:14:45.845126531 +0000 UTC m=+0.173630489 container attach 8fdf90ba87fbf1da8f0bc8a41ddcee39635f2a0d548ecf88032aaa907cd155c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_franklin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  1 12:14:45 np0005464891 awesome_franklin[83944]: 167 167
Oct  1 12:14:45 np0005464891 systemd[1]: libpod-8fdf90ba87fbf1da8f0bc8a41ddcee39635f2a0d548ecf88032aaa907cd155c5.scope: Deactivated successfully.
Oct  1 12:14:45 np0005464891 podman[83927]: 2025-10-01 16:14:45.851012265 +0000 UTC m=+0.179516223 container died 8fdf90ba87fbf1da8f0bc8a41ddcee39635f2a0d548ecf88032aaa907cd155c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_franklin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:14:45 np0005464891 systemd[1]: var-lib-containers-storage-overlay-2fc9ab7ff26ff03a94f6a61a4cc3a71a380ba33472b3877f14f0f0aee5a58f59-merged.mount: Deactivated successfully.
Oct  1 12:14:45 np0005464891 podman[83927]: 2025-10-01 16:14:45.906738468 +0000 UTC m=+0.235242396 container remove 8fdf90ba87fbf1da8f0bc8a41ddcee39635f2a0d548ecf88032aaa907cd155c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_franklin, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:14:45 np0005464891 systemd[1]: libpod-conmon-8fdf90ba87fbf1da8f0bc8a41ddcee39635f2a0d548ecf88032aaa907cd155c5.scope: Deactivated successfully.
Oct  1 12:14:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:14:46 np0005464891 podman[83967]: 2025-10-01 16:14:46.093775809 +0000 UTC m=+0.053328767 container create 581e450cfba1d67f179d2d3640af93e9638f9637afc7e3a4cd7da744a234d26a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_moser, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:14:46 np0005464891 systemd[1]: Started libpod-conmon-581e450cfba1d67f179d2d3640af93e9638f9637afc7e3a4cd7da744a234d26a.scope.
Oct  1 12:14:46 np0005464891 podman[83967]: 2025-10-01 16:14:46.06258305 +0000 UTC m=+0.022136048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:14:46 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:14:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6ed4737c0642af6605647aa4ea246b835b251b6e2b974ebb108546c54b26d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6ed4737c0642af6605647aa4ea246b835b251b6e2b974ebb108546c54b26d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6ed4737c0642af6605647aa4ea246b835b251b6e2b974ebb108546c54b26d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6ed4737c0642af6605647aa4ea246b835b251b6e2b974ebb108546c54b26d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6ed4737c0642af6605647aa4ea246b835b251b6e2b974ebb108546c54b26d5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:14:46 np0005464891 podman[83967]: 2025-10-01 16:14:46.188602001 +0000 UTC m=+0.148154969 container init 581e450cfba1d67f179d2d3640af93e9638f9637afc7e3a4cd7da744a234d26a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 12:14:46 np0005464891 podman[83967]: 2025-10-01 16:14:46.196883402 +0000 UTC m=+0.156436390 container start 581e450cfba1d67f179d2d3640af93e9638f9637afc7e3a4cd7da744a234d26a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:14:46 np0005464891 podman[83967]: 2025-10-01 16:14:46.200594535 +0000 UTC m=+0.160147513 container attach 581e450cfba1d67f179d2d3640af93e9638f9637afc7e3a4cd7da744a234d26a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_moser, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Oct  1 12:14:47 np0005464891 ceph-mgr[74592]: [progress INFO root] Writing back 3 completed events
Oct  1 12:14:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  1 12:14:47 np0005464891 ceph-mon[74303]: Removing key for mgr.compute-0.vouobe
Oct  1 12:14:47 np0005464891 tender_moser[83983]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:14:47 np0005464891 tender_moser[83983]: --> relative data size: 1.0
Oct  1 12:14:47 np0005464891 tender_moser[83983]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  1 12:14:47 np0005464891 tender_moser[83983]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c
Oct  1 12:14:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:14:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c"} v 0) v1
Oct  1 12:14:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1586825211' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c"}]: dispatch
Oct  1 12:14:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Oct  1 12:14:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  1 12:14:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:14:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1586825211' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c"}]': finished
Oct  1 12:14:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Oct  1 12:14:50 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Oct  1 12:14:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  1 12:14:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  1 12:14:50 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  1 12:14:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:14:51 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:14:51 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/1586825211' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c"}]: dispatch
Oct  1 12:14:51 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/1586825211' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c"}]': finished
Oct  1 12:14:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:14:52 np0005464891 tender_moser[83983]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  1 12:14:52 np0005464891 tender_moser[83983]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Oct  1 12:14:52 np0005464891 lvm[84045]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  1 12:14:52 np0005464891 tender_moser[83983]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct  1 12:14:52 np0005464891 lvm[84045]: VG ceph_vg0 finished
Oct  1 12:14:52 np0005464891 tender_moser[83983]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  1 12:14:52 np0005464891 tender_moser[83983]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  1 12:14:52 np0005464891 tender_moser[83983]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Oct  1 12:14:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:14:54 np0005464891 ceph-mon[74303]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct  1 12:14:54 np0005464891 ceph-mon[74303]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  1 12:14:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct  1 12:14:54 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/701864189' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct  1 12:14:54 np0005464891 tender_moser[83983]: stderr: got monmap epoch 1
Oct  1 12:14:54 np0005464891 tender_moser[83983]: --> Creating keyring file for osd.0
Oct  1 12:14:54 np0005464891 tender_moser[83983]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Oct  1 12:14:54 np0005464891 tender_moser[83983]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Oct  1 12:14:54 np0005464891 tender_moser[83983]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c --setuser ceph --setgroup ceph
Oct  1 12:14:55 np0005464891 ceph-mon[74303]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct  1 12:14:55 np0005464891 ceph-mon[74303]: Cluster is now healthy
Oct  1 12:14:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:14:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:14:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:14:59 np0005464891 tender_moser[83983]: stderr: 2025-10-01T16:14:54.132+0000 7fa2a110c740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  1 12:14:59 np0005464891 tender_moser[83983]: stderr: 2025-10-01T16:14:54.132+0000 7fa2a110c740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  1 12:14:59 np0005464891 tender_moser[83983]: stderr: 2025-10-01T16:14:54.132+0000 7fa2a110c740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  1 12:14:59 np0005464891 tender_moser[83983]: stderr: 2025-10-01T16:14:54.132+0000 7fa2a110c740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Oct  1 12:14:59 np0005464891 tender_moser[83983]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct  1 12:14:59 np0005464891 tender_moser[83983]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  1 12:14:59 np0005464891 tender_moser[83983]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct  1 12:14:59 np0005464891 tender_moser[83983]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  1 12:14:59 np0005464891 tender_moser[83983]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct  1 12:14:59 np0005464891 tender_moser[83983]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  1 12:14:59 np0005464891 tender_moser[83983]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  1 12:14:59 np0005464891 tender_moser[83983]: --> ceph-volume lvm activate successful for osd ID: 0
Oct  1 12:14:59 np0005464891 tender_moser[83983]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct  1 12:14:59 np0005464891 tender_moser[83983]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  1 12:14:59 np0005464891 tender_moser[83983]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new de7d462b-eb5f-4e2e-be78-18c7710c6a61
Oct  1 12:14:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:14:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61"} v 0) v1
Oct  1 12:14:59 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1848381009' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61"}]: dispatch
Oct  1 12:14:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Oct  1 12:14:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  1 12:14:59 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1848381009' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61"}]': finished
Oct  1 12:14:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Oct  1 12:14:59 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Oct  1 12:14:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  1 12:14:59 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  1 12:14:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 12:14:59 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 12:14:59 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  1 12:14:59 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  1 12:15:00 np0005464891 lvm[84980]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  1 12:15:00 np0005464891 lvm[84980]: VG ceph_vg1 finished
Oct  1 12:15:00 np0005464891 tender_moser[83983]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  1 12:15:00 np0005464891 tender_moser[83983]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Oct  1 12:15:00 np0005464891 tender_moser[83983]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Oct  1 12:15:00 np0005464891 tender_moser[83983]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct  1 12:15:00 np0005464891 tender_moser[83983]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct  1 12:15:00 np0005464891 tender_moser[83983]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Oct  1 12:15:00 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/1848381009' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61"}]: dispatch
Oct  1 12:15:00 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/1848381009' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61"}]': finished
Oct  1 12:15:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct  1 12:15:00 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/109666695' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct  1 12:15:00 np0005464891 tender_moser[83983]: stderr: got monmap epoch 1
Oct  1 12:15:00 np0005464891 tender_moser[83983]: --> Creating keyring file for osd.1
Oct  1 12:15:00 np0005464891 tender_moser[83983]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Oct  1 12:15:00 np0005464891 tender_moser[83983]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Oct  1 12:15:00 np0005464891 tender_moser[83983]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid de7d462b-eb5f-4e2e-be78-18c7710c6a61 --setuser ceph --setgroup ceph
Oct  1 12:15:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:15:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:15:02 np0005464891 tender_moser[83983]: stderr: 2025-10-01T16:15:00.764+0000 7efe29639740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  1 12:15:02 np0005464891 tender_moser[83983]: stderr: 2025-10-01T16:15:00.764+0000 7efe29639740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  1 12:15:02 np0005464891 tender_moser[83983]: stderr: 2025-10-01T16:15:00.765+0000 7efe29639740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  1 12:15:02 np0005464891 tender_moser[83983]: stderr: 2025-10-01T16:15:00.765+0000 7efe29639740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Oct  1 12:15:02 np0005464891 tender_moser[83983]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Oct  1 12:15:03 np0005464891 tender_moser[83983]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  1 12:15:03 np0005464891 tender_moser[83983]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct  1 12:15:03 np0005464891 tender_moser[83983]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct  1 12:15:03 np0005464891 tender_moser[83983]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct  1 12:15:03 np0005464891 tender_moser[83983]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct  1 12:15:03 np0005464891 tender_moser[83983]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  1 12:15:03 np0005464891 tender_moser[83983]: --> ceph-volume lvm activate successful for osd ID: 1
Oct  1 12:15:03 np0005464891 tender_moser[83983]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Oct  1 12:15:03 np0005464891 tender_moser[83983]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  1 12:15:03 np0005464891 tender_moser[83983]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 1f882664-54d4-4e41-96ff-3d2c8223e250
Oct  1 12:15:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250"} v 0) v1
Oct  1 12:15:03 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3622065536' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250"}]: dispatch
Oct  1 12:15:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Oct  1 12:15:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  1 12:15:03 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3622065536' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250"}]': finished
Oct  1 12:15:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Oct  1 12:15:03 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Oct  1 12:15:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  1 12:15:03 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  1 12:15:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 12:15:03 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 12:15:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 12:15:03 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 12:15:03 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  1 12:15:03 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 12:15:03 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  1 12:15:03 np0005464891 tender_moser[83983]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  1 12:15:03 np0005464891 lvm[85914]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  1 12:15:03 np0005464891 lvm[85914]: VG ceph_vg2 finished
Oct  1 12:15:03 np0005464891 tender_moser[83983]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Oct  1 12:15:03 np0005464891 tender_moser[83983]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Oct  1 12:15:03 np0005464891 tender_moser[83983]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct  1 12:15:03 np0005464891 tender_moser[83983]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct  1 12:15:03 np0005464891 tender_moser[83983]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Oct  1 12:15:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:15:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct  1 12:15:04 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4197408872' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct  1 12:15:04 np0005464891 tender_moser[83983]: stderr: got monmap epoch 1
Oct  1 12:15:04 np0005464891 tender_moser[83983]: --> Creating keyring file for osd.2
Oct  1 12:15:04 np0005464891 tender_moser[83983]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Oct  1 12:15:04 np0005464891 tender_moser[83983]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Oct  1 12:15:04 np0005464891 tender_moser[83983]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 1f882664-54d4-4e41-96ff-3d2c8223e250 --setuser ceph --setgroup ceph
Oct  1 12:15:04 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/3622065536' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250"}]: dispatch
Oct  1 12:15:04 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/3622065536' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250"}]': finished
Oct  1 12:15:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:15:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:15:06 np0005464891 tender_moser[83983]: stderr: 2025-10-01T16:15:04.373+0000 7f5bb3f28740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  1 12:15:06 np0005464891 tender_moser[83983]: stderr: 2025-10-01T16:15:04.373+0000 7f5bb3f28740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  1 12:15:06 np0005464891 tender_moser[83983]: stderr: 2025-10-01T16:15:04.373+0000 7f5bb3f28740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  1 12:15:06 np0005464891 tender_moser[83983]: stderr: 2025-10-01T16:15:04.374+0000 7f5bb3f28740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Oct  1 12:15:06 np0005464891 tender_moser[83983]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Oct  1 12:15:06 np0005464891 tender_moser[83983]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  1 12:15:06 np0005464891 tender_moser[83983]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Oct  1 12:15:06 np0005464891 tender_moser[83983]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct  1 12:15:06 np0005464891 tender_moser[83983]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Oct  1 12:15:06 np0005464891 tender_moser[83983]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct  1 12:15:06 np0005464891 tender_moser[83983]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  1 12:15:06 np0005464891 tender_moser[83983]: --> ceph-volume lvm activate successful for osd ID: 2
Oct  1 12:15:06 np0005464891 tender_moser[83983]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Oct  1 12:15:07 np0005464891 systemd[1]: libpod-581e450cfba1d67f179d2d3640af93e9638f9637afc7e3a4cd7da744a234d26a.scope: Deactivated successfully.
Oct  1 12:15:07 np0005464891 systemd[1]: libpod-581e450cfba1d67f179d2d3640af93e9638f9637afc7e3a4cd7da744a234d26a.scope: Consumed 6.848s CPU time.
Oct  1 12:15:07 np0005464891 podman[86820]: 2025-10-01 16:15:07.08159829 +0000 UTC m=+0.044266394 container died 581e450cfba1d67f179d2d3640af93e9638f9637afc7e3a4cd7da744a234d26a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_moser, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 12:15:07 np0005464891 systemd[1]: var-lib-containers-storage-overlay-be6ed4737c0642af6605647aa4ea246b835b251b6e2b974ebb108546c54b26d5-merged.mount: Deactivated successfully.
Oct  1 12:15:07 np0005464891 podman[86820]: 2025-10-01 16:15:07.159555489 +0000 UTC m=+0.122223543 container remove 581e450cfba1d67f179d2d3640af93e9638f9637afc7e3a4cd7da744a234d26a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:15:07 np0005464891 systemd[1]: libpod-conmon-581e450cfba1d67f179d2d3640af93e9638f9637afc7e3a4cd7da744a234d26a.scope: Deactivated successfully.
Oct  1 12:15:07 np0005464891 podman[86976]: 2025-10-01 16:15:07.817711157 +0000 UTC m=+0.047312838 container create af69a2f7f20add396ce9b916ddadb1b4b6f98c04749c81d43bec625954d29afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:15:07 np0005464891 systemd[1]: Started libpod-conmon-af69a2f7f20add396ce9b916ddadb1b4b6f98c04749c81d43bec625954d29afa.scope.
Oct  1 12:15:07 np0005464891 podman[86976]: 2025-10-01 16:15:07.798703092 +0000 UTC m=+0.028304803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:07 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:07 np0005464891 podman[86976]: 2025-10-01 16:15:07.930044907 +0000 UTC m=+0.159646678 container init af69a2f7f20add396ce9b916ddadb1b4b6f98c04749c81d43bec625954d29afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Oct  1 12:15:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:15:07 np0005464891 podman[86976]: 2025-10-01 16:15:07.942064711 +0000 UTC m=+0.171666412 container start af69a2f7f20add396ce9b916ddadb1b4b6f98c04749c81d43bec625954d29afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mclean, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:15:07 np0005464891 busy_mclean[86992]: 167 167
Oct  1 12:15:07 np0005464891 podman[86976]: 2025-10-01 16:15:07.946616812 +0000 UTC m=+0.176218513 container attach af69a2f7f20add396ce9b916ddadb1b4b6f98c04749c81d43bec625954d29afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mclean, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:15:07 np0005464891 systemd[1]: libpod-af69a2f7f20add396ce9b916ddadb1b4b6f98c04749c81d43bec625954d29afa.scope: Deactivated successfully.
Oct  1 12:15:07 np0005464891 podman[86976]: 2025-10-01 16:15:07.947924634 +0000 UTC m=+0.177526335 container died af69a2f7f20add396ce9b916ddadb1b4b6f98c04749c81d43bec625954d29afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:15:07 np0005464891 systemd[1]: var-lib-containers-storage-overlay-2de8e665a4e68bfd299e2b55a601240bb9c85fec7e61691f051e48b42f654d05-merged.mount: Deactivated successfully.
Oct  1 12:15:07 np0005464891 podman[86976]: 2025-10-01 16:15:07.993624613 +0000 UTC m=+0.223226294 container remove af69a2f7f20add396ce9b916ddadb1b4b6f98c04749c81d43bec625954d29afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:15:08 np0005464891 systemd[1]: libpod-conmon-af69a2f7f20add396ce9b916ddadb1b4b6f98c04749c81d43bec625954d29afa.scope: Deactivated successfully.
Oct  1 12:15:08 np0005464891 podman[87017]: 2025-10-01 16:15:08.200893196 +0000 UTC m=+0.055506430 container create ff73b64ce8615b046bf3263704b1e9f9a7ab25ebf4047a8223413974cd4ed358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hypatia, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:15:08 np0005464891 systemd[1]: Started libpod-conmon-ff73b64ce8615b046bf3263704b1e9f9a7ab25ebf4047a8223413974cd4ed358.scope.
Oct  1 12:15:08 np0005464891 podman[87017]: 2025-10-01 16:15:08.172483911 +0000 UTC m=+0.027097205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:08 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:08 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b4b03bc85f0bd8eda0b79efe1946a799c205510c597e84333e4cb49ff544a74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:08 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b4b03bc85f0bd8eda0b79efe1946a799c205510c597e84333e4cb49ff544a74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:08 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b4b03bc85f0bd8eda0b79efe1946a799c205510c597e84333e4cb49ff544a74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:08 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b4b03bc85f0bd8eda0b79efe1946a799c205510c597e84333e4cb49ff544a74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:08 np0005464891 podman[87017]: 2025-10-01 16:15:08.311432232 +0000 UTC m=+0.166045516 container init ff73b64ce8615b046bf3263704b1e9f9a7ab25ebf4047a8223413974cd4ed358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hypatia, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:15:08 np0005464891 podman[87017]: 2025-10-01 16:15:08.324525422 +0000 UTC m=+0.179138666 container start ff73b64ce8615b046bf3263704b1e9f9a7ab25ebf4047a8223413974cd4ed358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:15:08 np0005464891 podman[87017]: 2025-10-01 16:15:08.330798786 +0000 UTC m=+0.185412080 container attach ff73b64ce8615b046bf3263704b1e9f9a7ab25ebf4047a8223413974cd4ed358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hypatia, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]: {
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:    "0": [
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:        {
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "devices": [
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "/dev/loop3"
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            ],
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "lv_name": "ceph_lv0",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "lv_size": "21470642176",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "name": "ceph_lv0",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "tags": {
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.cluster_name": "ceph",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.crush_device_class": "",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.encrypted": "0",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.osd_id": "0",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.type": "block",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.vdo": "0"
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            },
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "type": "block",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "vg_name": "ceph_vg0"
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:        }
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:    ],
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:    "1": [
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:        {
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "devices": [
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "/dev/loop4"
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            ],
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "lv_name": "ceph_lv1",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "lv_size": "21470642176",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "name": "ceph_lv1",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "tags": {
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.cluster_name": "ceph",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.crush_device_class": "",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.encrypted": "0",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.osd_id": "1",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.type": "block",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.vdo": "0"
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            },
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "type": "block",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "vg_name": "ceph_vg1"
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:        }
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:    ],
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:    "2": [
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:        {
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "devices": [
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "/dev/loop5"
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            ],
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "lv_name": "ceph_lv2",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "lv_size": "21470642176",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "name": "ceph_lv2",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "tags": {
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.cluster_name": "ceph",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.crush_device_class": "",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.encrypted": "0",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.osd_id": "2",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.type": "block",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:                "ceph.vdo": "0"
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            },
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "type": "block",
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:            "vg_name": "ceph_vg2"
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:        }
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]:    ]
Oct  1 12:15:09 np0005464891 quizzical_hypatia[87033]: }
Oct  1 12:15:09 np0005464891 systemd[1]: libpod-ff73b64ce8615b046bf3263704b1e9f9a7ab25ebf4047a8223413974cd4ed358.scope: Deactivated successfully.
Oct  1 12:15:09 np0005464891 podman[87042]: 2025-10-01 16:15:09.161966599 +0000 UTC m=+0.033545052 container died ff73b64ce8615b046bf3263704b1e9f9a7ab25ebf4047a8223413974cd4ed358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hypatia, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:15:09 np0005464891 systemd[1]: var-lib-containers-storage-overlay-6b4b03bc85f0bd8eda0b79efe1946a799c205510c597e84333e4cb49ff544a74-merged.mount: Deactivated successfully.
Oct  1 12:15:09 np0005464891 podman[87042]: 2025-10-01 16:15:09.237120659 +0000 UTC m=+0.108699072 container remove ff73b64ce8615b046bf3263704b1e9f9a7ab25ebf4047a8223413974cd4ed358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 12:15:09 np0005464891 systemd[1]: libpod-conmon-ff73b64ce8615b046bf3263704b1e9f9a7ab25ebf4047a8223413974cd4ed358.scope: Deactivated successfully.
Oct  1 12:15:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Oct  1 12:15:09 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  1 12:15:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:15:09 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:15:09 np0005464891 ceph-mgr[74592]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Oct  1 12:15:09 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Oct  1 12:15:09 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  1 12:15:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:15:10 np0005464891 podman[87198]: 2025-10-01 16:15:10.132496965 +0000 UTC m=+0.063345672 container create 0e65efbd293ed8ded9771bdeca9720bb066bc890c6ff6c4f7da63ff7b7c80842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:15:10 np0005464891 systemd[1]: Started libpod-conmon-0e65efbd293ed8ded9771bdeca9720bb066bc890c6ff6c4f7da63ff7b7c80842.scope.
Oct  1 12:15:10 np0005464891 podman[87198]: 2025-10-01 16:15:10.106908899 +0000 UTC m=+0.037757646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:10 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:10 np0005464891 podman[87198]: 2025-10-01 16:15:10.235442664 +0000 UTC m=+0.166291421 container init 0e65efbd293ed8ded9771bdeca9720bb066bc890c6ff6c4f7da63ff7b7c80842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_blackwell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 12:15:10 np0005464891 podman[87198]: 2025-10-01 16:15:10.245966772 +0000 UTC m=+0.176815489 container start 0e65efbd293ed8ded9771bdeca9720bb066bc890c6ff6c4f7da63ff7b7c80842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:15:10 np0005464891 podman[87198]: 2025-10-01 16:15:10.250399281 +0000 UTC m=+0.181247988 container attach 0e65efbd293ed8ded9771bdeca9720bb066bc890c6ff6c4f7da63ff7b7c80842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_blackwell, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:15:10 np0005464891 optimistic_blackwell[87215]: 167 167
Oct  1 12:15:10 np0005464891 systemd[1]: libpod-0e65efbd293ed8ded9771bdeca9720bb066bc890c6ff6c4f7da63ff7b7c80842.scope: Deactivated successfully.
Oct  1 12:15:10 np0005464891 podman[87198]: 2025-10-01 16:15:10.252053751 +0000 UTC m=+0.182902458 container died 0e65efbd293ed8ded9771bdeca9720bb066bc890c6ff6c4f7da63ff7b7c80842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_blackwell, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 12:15:10 np0005464891 systemd[1]: var-lib-containers-storage-overlay-a09a81206c915e981c2aff0052cd91ded374d49b88c0f3976f385fee2e682286-merged.mount: Deactivated successfully.
Oct  1 12:15:10 np0005464891 podman[87198]: 2025-10-01 16:15:10.300351294 +0000 UTC m=+0.231200011 container remove 0e65efbd293ed8ded9771bdeca9720bb066bc890c6ff6c4f7da63ff7b7c80842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_blackwell, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:15:10 np0005464891 systemd[1]: libpod-conmon-0e65efbd293ed8ded9771bdeca9720bb066bc890c6ff6c4f7da63ff7b7c80842.scope: Deactivated successfully.
Oct  1 12:15:10 np0005464891 ceph-mon[74303]: Deploying daemon osd.0 on compute-0
Oct  1 12:15:10 np0005464891 podman[87247]: 2025-10-01 16:15:10.674294396 +0000 UTC m=+0.055311255 container create e2ae27dd6a2a54a60b15abee20ad663b8cdb050ca467d8246cc0bff4f6281817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  1 12:15:10 np0005464891 systemd[1]: Started libpod-conmon-e2ae27dd6a2a54a60b15abee20ad663b8cdb050ca467d8246cc0bff4f6281817.scope.
Oct  1 12:15:10 np0005464891 podman[87247]: 2025-10-01 16:15:10.649359856 +0000 UTC m=+0.030376775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:10 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:10 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f3dc24792491bdfe20818ee0a2e4e8f99c843322fb8f029a8ec85f8e1c091a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:10 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f3dc24792491bdfe20818ee0a2e4e8f99c843322fb8f029a8ec85f8e1c091a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:10 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f3dc24792491bdfe20818ee0a2e4e8f99c843322fb8f029a8ec85f8e1c091a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:10 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f3dc24792491bdfe20818ee0a2e4e8f99c843322fb8f029a8ec85f8e1c091a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:10 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f3dc24792491bdfe20818ee0a2e4e8f99c843322fb8f029a8ec85f8e1c091a9/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:10 np0005464891 podman[87247]: 2025-10-01 16:15:10.777262327 +0000 UTC m=+0.158279196 container init e2ae27dd6a2a54a60b15abee20ad663b8cdb050ca467d8246cc0bff4f6281817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0-activate-test, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:15:10 np0005464891 podman[87247]: 2025-10-01 16:15:10.794488108 +0000 UTC m=+0.175504967 container start e2ae27dd6a2a54a60b15abee20ad663b8cdb050ca467d8246cc0bff4f6281817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Oct  1 12:15:10 np0005464891 podman[87247]: 2025-10-01 16:15:10.799368188 +0000 UTC m=+0.180385017 container attach e2ae27dd6a2a54a60b15abee20ad663b8cdb050ca467d8246cc0bff4f6281817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 12:15:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:15:11 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0-activate-test[87263]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct  1 12:15:11 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0-activate-test[87263]:                            [--no-systemd] [--no-tmpfs]
Oct  1 12:15:11 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0-activate-test[87263]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct  1 12:15:11 np0005464891 systemd[1]: libpod-e2ae27dd6a2a54a60b15abee20ad663b8cdb050ca467d8246cc0bff4f6281817.scope: Deactivated successfully.
Oct  1 12:15:11 np0005464891 podman[87247]: 2025-10-01 16:15:11.439361222 +0000 UTC m=+0.820378121 container died e2ae27dd6a2a54a60b15abee20ad663b8cdb050ca467d8246cc0bff4f6281817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0-activate-test, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:15:11 np0005464891 systemd[1]: var-lib-containers-storage-overlay-7f3dc24792491bdfe20818ee0a2e4e8f99c843322fb8f029a8ec85f8e1c091a9-merged.mount: Deactivated successfully.
Oct  1 12:15:11 np0005464891 podman[87247]: 2025-10-01 16:15:11.503031861 +0000 UTC m=+0.884048730 container remove e2ae27dd6a2a54a60b15abee20ad663b8cdb050ca467d8246cc0bff4f6281817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0-activate-test, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:15:11 np0005464891 systemd[1]: libpod-conmon-e2ae27dd6a2a54a60b15abee20ad663b8cdb050ca467d8246cc0bff4f6281817.scope: Deactivated successfully.
Oct  1 12:15:11 np0005464891 systemd[1]: Reloading.
Oct  1 12:15:11 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:15:11 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:15:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:15:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:15:11
Oct  1 12:15:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:15:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:15:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] No pools available
Oct  1 12:15:12 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:15:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:15:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:15:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:15:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:15:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:15:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:15:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:15:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:15:12 np0005464891 systemd[1]: Reloading.
Oct  1 12:15:12 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:15:12 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:15:12 np0005464891 systemd[1]: Starting Ceph osd.0 for 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5...
Oct  1 12:15:12 np0005464891 podman[87425]: 2025-10-01 16:15:12.801817641 +0000 UTC m=+0.056453474 container create 8f21cf73eba27ab33050ca3a3df17007ea601114eb15c995affc2f0721e79dd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0-activate, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:15:12 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:12 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/145d3f3d043141b36fd0f741d275ff586a2dc8e3a09eda658f68a4af1509b424/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:12 np0005464891 podman[87425]: 2025-10-01 16:15:12.780678873 +0000 UTC m=+0.035314716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:12 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/145d3f3d043141b36fd0f741d275ff586a2dc8e3a09eda658f68a4af1509b424/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:12 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/145d3f3d043141b36fd0f741d275ff586a2dc8e3a09eda658f68a4af1509b424/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:12 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/145d3f3d043141b36fd0f741d275ff586a2dc8e3a09eda658f68a4af1509b424/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:12 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/145d3f3d043141b36fd0f741d275ff586a2dc8e3a09eda658f68a4af1509b424/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:12 np0005464891 podman[87425]: 2025-10-01 16:15:12.891635019 +0000 UTC m=+0.146270932 container init 8f21cf73eba27ab33050ca3a3df17007ea601114eb15c995affc2f0721e79dd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 12:15:12 np0005464891 podman[87425]: 2025-10-01 16:15:12.904934944 +0000 UTC m=+0.159570797 container start 8f21cf73eba27ab33050ca3a3df17007ea601114eb15c995affc2f0721e79dd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0-activate, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:15:12 np0005464891 podman[87425]: 2025-10-01 16:15:12.908663106 +0000 UTC m=+0.163298979 container attach 8f21cf73eba27ab33050ca3a3df17007ea601114eb15c995affc2f0721e79dd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0-activate, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:15:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:15:14 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0-activate[87440]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  1 12:15:14 np0005464891 bash[87425]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  1 12:15:14 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0-activate[87440]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct  1 12:15:14 np0005464891 bash[87425]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct  1 12:15:14 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0-activate[87440]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct  1 12:15:14 np0005464891 bash[87425]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct  1 12:15:14 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0-activate[87440]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  1 12:15:14 np0005464891 bash[87425]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  1 12:15:14 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0-activate[87440]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  1 12:15:14 np0005464891 bash[87425]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  1 12:15:14 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0-activate[87440]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  1 12:15:14 np0005464891 bash[87425]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  1 12:15:14 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0-activate[87440]: --> ceph-volume raw activate successful for osd ID: 0
Oct  1 12:15:14 np0005464891 bash[87425]: --> ceph-volume raw activate successful for osd ID: 0
Oct  1 12:15:14 np0005464891 systemd[1]: libpod-8f21cf73eba27ab33050ca3a3df17007ea601114eb15c995affc2f0721e79dd1.scope: Deactivated successfully.
Oct  1 12:15:14 np0005464891 systemd[1]: libpod-8f21cf73eba27ab33050ca3a3df17007ea601114eb15c995affc2f0721e79dd1.scope: Consumed 1.254s CPU time.
Oct  1 12:15:14 np0005464891 podman[87425]: 2025-10-01 16:15:14.140026844 +0000 UTC m=+1.394662727 container died 8f21cf73eba27ab33050ca3a3df17007ea601114eb15c995affc2f0721e79dd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0-activate, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 12:15:14 np0005464891 systemd[1]: var-lib-containers-storage-overlay-145d3f3d043141b36fd0f741d275ff586a2dc8e3a09eda658f68a4af1509b424-merged.mount: Deactivated successfully.
Oct  1 12:15:14 np0005464891 podman[87425]: 2025-10-01 16:15:14.212180111 +0000 UTC m=+1.466815944 container remove 8f21cf73eba27ab33050ca3a3df17007ea601114eb15c995affc2f0721e79dd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0-activate, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:15:14 np0005464891 podman[87629]: 2025-10-01 16:15:14.445388239 +0000 UTC m=+0.050713833 container create 599972a68d3918f0123d4f161a8aa04f17d507f181e3c08465b45c122f67a62c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 12:15:14 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c08b6992391f9543e1a08fd12d472789f902e954db05907efec6e9085583c032/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:14 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c08b6992391f9543e1a08fd12d472789f902e954db05907efec6e9085583c032/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:14 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c08b6992391f9543e1a08fd12d472789f902e954db05907efec6e9085583c032/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:14 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c08b6992391f9543e1a08fd12d472789f902e954db05907efec6e9085583c032/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:14 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c08b6992391f9543e1a08fd12d472789f902e954db05907efec6e9085583c032/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:14 np0005464891 podman[87629]: 2025-10-01 16:15:14.517217407 +0000 UTC m=+0.122543041 container init 599972a68d3918f0123d4f161a8aa04f17d507f181e3c08465b45c122f67a62c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:15:14 np0005464891 podman[87629]: 2025-10-01 16:15:14.421931245 +0000 UTC m=+0.027256839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:14 np0005464891 podman[87629]: 2025-10-01 16:15:14.53041563 +0000 UTC m=+0.135741224 container start 599972a68d3918f0123d4f161a8aa04f17d507f181e3c08465b45c122f67a62c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:15:14 np0005464891 bash[87629]: 599972a68d3918f0123d4f161a8aa04f17d507f181e3c08465b45c122f67a62c
Oct  1 12:15:14 np0005464891 systemd[1]: Started Ceph osd.0 for 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5.
Oct  1 12:15:14 np0005464891 ceph-osd[87649]: set uid:gid to 167:167 (ceph:ceph)
Oct  1 12:15:14 np0005464891 ceph-osd[87649]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct  1 12:15:14 np0005464891 ceph-osd[87649]: pidfile_write: ignore empty --pid-file
Oct  1 12:15:14 np0005464891 ceph-osd[87649]: bdev(0x5605ab49f800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  1 12:15:14 np0005464891 ceph-osd[87649]: bdev(0x5605ab49f800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  1 12:15:14 np0005464891 ceph-osd[87649]: bdev(0x5605ab49f800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 12:15:14 np0005464891 ceph-osd[87649]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  1 12:15:14 np0005464891 ceph-osd[87649]: bdev(0x5605ac2d7800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  1 12:15:14 np0005464891 ceph-osd[87649]: bdev(0x5605ac2d7800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  1 12:15:14 np0005464891 ceph-osd[87649]: bdev(0x5605ac2d7800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 12:15:14 np0005464891 ceph-osd[87649]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct  1 12:15:14 np0005464891 ceph-osd[87649]: bdev(0x5605ac2d7800 /var/lib/ceph/osd/ceph-0/block) close
Oct  1 12:15:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:15:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:15:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Oct  1 12:15:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  1 12:15:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:15:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:15:14 np0005464891 ceph-mgr[74592]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Oct  1 12:15:14 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Oct  1 12:15:14 np0005464891 ceph-osd[87649]: bdev(0x5605ab49f800 /var/lib/ceph/osd/ceph-0/block) close
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: load: jerasure load: lrc 
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bdev(0x5605ac36ac00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bdev(0x5605ac36ac00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bdev(0x5605ac36ac00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bdev(0x5605ac36ac00 /var/lib/ceph/osd/ceph-0/block) close
Oct  1 12:15:15 np0005464891 python3[87797]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:15:15 np0005464891 podman[87826]: 2025-10-01 16:15:15.400621079 +0000 UTC m=+0.068674951 container create 634e6b981610b4cf2934dad598e800b4dd0cac6fbead8a2fed902fb7dea24f43 (image=quay.io/ceph/ceph:v18, name=lucid_blackburn, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bdev(0x5605ac36ac00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bdev(0x5605ac36ac00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bdev(0x5605ac36ac00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bdev(0x5605ac36ac00 /var/lib/ceph/osd/ceph-0/block) close
Oct  1 12:15:15 np0005464891 podman[87848]: 2025-10-01 16:15:15.442676319 +0000 UTC m=+0.060843080 container create 8e6bc3bd68ed4d4024e28b9d7229e5ba897b9d553058622069d7c07004114e64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:15:15 np0005464891 systemd[1]: Started libpod-conmon-634e6b981610b4cf2934dad598e800b4dd0cac6fbead8a2fed902fb7dea24f43.scope.
Oct  1 12:15:15 np0005464891 podman[87826]: 2025-10-01 16:15:15.370433231 +0000 UTC m=+0.038487223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:15:15 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:15 np0005464891 systemd[1]: Started libpod-conmon-8e6bc3bd68ed4d4024e28b9d7229e5ba897b9d553058622069d7c07004114e64.scope.
Oct  1 12:15:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07cf00b2ed2cc7e7b4d779304d7e46a3310c0b8acdb6599b99fa88ce198f6ac/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07cf00b2ed2cc7e7b4d779304d7e46a3310c0b8acdb6599b99fa88ce198f6ac/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07cf00b2ed2cc7e7b4d779304d7e46a3310c0b8acdb6599b99fa88ce198f6ac/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:15 np0005464891 podman[87848]: 2025-10-01 16:15:15.417921543 +0000 UTC m=+0.036088374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:15 np0005464891 podman[87826]: 2025-10-01 16:15:15.51830645 +0000 UTC m=+0.186360422 container init 634e6b981610b4cf2934dad598e800b4dd0cac6fbead8a2fed902fb7dea24f43 (image=quay.io/ceph/ceph:v18, name=lucid_blackburn, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 12:15:15 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:15 np0005464891 podman[87826]: 2025-10-01 16:15:15.5301405 +0000 UTC m=+0.198194382 container start 634e6b981610b4cf2934dad598e800b4dd0cac6fbead8a2fed902fb7dea24f43 (image=quay.io/ceph/ceph:v18, name=lucid_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 12:15:15 np0005464891 podman[87826]: 2025-10-01 16:15:15.534102067 +0000 UTC m=+0.202155979 container attach 634e6b981610b4cf2934dad598e800b4dd0cac6fbead8a2fed902fb7dea24f43 (image=quay.io/ceph/ceph:v18, name=lucid_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 12:15:15 np0005464891 podman[87848]: 2025-10-01 16:15:15.54155396 +0000 UTC m=+0.159720721 container init 8e6bc3bd68ed4d4024e28b9d7229e5ba897b9d553058622069d7c07004114e64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:15:15 np0005464891 podman[87848]: 2025-10-01 16:15:15.552354533 +0000 UTC m=+0.170521314 container start 8e6bc3bd68ed4d4024e28b9d7229e5ba897b9d553058622069d7c07004114e64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 12:15:15 np0005464891 podman[87848]: 2025-10-01 16:15:15.556196478 +0000 UTC m=+0.174363239 container attach 8e6bc3bd68ed4d4024e28b9d7229e5ba897b9d553058622069d7c07004114e64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_stonebraker, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 12:15:15 np0005464891 elated_stonebraker[87875]: 167 167
Oct  1 12:15:15 np0005464891 systemd[1]: libpod-8e6bc3bd68ed4d4024e28b9d7229e5ba897b9d553058622069d7c07004114e64.scope: Deactivated successfully.
Oct  1 12:15:15 np0005464891 conmon[87875]: conmon 8e6bc3bd68ed4d4024e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e6bc3bd68ed4d4024e28b9d7229e5ba897b9d553058622069d7c07004114e64.scope/container/memory.events
Oct  1 12:15:15 np0005464891 podman[87848]: 2025-10-01 16:15:15.559420006 +0000 UTC m=+0.177586787 container died 8e6bc3bd68ed4d4024e28b9d7229e5ba897b9d553058622069d7c07004114e64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 12:15:15 np0005464891 systemd[1]: var-lib-containers-storage-overlay-633ec72ccea1d065acf87e0d15b74338e0054bfa3a3a2e09f131276a37b09098-merged.mount: Deactivated successfully.
Oct  1 12:15:15 np0005464891 podman[87848]: 2025-10-01 16:15:15.609902212 +0000 UTC m=+0.228069003 container remove 8e6bc3bd68ed4d4024e28b9d7229e5ba897b9d553058622069d7c07004114e64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_stonebraker, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:15:15 np0005464891 systemd[1]: libpod-conmon-8e6bc3bd68ed4d4024e28b9d7229e5ba897b9d553058622069d7c07004114e64.scope: Deactivated successfully.
Oct  1 12:15:15 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:15 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:15 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  1 12:15:15 np0005464891 ceph-mon[74303]: Deploying daemon osd.1 on compute-0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bdev(0x5605ac36ac00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bdev(0x5605ac36ac00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bdev(0x5605ac36ac00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bdev(0x5605ac36b400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bdev(0x5605ac36b400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bdev(0x5605ac36b400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bluefs mount
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bluefs mount shared_bdev_used = 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: RocksDB version: 7.9.2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Git sha 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: DB SUMMARY
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: DB Session ID:  RQLFS3VXEYF2RLXIGJBB
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: CURRENT file:  CURRENT
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: IDENTITY file:  IDENTITY
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                         Options.error_if_exists: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.create_if_missing: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                         Options.paranoid_checks: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                                     Options.env: 0x5605ac329c70
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                                Options.info_log: 0x5605ab5268a0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.max_file_opening_threads: 16
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                              Options.statistics: (nil)
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                               Options.use_fsync: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.max_log_file_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                         Options.allow_fallocate: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.use_direct_reads: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.create_missing_column_families: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                              Options.db_log_dir: 
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                                 Options.wal_dir: db.wal
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.advise_random_on_open: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.write_buffer_manager: 0x5605ac444460
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                            Options.rate_limiter: (nil)
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.unordered_write: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                               Options.row_cache: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                              Options.wal_filter: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.allow_ingest_behind: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.two_write_queues: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.manual_wal_flush: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.wal_compression: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.atomic_flush: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.log_readahead_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.allow_data_in_errors: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.db_host_id: __hostname__
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.max_background_jobs: 4
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.max_background_compactions: -1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.max_subcompactions: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.max_open_files: -1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.bytes_per_sync: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.max_background_flushes: -1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Compression algorithms supported:
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: 	kZSTD supported: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: 	kXpressCompression supported: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: 	kBZip2Compression supported: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: 	kLZ4Compression supported: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: 	kZlibCompression supported: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: 	kLZ4HCCompression supported: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: 	kSnappyCompression supported: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5605ab5262c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5605ab5131f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5605ab5262c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5605ab5131f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5605ab5262c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5605ab5131f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5605ab5262c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5605ab5131f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5605ab5262c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5605ab5131f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5605ab5262c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5605ab5131f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5605ab5262c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5605ab5131f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5605ab526240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5605ab513090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5605ab526240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5605ab513090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5605ab526240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5605ab513090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 221e1426-7a4b-4341-b4e0-ca8374ed1fcf
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335315746098, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335315746621, "job": 1, "event": "recovery_finished"}
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: freelist init
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: freelist _read_cfg
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bluefs umount
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bdev(0x5605ac36b400 /var/lib/ceph/osd/ceph-0/block) close
Oct  1 12:15:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:15:15 np0005464891 podman[88103]: 2025-10-01 16:15:15.944961483 +0000 UTC m=+0.068320163 container create 999fe5171c81d47cb820e5928f4ab440143d9483034434d7e6573d04a6595914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1-activate-test, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bdev(0x5605ac36b400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bdev(0x5605ac36b400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bdev(0x5605ac36b400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bluefs mount
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bluefs mount shared_bdev_used = 4718592
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: RocksDB version: 7.9.2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Git sha 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: DB SUMMARY
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: DB Session ID:  RQLFS3VXEYF2RLXIGJBA
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: CURRENT file:  CURRENT
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: IDENTITY file:  IDENTITY
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                         Options.error_if_exists: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.create_if_missing: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                         Options.paranoid_checks: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                                     Options.env: 0x5605ac4ec460
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                                Options.info_log: 0x5605ac325b00
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.max_file_opening_threads: 16
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                              Options.statistics: (nil)
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                               Options.use_fsync: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.max_log_file_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                         Options.allow_fallocate: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.use_direct_reads: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.create_missing_column_families: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                              Options.db_log_dir: 
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                                 Options.wal_dir: db.wal
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.advise_random_on_open: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.write_buffer_manager: 0x5605ac444a00
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                            Options.rate_limiter: (nil)
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.unordered_write: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                               Options.row_cache: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                              Options.wal_filter: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.allow_ingest_behind: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.two_write_queues: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.manual_wal_flush: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.wal_compression: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.atomic_flush: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.log_readahead_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.allow_data_in_errors: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.db_host_id: __hostname__
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.max_background_jobs: 4
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.max_background_compactions: -1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.max_subcompactions: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.max_open_files: -1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.bytes_per_sync: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.max_background_flushes: -1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Compression algorithms supported:
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: #011kZSTD supported: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: #011kXpressCompression supported: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: #011kBZip2Compression supported: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: #011kLZ4Compression supported: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: #011kZlibCompression supported: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: #011kSnappyCompression supported: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5605ab4f8fe0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5605ab5131f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5605ab4f8fe0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5605ab5131f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5605ab4f8fe0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5605ab5131f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5605ab4f8fe0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5605ab5131f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5605ab4f8fe0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5605ab5131f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5605ab4f8fe0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5605ab5131f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:15 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:16 np0005464891 systemd[1]: Started libpod-conmon-999fe5171c81d47cb820e5928f4ab440143d9483034434d7e6573d04a6595914.scope.
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5605ab4f8fe0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5605ab5131f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5605ab4f8f20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5605ab513090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:16 np0005464891 podman[88103]: 2025-10-01 16:15:15.917228755 +0000 UTC m=+0.040587485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5605ab4f8f20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5605ab513090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5605ab4f8f20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5605ab513090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 221e1426-7a4b-4341-b4e0-ca8374ed1fcf
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335316011018, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335316015051, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335316, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "221e1426-7a4b-4341-b4e0-ca8374ed1fcf", "db_session_id": "RQLFS3VXEYF2RLXIGJBA", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335316019577, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335316, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "221e1426-7a4b-4341-b4e0-ca8374ed1fcf", "db_session_id": "RQLFS3VXEYF2RLXIGJBA", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335316022389, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335316, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "221e1426-7a4b-4341-b4e0-ca8374ed1fcf", "db_session_id": "RQLFS3VXEYF2RLXIGJBA", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335316024428, "job": 1, "event": "recovery_finished"}
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct  1 12:15:16 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:16 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c57791eb31b52663c0791d31ba97083f7305ad8086846de1e6b1f49b76cf17f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:16 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c57791eb31b52663c0791d31ba97083f7305ad8086846de1e6b1f49b76cf17f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:16 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c57791eb31b52663c0791d31ba97083f7305ad8086846de1e6b1f49b76cf17f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:16 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c57791eb31b52663c0791d31ba97083f7305ad8086846de1e6b1f49b76cf17f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:16 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c57791eb31b52663c0791d31ba97083f7305ad8086846de1e6b1f49b76cf17f/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5605ab680000
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: DB pointer 0x5605ac42da00
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5605ab5131f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5605ab5131f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 
collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 
0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5605ab5131f0#2 capacity: 460.80 MB usag
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: _get_class not permitted to load lua
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: _get_class not permitted to load sdk
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: _get_class not permitted to load test_remote_reads
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: osd.0 0 load_pgs
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: osd.0 0 load_pgs opened 0 pgs
Oct  1 12:15:16 np0005464891 ceph-osd[87649]: osd.0 0 log_to_monitors true
Oct  1 12:15:16 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0[87645]: 2025-10-01T16:15:16.056+0000 7fa71c6ef740 -1 osd.0 0 log_to_monitors true
Oct  1 12:15:16 np0005464891 podman[88103]: 2025-10-01 16:15:16.065570486 +0000 UTC m=+0.188929156 container init 999fe5171c81d47cb820e5928f4ab440143d9483034434d7e6573d04a6595914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1-activate-test, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  1 12:15:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Oct  1 12:15:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1215354502,v1:192.168.122.100:6803/1215354502]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct  1 12:15:16 np0005464891 podman[88103]: 2025-10-01 16:15:16.072142756 +0000 UTC m=+0.195501416 container start 999fe5171c81d47cb820e5928f4ab440143d9483034434d7e6573d04a6595914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 12:15:16 np0005464891 podman[88103]: 2025-10-01 16:15:16.075200291 +0000 UTC m=+0.198558981 container attach 999fe5171c81d47cb820e5928f4ab440143d9483034434d7e6573d04a6595914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 12:15:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct  1 12:15:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3069900377' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  1 12:15:16 np0005464891 lucid_blackburn[87866]: 
Oct  1 12:15:16 np0005464891 lucid_blackburn[87866]: {"fsid":"6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":110,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":6,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1759335303,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-01T16:15:13.931131+0000","services":{}},"progress_events":{}}
Oct  1 12:15:16 np0005464891 systemd[1]: libpod-634e6b981610b4cf2934dad598e800b4dd0cac6fbead8a2fed902fb7dea24f43.scope: Deactivated successfully.
Oct  1 12:15:16 np0005464891 podman[87826]: 2025-10-01 16:15:16.207756445 +0000 UTC m=+0.875810337 container died 634e6b981610b4cf2934dad598e800b4dd0cac6fbead8a2fed902fb7dea24f43 (image=quay.io/ceph/ceph:v18, name=lucid_blackburn, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 12:15:16 np0005464891 systemd[1]: var-lib-containers-storage-overlay-e07cf00b2ed2cc7e7b4d779304d7e46a3310c0b8acdb6599b99fa88ce198f6ac-merged.mount: Deactivated successfully.
Oct  1 12:15:16 np0005464891 podman[87826]: 2025-10-01 16:15:16.26186943 +0000 UTC m=+0.929923342 container remove 634e6b981610b4cf2934dad598e800b4dd0cac6fbead8a2fed902fb7dea24f43 (image=quay.io/ceph/ceph:v18, name=lucid_blackburn, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 12:15:16 np0005464891 systemd[1]: libpod-conmon-634e6b981610b4cf2934dad598e800b4dd0cac6fbead8a2fed902fb7dea24f43.scope: Deactivated successfully.
Oct  1 12:15:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Oct  1 12:15:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  1 12:15:16 np0005464891 ceph-mon[74303]: from='osd.0 [v2:192.168.122.100:6802/1215354502,v1:192.168.122.100:6803/1215354502]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct  1 12:15:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1215354502,v1:192.168.122.100:6803/1215354502]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct  1 12:15:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Oct  1 12:15:16 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Oct  1 12:15:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct  1 12:15:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1215354502,v1:192.168.122.100:6803/1215354502]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  1 12:15:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct  1 12:15:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  1 12:15:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  1 12:15:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 12:15:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 12:15:16 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  1 12:15:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 12:15:16 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  1 12:15:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 12:15:16 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 12:15:16 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1-activate-test[88292]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct  1 12:15:16 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1-activate-test[88292]:                            [--no-systemd] [--no-tmpfs]
Oct  1 12:15:16 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1-activate-test[88292]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct  1 12:15:16 np0005464891 systemd[1]: libpod-999fe5171c81d47cb820e5928f4ab440143d9483034434d7e6573d04a6595914.scope: Deactivated successfully.
Oct  1 12:15:16 np0005464891 podman[88103]: 2025-10-01 16:15:16.723308124 +0000 UTC m=+0.846666804 container died 999fe5171c81d47cb820e5928f4ab440143d9483034434d7e6573d04a6595914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1-activate-test, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 12:15:16 np0005464891 systemd[1]: var-lib-containers-storage-overlay-9c57791eb31b52663c0791d31ba97083f7305ad8086846de1e6b1f49b76cf17f-merged.mount: Deactivated successfully.
Oct  1 12:15:16 np0005464891 podman[88103]: 2025-10-01 16:15:16.784738557 +0000 UTC m=+0.908097197 container remove 999fe5171c81d47cb820e5928f4ab440143d9483034434d7e6573d04a6595914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1-activate-test, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 12:15:16 np0005464891 systemd[1]: libpod-conmon-999fe5171c81d47cb820e5928f4ab440143d9483034434d7e6573d04a6595914.scope: Deactivated successfully.
Oct  1 12:15:17 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct  1 12:15:17 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct  1 12:15:17 np0005464891 systemd[1]: Reloading.
Oct  1 12:15:17 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:15:17 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:15:17 np0005464891 systemd[1]: Reloading.
Oct  1 12:15:17 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:15:17 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:15:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Oct  1 12:15:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  1 12:15:17 np0005464891 ceph-mon[74303]: from='osd.0 [v2:192.168.122.100:6802/1215354502,v1:192.168.122.100:6803/1215354502]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct  1 12:15:17 np0005464891 ceph-mon[74303]: from='osd.0 [v2:192.168.122.100:6802/1215354502,v1:192.168.122.100:6803/1215354502]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  1 12:15:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1215354502,v1:192.168.122.100:6803/1215354502]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  1 12:15:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Oct  1 12:15:17 np0005464891 ceph-osd[87649]: osd.0 0 done with init, starting boot process
Oct  1 12:15:17 np0005464891 ceph-osd[87649]: osd.0 0 start_boot
Oct  1 12:15:17 np0005464891 ceph-osd[87649]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct  1 12:15:17 np0005464891 ceph-osd[87649]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct  1 12:15:17 np0005464891 ceph-osd[87649]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct  1 12:15:17 np0005464891 ceph-osd[87649]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct  1 12:15:17 np0005464891 ceph-osd[87649]: osd.0 0  bench count 12288000 bsize 4 KiB
Oct  1 12:15:17 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Oct  1 12:15:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  1 12:15:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  1 12:15:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 12:15:17 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  1 12:15:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 12:15:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 12:15:17 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  1 12:15:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 12:15:17 np0005464891 ceph-mgr[74592]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1215354502; not ready for session (expect reconnect)
Oct  1 12:15:17 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 12:15:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  1 12:15:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  1 12:15:17 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  1 12:15:17 np0005464891 systemd[1]: Starting Ceph osd.1 for 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5...
Oct  1 12:15:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:15:18 np0005464891 podman[88523]: 2025-10-01 16:15:18.132404473 +0000 UTC m=+0.048106098 container create ffcdf1f0a36523dfe7e936e3ae812e092c5e2038790b90053efd4ddebd1a7764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:15:18 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:18 np0005464891 podman[88523]: 2025-10-01 16:15:18.112750262 +0000 UTC m=+0.028451907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:18 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/382080d6abebe2b19e2e4607b4599e30566daedd360f4ad62905983350912cc8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:18 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/382080d6abebe2b19e2e4607b4599e30566daedd360f4ad62905983350912cc8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:18 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/382080d6abebe2b19e2e4607b4599e30566daedd360f4ad62905983350912cc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:18 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/382080d6abebe2b19e2e4607b4599e30566daedd360f4ad62905983350912cc8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:18 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/382080d6abebe2b19e2e4607b4599e30566daedd360f4ad62905983350912cc8/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:18 np0005464891 podman[88523]: 2025-10-01 16:15:18.231271744 +0000 UTC m=+0.146973459 container init ffcdf1f0a36523dfe7e936e3ae812e092c5e2038790b90053efd4ddebd1a7764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:15:18 np0005464891 podman[88523]: 2025-10-01 16:15:18.243002461 +0000 UTC m=+0.158704126 container start ffcdf1f0a36523dfe7e936e3ae812e092c5e2038790b90053efd4ddebd1a7764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1-activate, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  1 12:15:18 np0005464891 podman[88523]: 2025-10-01 16:15:18.257068265 +0000 UTC m=+0.172769990 container attach ffcdf1f0a36523dfe7e936e3ae812e092c5e2038790b90053efd4ddebd1a7764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1-activate, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:15:18 np0005464891 ceph-mon[74303]: from='osd.0 [v2:192.168.122.100:6802/1215354502,v1:192.168.122.100:6803/1215354502]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  1 12:15:18 np0005464891 ceph-mgr[74592]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1215354502; not ready for session (expect reconnect)
Oct  1 12:15:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  1 12:15:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  1 12:15:18 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  1 12:15:19 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1-activate[88539]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  1 12:15:19 np0005464891 bash[88523]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  1 12:15:19 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1-activate[88539]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct  1 12:15:19 np0005464891 bash[88523]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct  1 12:15:19 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1-activate[88539]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct  1 12:15:19 np0005464891 bash[88523]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct  1 12:15:19 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1-activate[88539]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct  1 12:15:19 np0005464891 bash[88523]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct  1 12:15:19 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1-activate[88539]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct  1 12:15:19 np0005464891 bash[88523]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct  1 12:15:19 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1-activate[88539]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  1 12:15:19 np0005464891 bash[88523]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  1 12:15:19 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1-activate[88539]: --> ceph-volume raw activate successful for osd ID: 1
Oct  1 12:15:19 np0005464891 bash[88523]: --> ceph-volume raw activate successful for osd ID: 1
Oct  1 12:15:19 np0005464891 systemd[1]: libpod-ffcdf1f0a36523dfe7e936e3ae812e092c5e2038790b90053efd4ddebd1a7764.scope: Deactivated successfully.
Oct  1 12:15:19 np0005464891 systemd[1]: libpod-ffcdf1f0a36523dfe7e936e3ae812e092c5e2038790b90053efd4ddebd1a7764.scope: Consumed 1.246s CPU time.
Oct  1 12:15:19 np0005464891 podman[88672]: 2025-10-01 16:15:19.532747399 +0000 UTC m=+0.042819549 container died ffcdf1f0a36523dfe7e936e3ae812e092c5e2038790b90053efd4ddebd1a7764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1-activate, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:15:19 np0005464891 systemd[1]: var-lib-containers-storage-overlay-382080d6abebe2b19e2e4607b4599e30566daedd360f4ad62905983350912cc8-merged.mount: Deactivated successfully.
Oct  1 12:15:19 np0005464891 podman[88672]: 2025-10-01 16:15:19.628170534 +0000 UTC m=+0.138242664 container remove ffcdf1f0a36523dfe7e936e3ae812e092c5e2038790b90053efd4ddebd1a7764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1-activate, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:15:19 np0005464891 ceph-mgr[74592]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1215354502; not ready for session (expect reconnect)
Oct  1 12:15:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  1 12:15:19 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  1 12:15:19 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  1 12:15:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:15:19 np0005464891 podman[88728]: 2025-10-01 16:15:19.990331399 +0000 UTC m=+0.071602384 container create 0985aa5e05fae5852e5661e0ad74a28801f499325fca4d5a256da996dc460aa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 12:15:20 np0005464891 podman[88728]: 2025-10-01 16:15:19.957199618 +0000 UTC m=+0.038470663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df14d068f281ee6cfbb9208eec100b748644f02b46515401360d92164cbcae5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df14d068f281ee6cfbb9208eec100b748644f02b46515401360d92164cbcae5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df14d068f281ee6cfbb9208eec100b748644f02b46515401360d92164cbcae5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df14d068f281ee6cfbb9208eec100b748644f02b46515401360d92164cbcae5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df14d068f281ee6cfbb9208eec100b748644f02b46515401360d92164cbcae5/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:20 np0005464891 podman[88728]: 2025-10-01 16:15:20.12641733 +0000 UTC m=+0.207688355 container init 0985aa5e05fae5852e5661e0ad74a28801f499325fca4d5a256da996dc460aa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:15:20 np0005464891 podman[88728]: 2025-10-01 16:15:20.137621334 +0000 UTC m=+0.218892329 container start 0985aa5e05fae5852e5661e0ad74a28801f499325fca4d5a256da996dc460aa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:15:20 np0005464891 bash[88728]: 0985aa5e05fae5852e5661e0ad74a28801f499325fca4d5a256da996dc460aa8
Oct  1 12:15:20 np0005464891 systemd[1]: Started Ceph osd.1 for 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5.
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: set uid:gid to 167:167 (ceph:ceph)
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: pidfile_write: ignore empty --pid-file
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: bdev(0x55a66b195800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: bdev(0x55a66b195800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: bdev(0x55a66b195800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: bdev(0x55a66bfcd800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: bdev(0x55a66bfcd800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: bdev(0x55a66bfcd800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: bdev(0x55a66bfcd800 /var/lib/ceph/osd/ceph-1/block) close
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: bdev(0x55a66b195800 /var/lib/ceph/osd/ceph-1/block) close
Oct  1 12:15:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:15:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:15:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Oct  1 12:15:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct  1 12:15:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:15:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:15:20 np0005464891 ceph-mgr[74592]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Oct  1 12:15:20 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: load: jerasure load: lrc 
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: bdev(0x55a66c04ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: bdev(0x55a66c04ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: bdev(0x55a66c04ec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: bdev(0x55a66c04ec00 /var/lib/ceph/osd/ceph-1/block) close
Oct  1 12:15:20 np0005464891 ceph-mgr[74592]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1215354502; not ready for session (expect reconnect)
Oct  1 12:15:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  1 12:15:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  1 12:15:20 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  1 12:15:20 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:20 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:20 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: bdev(0x55a66c04ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: bdev(0x55a66c04ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: bdev(0x55a66c04ec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  1 12:15:20 np0005464891 ceph-osd[88747]: bdev(0x55a66c04ec00 /var/lib/ceph/osd/ceph-1/block) close
Oct  1 12:15:20 np0005464891 ceph-osd[87649]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 23.170 iops: 5931.545 elapsed_sec: 0.506
Oct  1 12:15:20 np0005464891 ceph-osd[87649]: log_channel(cluster) log [WRN] : OSD bench result of 5931.544883 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  1 12:15:20 np0005464891 ceph-osd[87649]: osd.0 0 waiting for initial osdmap
Oct  1 12:15:20 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0[87645]: 2025-10-01T16:15:20.753+0000 7fa718e86640 -1 osd.0 0 waiting for initial osdmap
Oct  1 12:15:20 np0005464891 ceph-osd[87649]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct  1 12:15:20 np0005464891 ceph-osd[87649]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct  1 12:15:20 np0005464891 ceph-osd[87649]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct  1 12:15:20 np0005464891 ceph-osd[87649]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Oct  1 12:15:20 np0005464891 ceph-osd[87649]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  1 12:15:20 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-0[87645]: 2025-10-01T16:15:20.775+0000 7fa713c97640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  1 12:15:20 np0005464891 ceph-osd[87649]: osd.0 8 set_numa_affinity not setting numa affinity
Oct  1 12:15:20 np0005464891 ceph-osd[87649]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Oct  1 12:15:20 np0005464891 podman[88912]: 2025-10-01 16:15:20.915858042 +0000 UTC m=+0.053625913 container create 0eb7c45f066cc6cb2eb539bc474cc6fa44b15e8cf611252e8867e696b11cc025 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jemison, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  1 12:15:20 np0005464891 systemd[1]: Started libpod-conmon-0eb7c45f066cc6cb2eb539bc474cc6fa44b15e8cf611252e8867e696b11cc025.scope.
Oct  1 12:15:20 np0005464891 podman[88912]: 2025-10-01 16:15:20.890455701 +0000 UTC m=+0.028223652 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:15:20 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bdev(0x55a66c04ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bdev(0x55a66c04ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bdev(0x55a66c04ec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bdev(0x55a66c04f400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bdev(0x55a66c04f400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bdev(0x55a66c04f400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bluefs mount
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  1 12:15:21 np0005464891 podman[88912]: 2025-10-01 16:15:21.011294268 +0000 UTC m=+0.149062219 container init 0eb7c45f066cc6cb2eb539bc474cc6fa44b15e8cf611252e8867e696b11cc025 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bluefs mount shared_bdev_used = 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: RocksDB version: 7.9.2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Git sha 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: DB SUMMARY
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: DB Session ID:  3FN2G5B3MB2W1RFF0IT6
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: CURRENT file:  CURRENT
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: IDENTITY file:  IDENTITY
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                         Options.error_if_exists: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.create_if_missing: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                         Options.paranoid_checks: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                                     Options.env: 0x55a66c01fc70
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                                Options.info_log: 0x55a66b21c8a0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.max_file_opening_threads: 16
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                              Options.statistics: (nil)
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.use_fsync: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.max_log_file_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                         Options.allow_fallocate: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.use_direct_reads: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.create_missing_column_families: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                              Options.db_log_dir: 
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                                 Options.wal_dir: db.wal
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.advise_random_on_open: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.write_buffer_manager: 0x55a66c128460
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                            Options.rate_limiter: (nil)
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.unordered_write: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.row_cache: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                              Options.wal_filter: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.allow_ingest_behind: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.two_write_queues: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.manual_wal_flush: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.wal_compression: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.atomic_flush: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.log_readahead_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.allow_data_in_errors: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.db_host_id: __hostname__
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.max_background_jobs: 4
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.max_background_compactions: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.max_subcompactions: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.max_open_files: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.bytes_per_sync: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.max_background_flushes: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Compression algorithms supported:
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: 	kZSTD supported: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: 	kXpressCompression supported: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: 	kBZip2Compression supported: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: 	kLZ4Compression supported: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: 	kZlibCompression supported: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: 	kLZ4HCCompression supported: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: 	kSnappyCompression supported: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a66b21c2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a66b2091f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:21 np0005464891 podman[88912]: 2025-10-01 16:15:21.020931695 +0000 UTC m=+0.158699596 container start 0eb7c45f066cc6cb2eb539bc474cc6fa44b15e8cf611252e8867e696b11cc025 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jemison, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a66b21c2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a66b2091f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a66b21c2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a66b2091f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:21 np0005464891 conmon[88928]: conmon 0eb7c45f066cc6cb2eb5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0eb7c45f066cc6cb2eb539bc474cc6fa44b15e8cf611252e8867e696b11cc025.scope/container/memory.events
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a66b21c2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a66b2091f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 podman[88912]: 2025-10-01 16:15:21.024983814 +0000 UTC m=+0.162751775 container attach 0eb7c45f066cc6cb2eb539bc474cc6fa44b15e8cf611252e8867e696b11cc025 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 elegant_jemison[88928]: 167 167
Oct  1 12:15:21 np0005464891 systemd[1]: libpod-0eb7c45f066cc6cb2eb539bc474cc6fa44b15e8cf611252e8867e696b11cc025.scope: Deactivated successfully.
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:21 np0005464891 podman[88912]: 2025-10-01 16:15:21.030760945 +0000 UTC m=+0.168528826 container died 0eb7c45f066cc6cb2eb539bc474cc6fa44b15e8cf611252e8867e696b11cc025 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a66b21c2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a66b2091f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a66b21c2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a66b2091f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a66b21c2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a66b2091f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a66b21c240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a66b209090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a66b21c240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a66b209090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a66b21c240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a66b209090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 250ba6a9-e9d8-45b9-b71f-aebd5bb0a010
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335321040209, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335321040582, "job": 1, "event": "recovery_finished"}
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: freelist init
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: freelist _read_cfg
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bluefs umount
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bdev(0x55a66c04f400 /var/lib/ceph/osd/ceph-1/block) close
Oct  1 12:15:21 np0005464891 systemd[1]: var-lib-containers-storage-overlay-b456d1e1de192089c0ff51f3eada81d6a377437804d3939728c4f639193421b1-merged.mount: Deactivated successfully.
Oct  1 12:15:21 np0005464891 podman[88912]: 2025-10-01 16:15:21.084568292 +0000 UTC m=+0.222336193 container remove 0eb7c45f066cc6cb2eb539bc474cc6fa44b15e8cf611252e8867e696b11cc025 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jemison, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:15:21 np0005464891 systemd[1]: libpod-conmon-0eb7c45f066cc6cb2eb539bc474cc6fa44b15e8cf611252e8867e696b11cc025.scope: Deactivated successfully.
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bdev(0x55a66c04f400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bdev(0x55a66c04f400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bdev(0x55a66c04f400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bluefs mount
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bluefs mount shared_bdev_used = 4718592
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: RocksDB version: 7.9.2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Git sha 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: DB SUMMARY
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: DB Session ID:  3FN2G5B3MB2W1RFF0IT7
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: CURRENT file:  CURRENT
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: IDENTITY file:  IDENTITY
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                         Options.error_if_exists: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.create_if_missing: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                         Options.paranoid_checks: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                                     Options.env: 0x55a66c1d0460
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                                Options.info_log: 0x55a66b21c620
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.max_file_opening_threads: 16
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                              Options.statistics: (nil)
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.use_fsync: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.max_log_file_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                         Options.allow_fallocate: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.use_direct_reads: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.create_missing_column_families: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                              Options.db_log_dir: 
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                                 Options.wal_dir: db.wal
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.advise_random_on_open: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.write_buffer_manager: 0x55a66c128460
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                            Options.rate_limiter: (nil)
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.unordered_write: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.row_cache: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                              Options.wal_filter: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.allow_ingest_behind: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.two_write_queues: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.manual_wal_flush: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.wal_compression: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.atomic_flush: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.log_readahead_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.allow_data_in_errors: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.db_host_id: __hostname__
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.max_background_jobs: 4
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.max_background_compactions: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.max_subcompactions: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.max_open_files: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.bytes_per_sync: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.max_background_flushes: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Compression algorithms supported:
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: #011kZSTD supported: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: #011kXpressCompression supported: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: #011kBZip2Compression supported: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: #011kLZ4Compression supported: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: #011kZlibCompression supported: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: #011kSnappyCompression supported: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a66b21ca20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a66b2091f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a66b21ca20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a66b2091f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a66b21ca20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a66b2091f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a66b21ca20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a66b2091f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a66b21ca20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a66b2091f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a66b21ca20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a66b2091f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a66b21ca20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a66b2091f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a66b21c380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a66b209090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a66b21c380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a66b209090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a66b21c380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a66b209090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 250ba6a9-e9d8-45b9-b71f-aebd5bb0a010
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335321318244, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335321325733, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335321, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "250ba6a9-e9d8-45b9-b71f-aebd5bb0a010", "db_session_id": "3FN2G5B3MB2W1RFF0IT7", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335321329793, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335321, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "250ba6a9-e9d8-45b9-b71f-aebd5bb0a010", "db_session_id": "3FN2G5B3MB2W1RFF0IT7", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335321333349, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335321, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "250ba6a9-e9d8-45b9-b71f-aebd5bb0a010", "db_session_id": "3FN2G5B3MB2W1RFF0IT7", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335321338808, "job": 1, "event": "recovery_finished"}
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a66b376000
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: DB pointer 0x55a66c111a00
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a66b2091f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a66b2091f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a66b2091f0#2 capacity: 460.80 MB usag
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: _get_class not permitted to load lua
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: _get_class not permitted to load sdk
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: _get_class not permitted to load test_remote_reads
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: osd.1 0 load_pgs
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: osd.1 0 load_pgs opened 0 pgs
Oct  1 12:15:21 np0005464891 ceph-osd[88747]: osd.1 0 log_to_monitors true
Oct  1 12:15:21 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1[88743]: 2025-10-01T16:15:21.383+0000 7fd7195e2740 -1 osd.1 0 log_to_monitors true
Oct  1 12:15:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Oct  1 12:15:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/263801485,v1:192.168.122.100:6807/263801485]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct  1 12:15:21 np0005464891 podman[89334]: 2025-10-01 16:15:21.473560633 +0000 UTC m=+0.082201533 container create 33ffab07c5a31b73c4461a21c9328fd72fbe70c5d423d52bd0e6a33ee52f6f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2-activate-test, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:15:21 np0005464891 systemd[1]: Started libpod-conmon-33ffab07c5a31b73c4461a21c9328fd72fbe70c5d423d52bd0e6a33ee52f6f4d.scope.
Oct  1 12:15:21 np0005464891 podman[89334]: 2025-10-01 16:15:21.441793556 +0000 UTC m=+0.050434506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:21 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:21 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ea96a45400155c51f68731ac67151a4d547b9168b81161cf2a41b58098fa14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:21 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ea96a45400155c51f68731ac67151a4d547b9168b81161cf2a41b58098fa14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:21 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ea96a45400155c51f68731ac67151a4d547b9168b81161cf2a41b58098fa14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:21 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ea96a45400155c51f68731ac67151a4d547b9168b81161cf2a41b58098fa14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:21 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ea96a45400155c51f68731ac67151a4d547b9168b81161cf2a41b58098fa14/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:21 np0005464891 podman[89334]: 2025-10-01 16:15:21.592249899 +0000 UTC m=+0.200890799 container init 33ffab07c5a31b73c4461a21c9328fd72fbe70c5d423d52bd0e6a33ee52f6f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:15:21 np0005464891 podman[89334]: 2025-10-01 16:15:21.609974542 +0000 UTC m=+0.218615442 container start 33ffab07c5a31b73c4461a21c9328fd72fbe70c5d423d52bd0e6a33ee52f6f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:15:21 np0005464891 podman[89334]: 2025-10-01 16:15:21.613931679 +0000 UTC m=+0.222572579 container attach 33ffab07c5a31b73c4461a21c9328fd72fbe70c5d423d52bd0e6a33ee52f6f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 12:15:21 np0005464891 ceph-mgr[74592]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1215354502; not ready for session (expect reconnect)
Oct  1 12:15:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  1 12:15:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  1 12:15:21 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  1 12:15:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Oct  1 12:15:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  1 12:15:21 np0005464891 ceph-mon[74303]: Deploying daemon osd.2 on compute-0
Oct  1 12:15:21 np0005464891 ceph-mon[74303]: OSD bench result of 5931.544883 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  1 12:15:21 np0005464891 ceph-mon[74303]: from='osd.1 [v2:192.168.122.100:6806/263801485,v1:192.168.122.100:6807/263801485]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct  1 12:15:21 np0005464891 ceph-osd[87649]: osd.0 8 tick checking mon for new map
Oct  1 12:15:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/263801485,v1:192.168.122.100:6807/263801485]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct  1 12:15:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Oct  1 12:15:21 np0005464891 ceph-mon[74303]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1215354502,v1:192.168.122.100:6803/1215354502] boot
Oct  1 12:15:21 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Oct  1 12:15:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct  1 12:15:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/263801485,v1:192.168.122.100:6807/263801485]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  1 12:15:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct  1 12:15:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  1 12:15:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  1 12:15:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 12:15:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 12:15:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 12:15:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 12:15:21 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  1 12:15:21 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 12:15:21 np0005464891 ceph-osd[87649]: osd.0 9 state: booting -> active
Oct  1 12:15:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 12:15:22 np0005464891 ceph-mgr[74592]: [devicehealth INFO root] creating mgr pool
Oct  1 12:15:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Oct  1 12:15:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct  1 12:15:22 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2-activate-test[89383]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct  1 12:15:22 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2-activate-test[89383]:                            [--no-systemd] [--no-tmpfs]
Oct  1 12:15:22 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2-activate-test[89383]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct  1 12:15:22 np0005464891 systemd[1]: libpod-33ffab07c5a31b73c4461a21c9328fd72fbe70c5d423d52bd0e6a33ee52f6f4d.scope: Deactivated successfully.
Oct  1 12:15:22 np0005464891 podman[89390]: 2025-10-01 16:15:22.308556341 +0000 UTC m=+0.029220286 container died 33ffab07c5a31b73c4461a21c9328fd72fbe70c5d423d52bd0e6a33ee52f6f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2-activate-test, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:15:22 np0005464891 systemd[1]: var-lib-containers-storage-overlay-78ea96a45400155c51f68731ac67151a4d547b9168b81161cf2a41b58098fa14-merged.mount: Deactivated successfully.
Oct  1 12:15:22 np0005464891 podman[89390]: 2025-10-01 16:15:22.385665588 +0000 UTC m=+0.106329503 container remove 33ffab07c5a31b73c4461a21c9328fd72fbe70c5d423d52bd0e6a33ee52f6f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  1 12:15:22 np0005464891 systemd[1]: libpod-conmon-33ffab07c5a31b73c4461a21c9328fd72fbe70c5d423d52bd0e6a33ee52f6f4d.scope: Deactivated successfully.
Oct  1 12:15:22 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct  1 12:15:22 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct  1 12:15:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Oct  1 12:15:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  1 12:15:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/263801485,v1:192.168.122.100:6807/263801485]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  1 12:15:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct  1 12:15:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Oct  1 12:15:22 np0005464891 ceph-osd[88747]: osd.1 0 done with init, starting boot process
Oct  1 12:15:22 np0005464891 ceph-osd[88747]: osd.1 0 start_boot
Oct  1 12:15:22 np0005464891 ceph-osd[88747]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct  1 12:15:22 np0005464891 ceph-osd[88747]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct  1 12:15:22 np0005464891 ceph-osd[88747]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct  1 12:15:22 np0005464891 ceph-osd[88747]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct  1 12:15:22 np0005464891 ceph-osd[88747]: osd.1 0  bench count 12288000 bsize 4 KiB
Oct  1 12:15:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Oct  1 12:15:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct  1 12:15:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct  1 12:15:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct  1 12:15:22 np0005464891 systemd[1]: Reloading.
Oct  1 12:15:22 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Oct  1 12:15:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 12:15:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 12:15:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 12:15:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 12:15:22 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  1 12:15:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Oct  1 12:15:22 np0005464891 ceph-osd[87649]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  1 12:15:22 np0005464891 ceph-osd[87649]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct  1 12:15:22 np0005464891 ceph-osd[87649]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  1 12:15:22 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 12:15:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct  1 12:15:22 np0005464891 ceph-mon[74303]: from='osd.1 [v2:192.168.122.100:6806/263801485,v1:192.168.122.100:6807/263801485]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct  1 12:15:22 np0005464891 ceph-mon[74303]: osd.0 [v2:192.168.122.100:6802/1215354502,v1:192.168.122.100:6803/1215354502] boot
Oct  1 12:15:22 np0005464891 ceph-mon[74303]: from='osd.1 [v2:192.168.122.100:6806/263801485,v1:192.168.122.100:6807/263801485]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  1 12:15:22 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct  1 12:15:22 np0005464891 ceph-mgr[74592]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/263801485; not ready for session (expect reconnect)
Oct  1 12:15:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 12:15:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 12:15:22 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  1 12:15:22 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:15:22 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:15:23 np0005464891 systemd[1]: Reloading.
Oct  1 12:15:23 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:15:23 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:15:23 np0005464891 systemd[1]: Starting Ceph osd.2 for 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5...
Oct  1 12:15:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Oct  1 12:15:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct  1 12:15:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Oct  1 12:15:23 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Oct  1 12:15:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 12:15:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 12:15:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 12:15:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 12:15:23 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  1 12:15:23 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 12:15:23 np0005464891 ceph-mgr[74592]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/263801485; not ready for session (expect reconnect)
Oct  1 12:15:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 12:15:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 12:15:23 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  1 12:15:23 np0005464891 ceph-mon[74303]: from='osd.1 [v2:192.168.122.100:6806/263801485,v1:192.168.122.100:6807/263801485]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  1 12:15:23 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct  1 12:15:23 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct  1 12:15:23 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct  1 12:15:23 np0005464891 podman[89547]: 2025-10-01 16:15:23.795935106 +0000 UTC m=+0.070998548 container create e8457081d8d34b68c7f3b2fde419fb2d39e5ae3b9770490d6d0003d0731af412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 12:15:23 np0005464891 podman[89547]: 2025-10-01 16:15:23.764866826 +0000 UTC m=+0.039930298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:23 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:23 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456547fc41ab670f52ae39d462a78a608bc9511df837075a95ec0effeb76480f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:23 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456547fc41ab670f52ae39d462a78a608bc9511df837075a95ec0effeb76480f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:23 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456547fc41ab670f52ae39d462a78a608bc9511df837075a95ec0effeb76480f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:23 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456547fc41ab670f52ae39d462a78a608bc9511df837075a95ec0effeb76480f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:23 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456547fc41ab670f52ae39d462a78a608bc9511df837075a95ec0effeb76480f/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:23 np0005464891 podman[89547]: 2025-10-01 16:15:23.924104674 +0000 UTC m=+0.199168166 container init e8457081d8d34b68c7f3b2fde419fb2d39e5ae3b9770490d6d0003d0731af412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2-activate, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:15:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v38: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct  1 12:15:23 np0005464891 podman[89547]: 2025-10-01 16:15:23.940631128 +0000 UTC m=+0.215694570 container start e8457081d8d34b68c7f3b2fde419fb2d39e5ae3b9770490d6d0003d0731af412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2-activate, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:15:23 np0005464891 podman[89547]: 2025-10-01 16:15:23.95132839 +0000 UTC m=+0.226391822 container attach e8457081d8d34b68c7f3b2fde419fb2d39e5ae3b9770490d6d0003d0731af412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:15:24 np0005464891 ceph-mgr[74592]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/263801485; not ready for session (expect reconnect)
Oct  1 12:15:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 12:15:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 12:15:24 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  1 12:15:25 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2-activate[89562]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  1 12:15:25 np0005464891 bash[89547]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  1 12:15:25 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2-activate[89562]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct  1 12:15:25 np0005464891 bash[89547]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct  1 12:15:25 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2-activate[89562]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct  1 12:15:25 np0005464891 bash[89547]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct  1 12:15:25 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2-activate[89562]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct  1 12:15:25 np0005464891 bash[89547]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct  1 12:15:25 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2-activate[89562]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct  1 12:15:25 np0005464891 bash[89547]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct  1 12:15:25 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2-activate[89562]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  1 12:15:25 np0005464891 bash[89547]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  1 12:15:25 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2-activate[89562]: --> ceph-volume raw activate successful for osd ID: 2
Oct  1 12:15:25 np0005464891 bash[89547]: --> ceph-volume raw activate successful for osd ID: 2
Oct  1 12:15:25 np0005464891 systemd[1]: libpod-e8457081d8d34b68c7f3b2fde419fb2d39e5ae3b9770490d6d0003d0731af412.scope: Deactivated successfully.
Oct  1 12:15:25 np0005464891 systemd[1]: libpod-e8457081d8d34b68c7f3b2fde419fb2d39e5ae3b9770490d6d0003d0731af412.scope: Consumed 1.247s CPU time.
Oct  1 12:15:25 np0005464891 podman[89547]: 2025-10-01 16:15:25.167870317 +0000 UTC m=+1.442933739 container died e8457081d8d34b68c7f3b2fde419fb2d39e5ae3b9770490d6d0003d0731af412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2-activate, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:15:25 np0005464891 systemd[1]: var-lib-containers-storage-overlay-456547fc41ab670f52ae39d462a78a608bc9511df837075a95ec0effeb76480f-merged.mount: Deactivated successfully.
Oct  1 12:15:25 np0005464891 podman[89547]: 2025-10-01 16:15:25.275527791 +0000 UTC m=+1.550591193 container remove e8457081d8d34b68c7f3b2fde419fb2d39e5ae3b9770490d6d0003d0731af412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2-activate, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:15:25 np0005464891 podman[89730]: 2025-10-01 16:15:25.579292767 +0000 UTC m=+0.074365272 container create 11b91835b8563e2546215de638dade50d85ec4a74c9c4614eaa37181f6d013d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:15:25 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afa0be920d806bc8701ff78e7283d892da4956ace3c0adc2dc7af7f468f83e1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:25 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afa0be920d806bc8701ff78e7283d892da4956ace3c0adc2dc7af7f468f83e1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:25 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afa0be920d806bc8701ff78e7283d892da4956ace3c0adc2dc7af7f468f83e1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:25 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afa0be920d806bc8701ff78e7283d892da4956ace3c0adc2dc7af7f468f83e1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:25 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afa0be920d806bc8701ff78e7283d892da4956ace3c0adc2dc7af7f468f83e1c/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:25 np0005464891 podman[89730]: 2025-10-01 16:15:25.642146075 +0000 UTC m=+0.137218560 container init 11b91835b8563e2546215de638dade50d85ec4a74c9c4614eaa37181f6d013d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:15:25 np0005464891 podman[89730]: 2025-10-01 16:15:25.54961467 +0000 UTC m=+0.044687165 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:25 np0005464891 podman[89730]: 2025-10-01 16:15:25.652848597 +0000 UTC m=+0.147921072 container start 11b91835b8563e2546215de638dade50d85ec4a74c9c4614eaa37181f6d013d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Oct  1 12:15:25 np0005464891 bash[89730]: 11b91835b8563e2546215de638dade50d85ec4a74c9c4614eaa37181f6d013d5
Oct  1 12:15:25 np0005464891 systemd[1]: Started Ceph osd.2 for 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5.
Oct  1 12:15:25 np0005464891 ceph-osd[88747]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 32.863 iops: 8412.862 elapsed_sec: 0.357
Oct  1 12:15:25 np0005464891 ceph-osd[88747]: log_channel(cluster) log [WRN] : OSD bench result of 8412.861798 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  1 12:15:25 np0005464891 ceph-osd[88747]: osd.1 0 waiting for initial osdmap
Oct  1 12:15:25 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1[88743]: 2025-10-01T16:15:25.689+0000 7fd715562640 -1 osd.1 0 waiting for initial osdmap
Oct  1 12:15:25 np0005464891 ceph-osd[89750]: set uid:gid to 167:167 (ceph:ceph)
Oct  1 12:15:25 np0005464891 ceph-osd[89750]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct  1 12:15:25 np0005464891 ceph-osd[89750]: pidfile_write: ignore empty --pid-file
Oct  1 12:15:25 np0005464891 ceph-osd[89750]: bdev(0x56404cd3b800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  1 12:15:25 np0005464891 ceph-osd[89750]: bdev(0x56404cd3b800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  1 12:15:25 np0005464891 ceph-osd[89750]: bdev(0x56404cd3b800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 12:15:25 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  1 12:15:25 np0005464891 ceph-osd[89750]: bdev(0x56404db73800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  1 12:15:25 np0005464891 ceph-osd[89750]: bdev(0x56404db73800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  1 12:15:25 np0005464891 ceph-osd[89750]: bdev(0x56404db73800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 12:15:25 np0005464891 ceph-osd[89750]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct  1 12:15:25 np0005464891 ceph-osd[89750]: bdev(0x56404db73800 /var/lib/ceph/osd/ceph-2/block) close
Oct  1 12:15:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:15:25 np0005464891 ceph-osd[88747]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  1 12:15:25 np0005464891 ceph-osd[88747]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct  1 12:15:25 np0005464891 ceph-osd[88747]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  1 12:15:25 np0005464891 ceph-osd[88747]: osd.1 11 check_osdmap_features require_osd_release unknown -> reef
Oct  1 12:15:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:15:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:25 np0005464891 ceph-osd[88747]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  1 12:15:25 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-1[88743]: 2025-10-01T16:15:25.713+0000 7fd710b8a640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  1 12:15:25 np0005464891 ceph-osd[88747]: osd.1 11 set_numa_affinity not setting numa affinity
Oct  1 12:15:25 np0005464891 ceph-osd[88747]: osd.1 11 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Oct  1 12:15:25 np0005464891 ceph-osd[89750]: bdev(0x56404cd3b800 /var/lib/ceph/osd/ceph-2/block) close
Oct  1 12:15:25 np0005464891 ceph-mgr[74592]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/263801485; not ready for session (expect reconnect)
Oct  1 12:15:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 12:15:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 12:15:25 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  1 12:15:25 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:25 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v39: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct  1 12:15:25 np0005464891 ceph-osd[89750]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Oct  1 12:15:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:15:25 np0005464891 ceph-osd[89750]: load: jerasure load: lrc 
Oct  1 12:15:25 np0005464891 ceph-osd[89750]: bdev(0x56404dbf4c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bdev(0x56404dbf4c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bdev(0x56404dbf4c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bdev(0x56404dbf4c00 /var/lib/ceph/osd/ceph-2/block) close
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bdev(0x56404dbf4c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bdev(0x56404dbf4c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bdev(0x56404dbf4c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bdev(0x56404dbf4c00 /var/lib/ceph/osd/ceph-2/block) close
Oct  1 12:15:26 np0005464891 podman[89914]: 2025-10-01 16:15:26.485658951 +0000 UTC m=+0.061924127 container create 8a42fd0ec18a5cb1653dc740cb06de77d577da3db0e7b8317960f66bf872299c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_jackson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:15:26 np0005464891 systemd[1]: Started libpod-conmon-8a42fd0ec18a5cb1653dc740cb06de77d577da3db0e7b8317960f66bf872299c.scope.
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bdev(0x56404dbf4c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bdev(0x56404dbf4c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  1 12:15:26 np0005464891 podman[89914]: 2025-10-01 16:15:26.459541482 +0000 UTC m=+0.035806718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bdev(0x56404dbf4c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bdev(0x56404dbf5400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bdev(0x56404dbf5400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bdev(0x56404dbf5400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bluefs mount
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bluefs mount shared_bdev_used = 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: RocksDB version: 7.9.2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Git sha 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: DB SUMMARY
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: DB Session ID:  SLRBZGLF50HR9KMTYLP9
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: CURRENT file:  CURRENT
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: IDENTITY file:  IDENTITY
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                         Options.error_if_exists: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.create_if_missing: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                         Options.paranoid_checks: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                                     Options.env: 0x56404dbc5c70
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                                Options.info_log: 0x56404cdc28a0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.max_file_opening_threads: 16
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                              Options.statistics: (nil)
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.use_fsync: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.max_log_file_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                         Options.allow_fallocate: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.use_direct_reads: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.create_missing_column_families: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                              Options.db_log_dir: 
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                                 Options.wal_dir: db.wal
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.advise_random_on_open: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.write_buffer_manager: 0x56404dcd8460
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                            Options.rate_limiter: (nil)
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.unordered_write: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.row_cache: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                              Options.wal_filter: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.allow_ingest_behind: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.two_write_queues: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.manual_wal_flush: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.wal_compression: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.atomic_flush: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.log_readahead_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.allow_data_in_errors: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.db_host_id: __hostname__
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.max_background_jobs: 4
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.max_background_compactions: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.max_subcompactions: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.max_open_files: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.bytes_per_sync: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.max_background_flushes: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Compression algorithms supported:
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: #011kZSTD supported: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: #011kXpressCompression supported: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: #011kBZip2Compression supported: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: #011kLZ4Compression supported: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: #011kZlibCompression supported: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: #011kSnappyCompression supported: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56404cdc22c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56404cdaf1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56404cdc22c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56404cdaf1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56404cdc22c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56404cdaf1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:26 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56404cdc22c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56404cdaf1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56404cdc22c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56404cdaf1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56404cdc22c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56404cdaf1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56404cdc22c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56404cdaf1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56404cdc2240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56404cdaf090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56404cdc2240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56404cdaf090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:26 np0005464891 podman[89914]: 2025-10-01 16:15:26.601448126 +0000 UTC m=+0.177713312 container init 8a42fd0ec18a5cb1653dc740cb06de77d577da3db0e7b8317960f66bf872299c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_jackson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56404cdc2240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56404cdaf090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f7c00a8a-e6ed-4766-a24b-36dfc97849a9
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335326589445, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335326589710, "job": 1, "event": "recovery_finished"}
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: freelist init
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: freelist _read_cfg
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bluefs umount
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bdev(0x56404dbf5400 /var/lib/ceph/osd/ceph-2/block) close
Oct  1 12:15:26 np0005464891 podman[89914]: 2025-10-01 16:15:26.609665987 +0000 UTC m=+0.185931143 container start 8a42fd0ec18a5cb1653dc740cb06de77d577da3db0e7b8317960f66bf872299c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_jackson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 12:15:26 np0005464891 eloquent_jackson[89930]: 167 167
Oct  1 12:15:26 np0005464891 systemd[1]: libpod-8a42fd0ec18a5cb1653dc740cb06de77d577da3db0e7b8317960f66bf872299c.scope: Deactivated successfully.
Oct  1 12:15:26 np0005464891 podman[89914]: 2025-10-01 16:15:26.623394152 +0000 UTC m=+0.199659318 container attach 8a42fd0ec18a5cb1653dc740cb06de77d577da3db0e7b8317960f66bf872299c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_jackson, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  1 12:15:26 np0005464891 podman[89914]: 2025-10-01 16:15:26.624442268 +0000 UTC m=+0.200707424 container died 8a42fd0ec18a5cb1653dc740cb06de77d577da3db0e7b8317960f66bf872299c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_jackson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:15:26 np0005464891 systemd[1]: var-lib-containers-storage-overlay-7114e269c3d66640d2bcdef3b2d53759582d492c0960f9897e3d2b1f2785e512-merged.mount: Deactivated successfully.
Oct  1 12:15:26 np0005464891 ceph-osd[88747]: osd.1 11 tick checking mon for new map
Oct  1 12:15:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Oct  1 12:15:26 np0005464891 ceph-mgr[74592]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/263801485; not ready for session (expect reconnect)
Oct  1 12:15:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 12:15:26 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 12:15:26 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  1 12:15:26 np0005464891 podman[89914]: 2025-10-01 16:15:26.73444621 +0000 UTC m=+0.310711386 container remove 8a42fd0ec18a5cb1653dc740cb06de77d577da3db0e7b8317960f66bf872299c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:15:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e12 e12: 3 total, 2 up, 3 in
Oct  1 12:15:26 np0005464891 systemd[1]: libpod-conmon-8a42fd0ec18a5cb1653dc740cb06de77d577da3db0e7b8317960f66bf872299c.scope: Deactivated successfully.
Oct  1 12:15:26 np0005464891 ceph-mon[74303]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/263801485,v1:192.168.122.100:6807/263801485] boot
Oct  1 12:15:26 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 2 up, 3 in
Oct  1 12:15:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 12:15:26 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 12:15:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 12:15:26 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 12:15:26 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 12:15:26 np0005464891 ceph-osd[88747]: osd.1 12 state: booting -> active
Oct  1 12:15:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:15:26 np0005464891 ceph-mon[74303]: OSD bench result of 8412.861798 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  1 12:15:26 np0005464891 ceph-mon[74303]: osd.1 [v2:192.168.122.100:6806/263801485,v1:192.168.122.100:6807/263801485] boot
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bdev(0x56404dbf5400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bdev(0x56404dbf5400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bdev(0x56404dbf5400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bluefs mount
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bluefs mount shared_bdev_used = 4718592
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: RocksDB version: 7.9.2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Git sha 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: DB SUMMARY
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: DB Session ID:  SLRBZGLF50HR9KMTYLP8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: CURRENT file:  CURRENT
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: IDENTITY file:  IDENTITY
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                         Options.error_if_exists: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.create_if_missing: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                         Options.paranoid_checks: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                                     Options.env: 0x56404dd80460
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                                Options.info_log: 0x56404cdc2620
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.max_file_opening_threads: 16
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                              Options.statistics: (nil)
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.use_fsync: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.max_log_file_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                         Options.allow_fallocate: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.use_direct_reads: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.create_missing_column_families: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                              Options.db_log_dir: 
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                                 Options.wal_dir: db.wal
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.advise_random_on_open: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.write_buffer_manager: 0x56404dcd8460
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                            Options.rate_limiter: (nil)
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.unordered_write: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.row_cache: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                              Options.wal_filter: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.allow_ingest_behind: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.two_write_queues: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.manual_wal_flush: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.wal_compression: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.atomic_flush: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.log_readahead_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.allow_data_in_errors: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.db_host_id: __hostname__
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.max_background_jobs: 4
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.max_background_compactions: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.max_subcompactions: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.max_open_files: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.bytes_per_sync: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.max_background_flushes: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Compression algorithms supported:
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: #011kZSTD supported: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: #011kXpressCompression supported: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: #011kBZip2Compression supported: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: #011kLZ4Compression supported: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: #011kZlibCompression supported: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: #011kSnappyCompression supported: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56404cdc2a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56404cdaf1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56404cdc2a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56404cdaf1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56404cdc2a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56404cdaf1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56404cdc2a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56404cdaf1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56404cdc2a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56404cdaf1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56404cdc2a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56404cdaf1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56404cdc2a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56404cdaf1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56404cdc2380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56404cdaf090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56404cdc2380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56404cdaf090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:           Options.merge_operator: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56404cdc2380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56404cdaf090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.compression: LZ4
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.num_levels: 7
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.bloom_locality: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                               Options.ttl: 2592000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                       Options.enable_blob_files: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                           Options.min_blob_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f7c00a8a-e6ed-4766-a24b-36dfc97849a9
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335326867739, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335326898304, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335326, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f7c00a8a-e6ed-4766-a24b-36dfc97849a9", "db_session_id": "SLRBZGLF50HR9KMTYLP8", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335326950874, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335326, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f7c00a8a-e6ed-4766-a24b-36dfc97849a9", "db_session_id": "SLRBZGLF50HR9KMTYLP8", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:15:26 np0005464891 podman[90197]: 2025-10-01 16:15:26.953865721 +0000 UTC m=+0.083755910 container create 8b01767022ae47645a635ea38a0f33f97071e2aa39543e779c26d371ef5a3e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_torvalds, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335326955642, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335326, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f7c00a8a-e6ed-4766-a24b-36dfc97849a9", "db_session_id": "SLRBZGLF50HR9KMTYLP8", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335326972454, "job": 1, "event": "recovery_finished"}
Oct  1 12:15:26 np0005464891 ceph-osd[89750]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct  1 12:15:26 np0005464891 podman[90197]: 2025-10-01 16:15:26.894936508 +0000 UTC m=+0.024826717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:27 np0005464891 systemd[1]: Started libpod-conmon-8b01767022ae47645a635ea38a0f33f97071e2aa39543e779c26d371ef5a3e7c.scope.
Oct  1 12:15:27 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:27 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab66ec7cecf3822e7bffb2a037e43ef1beaceebb70768b39ff80b1ec7e06782b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:27 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab66ec7cecf3822e7bffb2a037e43ef1beaceebb70768b39ff80b1ec7e06782b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:27 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab66ec7cecf3822e7bffb2a037e43ef1beaceebb70768b39ff80b1ec7e06782b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:27 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab66ec7cecf3822e7bffb2a037e43ef1beaceebb70768b39ff80b1ec7e06782b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:27 np0005464891 podman[90197]: 2025-10-01 16:15:27.134247266 +0000 UTC m=+0.264137555 container init 8b01767022ae47645a635ea38a0f33f97071e2aa39543e779c26d371ef5a3e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_torvalds, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:15:27 np0005464891 podman[90197]: 2025-10-01 16:15:27.149581261 +0000 UTC m=+0.279471490 container start 8b01767022ae47645a635ea38a0f33f97071e2aa39543e779c26d371ef5a3e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:15:27 np0005464891 podman[90197]: 2025-10-01 16:15:27.153560489 +0000 UTC m=+0.283450718 container attach 8b01767022ae47645a635ea38a0f33f97071e2aa39543e779c26d371ef5a3e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  1 12:15:27 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56404cf1c000
Oct  1 12:15:27 np0005464891 ceph-osd[89750]: rocksdb: DB pointer 0x56404dcb7a00
Oct  1 12:15:27 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  1 12:15:27 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Oct  1 12:15:27 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Oct  1 12:15:27 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 12:15:27 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.3 total, 0.3 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.3 total, 0.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56404cdaf1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.3 total, 0.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56404cdaf1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.3 total, 0.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56404cdaf1f0#2 capacity: 460.80 MB usag
Oct  1 12:15:27 np0005464891 ceph-osd[89750]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct  1 12:15:27 np0005464891 ceph-osd[89750]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct  1 12:15:27 np0005464891 ceph-osd[89750]: _get_class not permitted to load lua
Oct  1 12:15:27 np0005464891 ceph-osd[89750]: _get_class not permitted to load sdk
Oct  1 12:15:27 np0005464891 ceph-osd[89750]: _get_class not permitted to load test_remote_reads
Oct  1 12:15:27 np0005464891 ceph-osd[89750]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct  1 12:15:27 np0005464891 ceph-osd[89750]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct  1 12:15:27 np0005464891 ceph-osd[89750]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct  1 12:15:27 np0005464891 ceph-osd[89750]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct  1 12:15:27 np0005464891 ceph-osd[89750]: osd.2 0 load_pgs
Oct  1 12:15:27 np0005464891 ceph-osd[89750]: osd.2 0 load_pgs opened 0 pgs
Oct  1 12:15:27 np0005464891 ceph-osd[89750]: osd.2 0 log_to_monitors true
Oct  1 12:15:27 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2[89746]: 2025-10-01T16:15:27.165+0000 7f64fbfff740 -1 osd.2 0 log_to_monitors true
Oct  1 12:15:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Oct  1 12:15:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/967598743,v1:192.168.122.100:6811/967598743]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  1 12:15:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Oct  1 12:15:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/967598743,v1:192.168.122.100:6811/967598743]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct  1 12:15:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Oct  1 12:15:27 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Oct  1 12:15:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct  1 12:15:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/967598743,v1:192.168.122.100:6811/967598743]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  1 12:15:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct  1 12:15:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 12:15:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 12:15:27 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 12:15:27 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:15:27 np0005464891 ceph-mgr[74592]: [devicehealth INFO root] creating main.db for devicehealth
Oct  1 12:15:27 np0005464891 ceph-mon[74303]: from='osd.2 [v2:192.168.122.100:6810/967598743,v1:192.168.122.100:6811/967598743]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  1 12:15:27 np0005464891 ceph-mon[74303]: from='osd.2 [v2:192.168.122.100:6810/967598743,v1:192.168.122.100:6811/967598743]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct  1 12:15:27 np0005464891 ceph-mon[74303]: from='osd.2 [v2:192.168.122.100:6810/967598743,v1:192.168.122.100:6811/967598743]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  1 12:15:27 np0005464891 ceph-mgr[74592]: [devicehealth INFO root] Check health
Oct  1 12:15:27 np0005464891 ceph-mgr[74592]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Oct  1 12:15:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct  1 12:15:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct  1 12:15:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct  1 12:15:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  1 12:15:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v42: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct  1 12:15:28 np0005464891 ecstatic_torvalds[90348]: {
Oct  1 12:15:28 np0005464891 ecstatic_torvalds[90348]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:15:28 np0005464891 ecstatic_torvalds[90348]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:15:28 np0005464891 ecstatic_torvalds[90348]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:15:28 np0005464891 ecstatic_torvalds[90348]:        "osd_id": 2,
Oct  1 12:15:28 np0005464891 ecstatic_torvalds[90348]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:15:28 np0005464891 ecstatic_torvalds[90348]:        "type": "bluestore"
Oct  1 12:15:28 np0005464891 ecstatic_torvalds[90348]:    },
Oct  1 12:15:28 np0005464891 ecstatic_torvalds[90348]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:15:28 np0005464891 ecstatic_torvalds[90348]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:15:28 np0005464891 ecstatic_torvalds[90348]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:15:28 np0005464891 ecstatic_torvalds[90348]:        "osd_id": 0,
Oct  1 12:15:28 np0005464891 ecstatic_torvalds[90348]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:15:28 np0005464891 ecstatic_torvalds[90348]:        "type": "bluestore"
Oct  1 12:15:28 np0005464891 ecstatic_torvalds[90348]:    },
Oct  1 12:15:28 np0005464891 ecstatic_torvalds[90348]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:15:28 np0005464891 ecstatic_torvalds[90348]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:15:28 np0005464891 ecstatic_torvalds[90348]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:15:28 np0005464891 ecstatic_torvalds[90348]:        "osd_id": 1,
Oct  1 12:15:28 np0005464891 ecstatic_torvalds[90348]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:15:28 np0005464891 ecstatic_torvalds[90348]:        "type": "bluestore"
Oct  1 12:15:28 np0005464891 ecstatic_torvalds[90348]:    }
Oct  1 12:15:28 np0005464891 ecstatic_torvalds[90348]: }
Oct  1 12:15:28 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct  1 12:15:28 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct  1 12:15:28 np0005464891 systemd[1]: libpod-8b01767022ae47645a635ea38a0f33f97071e2aa39543e779c26d371ef5a3e7c.scope: Deactivated successfully.
Oct  1 12:15:28 np0005464891 podman[90197]: 2025-10-01 16:15:28.254119787 +0000 UTC m=+1.384010046 container died 8b01767022ae47645a635ea38a0f33f97071e2aa39543e779c26d371ef5a3e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 12:15:28 np0005464891 systemd[1]: libpod-8b01767022ae47645a635ea38a0f33f97071e2aa39543e779c26d371ef5a3e7c.scope: Consumed 1.109s CPU time.
Oct  1 12:15:28 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ab66ec7cecf3822e7bffb2a037e43ef1beaceebb70768b39ff80b1ec7e06782b-merged.mount: Deactivated successfully.
Oct  1 12:15:28 np0005464891 podman[90197]: 2025-10-01 16:15:28.324269904 +0000 UTC m=+1.454160103 container remove 8b01767022ae47645a635ea38a0f33f97071e2aa39543e779c26d371ef5a3e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_torvalds, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:15:28 np0005464891 systemd[1]: libpod-conmon-8b01767022ae47645a635ea38a0f33f97071e2aa39543e779c26d371ef5a3e7c.scope: Deactivated successfully.
Oct  1 12:15:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:15:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:15:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Oct  1 12:15:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/967598743,v1:192.168.122.100:6811/967598743]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  1 12:15:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Oct  1 12:15:28 np0005464891 ceph-osd[89750]: osd.2 0 done with init, starting boot process
Oct  1 12:15:28 np0005464891 ceph-osd[89750]: osd.2 0 start_boot
Oct  1 12:15:28 np0005464891 ceph-osd[89750]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct  1 12:15:28 np0005464891 ceph-osd[89750]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct  1 12:15:28 np0005464891 ceph-osd[89750]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct  1 12:15:28 np0005464891 ceph-osd[89750]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct  1 12:15:28 np0005464891 ceph-osd[89750]: osd.2 0  bench count 12288000 bsize 4 KiB
Oct  1 12:15:28 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Oct  1 12:15:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 12:15:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 12:15:28 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 12:15:28 np0005464891 ceph-mgr[74592]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/967598743; not ready for session (expect reconnect)
Oct  1 12:15:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 12:15:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 12:15:28 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 12:15:28 np0005464891 ceph-mon[74303]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct  1 12:15:28 np0005464891 ceph-mon[74303]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct  1 12:15:28 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:28 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:28 np0005464891 ceph-mon[74303]: from='osd.2 [v2:192.168.122.100:6810/967598743,v1:192.168.122.100:6811/967598743]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  1 12:15:28 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.ieawdb(active, since 76s)
Oct  1 12:15:29 np0005464891 podman[90658]: 2025-10-01 16:15:29.597415085 +0000 UTC m=+0.104722703 container exec 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 12:15:29 np0005464891 podman[90658]: 2025-10-01 16:15:29.735958017 +0000 UTC m=+0.243265585 container exec_died 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:15:29 np0005464891 ceph-mgr[74592]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/967598743; not ready for session (expect reconnect)
Oct  1 12:15:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 12:15:29 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 12:15:29 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 12:15:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v44: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct  1 12:15:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:15:30 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:15:30 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:30 np0005464891 ceph-mgr[74592]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/967598743; not ready for session (expect reconnect)
Oct  1 12:15:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 12:15:30 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 12:15:30 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 12:15:30 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:30 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:15:31 np0005464891 ceph-mgr[74592]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/967598743; not ready for session (expect reconnect)
Oct  1 12:15:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 12:15:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 12:15:31 np0005464891 ceph-mgr[74592]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 12:15:31 np0005464891 ceph-osd[89750]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 30.790 iops: 7882.129 elapsed_sec: 0.381
Oct  1 12:15:31 np0005464891 ceph-osd[89750]: log_channel(cluster) log [WRN] : OSD bench result of 7882.129024 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  1 12:15:31 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2[89746]: 2025-10-01T16:15:31.806+0000 7f64f7f7f640 -1 osd.2 0 waiting for initial osdmap
Oct  1 12:15:31 np0005464891 ceph-osd[89750]: osd.2 0 waiting for initial osdmap
Oct  1 12:15:31 np0005464891 ceph-osd[89750]: osd.2 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  1 12:15:31 np0005464891 ceph-osd[89750]: osd.2 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct  1 12:15:31 np0005464891 ceph-osd[89750]: osd.2 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  1 12:15:31 np0005464891 ceph-osd[89750]: osd.2 14 check_osdmap_features require_osd_release unknown -> reef
Oct  1 12:15:31 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-osd-2[89746]: 2025-10-01T16:15:31.840+0000 7f64f35a7640 -1 osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  1 12:15:31 np0005464891 ceph-osd[89750]: osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  1 12:15:31 np0005464891 ceph-osd[89750]: osd.2 14 set_numa_affinity not setting numa affinity
Oct  1 12:15:31 np0005464891 ceph-osd[89750]: osd.2 14 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Oct  1 12:15:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Oct  1 12:15:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e15 e15: 3 total, 3 up, 3 in
Oct  1 12:15:31 np0005464891 ceph-mon[74303]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/967598743,v1:192.168.122.100:6811/967598743] boot
Oct  1 12:15:31 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 3 up, 3 in
Oct  1 12:15:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 12:15:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 12:15:31 np0005464891 ceph-osd[89750]: osd.2 15 state: booting -> active
Oct  1 12:15:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct  1 12:15:32 np0005464891 podman[91051]: 2025-10-01 16:15:32.112037446 +0000 UTC m=+0.044153192 container create 95723c2c9b6757af27e36ab5ba567b47d24a18a109ddced1d9b316c2c6e4ebfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hofstadter, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:15:32 np0005464891 systemd[1]: Started libpod-conmon-95723c2c9b6757af27e36ab5ba567b47d24a18a109ddced1d9b316c2c6e4ebfa.scope.
Oct  1 12:15:32 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:32 np0005464891 podman[91051]: 2025-10-01 16:15:32.093094822 +0000 UTC m=+0.025210568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:32 np0005464891 podman[91051]: 2025-10-01 16:15:32.195377245 +0000 UTC m=+0.127492991 container init 95723c2c9b6757af27e36ab5ba567b47d24a18a109ddced1d9b316c2c6e4ebfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hofstadter, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  1 12:15:32 np0005464891 podman[91051]: 2025-10-01 16:15:32.21068976 +0000 UTC m=+0.142805476 container start 95723c2c9b6757af27e36ab5ba567b47d24a18a109ddced1d9b316c2c6e4ebfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 12:15:32 np0005464891 podman[91051]: 2025-10-01 16:15:32.214761439 +0000 UTC m=+0.146877195 container attach 95723c2c9b6757af27e36ab5ba567b47d24a18a109ddced1d9b316c2c6e4ebfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hofstadter, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:15:32 np0005464891 modest_hofstadter[91067]: 167 167
Oct  1 12:15:32 np0005464891 systemd[1]: libpod-95723c2c9b6757af27e36ab5ba567b47d24a18a109ddced1d9b316c2c6e4ebfa.scope: Deactivated successfully.
Oct  1 12:15:32 np0005464891 podman[91051]: 2025-10-01 16:15:32.219302831 +0000 UTC m=+0.151418587 container died 95723c2c9b6757af27e36ab5ba567b47d24a18a109ddced1d9b316c2c6e4ebfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 12:15:32 np0005464891 systemd[1]: var-lib-containers-storage-overlay-97684bc4ccfda5333b3ecbf4ea8a142efddb8aa11e6a2bb94648274208deda48-merged.mount: Deactivated successfully.
Oct  1 12:15:32 np0005464891 podman[91051]: 2025-10-01 16:15:32.267348597 +0000 UTC m=+0.199464323 container remove 95723c2c9b6757af27e36ab5ba567b47d24a18a109ddced1d9b316c2c6e4ebfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hofstadter, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:15:32 np0005464891 systemd[1]: libpod-conmon-95723c2c9b6757af27e36ab5ba567b47d24a18a109ddced1d9b316c2c6e4ebfa.scope: Deactivated successfully.
Oct  1 12:15:32 np0005464891 podman[91090]: 2025-10-01 16:15:32.479396227 +0000 UTC m=+0.073164131 container create 468ae1b4350265d280eb6f02cdb324ab58413b5ad5893a7b3647028f27745c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_edison, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  1 12:15:32 np0005464891 systemd[1]: Started libpod-conmon-468ae1b4350265d280eb6f02cdb324ab58413b5ad5893a7b3647028f27745c32.scope.
Oct  1 12:15:32 np0005464891 podman[91090]: 2025-10-01 16:15:32.451276189 +0000 UTC m=+0.045044143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:32 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:32 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5733025bbec13927dcc2f48d5da608dab1a8e9459550fd5a525bd05852bff6ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:32 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5733025bbec13927dcc2f48d5da608dab1a8e9459550fd5a525bd05852bff6ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:32 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5733025bbec13927dcc2f48d5da608dab1a8e9459550fd5a525bd05852bff6ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:32 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5733025bbec13927dcc2f48d5da608dab1a8e9459550fd5a525bd05852bff6ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:32 np0005464891 podman[91090]: 2025-10-01 16:15:32.58696002 +0000 UTC m=+0.180727974 container init 468ae1b4350265d280eb6f02cdb324ab58413b5ad5893a7b3647028f27745c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 12:15:32 np0005464891 podman[91090]: 2025-10-01 16:15:32.60165597 +0000 UTC m=+0.195423884 container start 468ae1b4350265d280eb6f02cdb324ab58413b5ad5893a7b3647028f27745c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:15:32 np0005464891 podman[91090]: 2025-10-01 16:15:32.605631947 +0000 UTC m=+0.199399861 container attach 468ae1b4350265d280eb6f02cdb324ab58413b5ad5893a7b3647028f27745c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_edison, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 12:15:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Oct  1 12:15:32 np0005464891 ceph-mon[74303]: OSD bench result of 7882.129024 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  1 12:15:32 np0005464891 ceph-mon[74303]: osd.2 [v2:192.168.122.100:6810/967598743,v1:192.168.122.100:6811/967598743] boot
Oct  1 12:15:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Oct  1 12:15:32 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Oct  1 12:15:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]: [
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:    {
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:        "available": false,
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:        "ceph_device": false,
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:        "lsm_data": {},
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:        "lvs": [],
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:        "path": "/dev/sr0",
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:        "rejected_reasons": [
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "Insufficient space (<5GB)",
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "Has a FileSystem"
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:        ],
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:        "sys_api": {
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "actuators": null,
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "device_nodes": "sr0",
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "devname": "sr0",
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "human_readable_size": "482.00 KB",
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "id_bus": "ata",
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "model": "QEMU DVD-ROM",
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "nr_requests": "2",
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "parent": "/dev/sr0",
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "partitions": {},
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "path": "/dev/sr0",
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "removable": "1",
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "rev": "2.5+",
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "ro": "0",
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "rotational": "0",
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "sas_address": "",
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "sas_device_handle": "",
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "scheduler_mode": "mq-deadline",
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "sectors": 0,
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "sectorsize": "2048",
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "size": 493568.0,
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "support_discard": "2048",
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "type": "disk",
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:            "vendor": "QEMU"
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:        }
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]:    }
Oct  1 12:15:34 np0005464891 inspiring_edison[91107]: ]
Oct  1 12:15:34 np0005464891 systemd[1]: libpod-468ae1b4350265d280eb6f02cdb324ab58413b5ad5893a7b3647028f27745c32.scope: Deactivated successfully.
Oct  1 12:15:34 np0005464891 podman[91090]: 2025-10-01 16:15:34.090367608 +0000 UTC m=+1.684135522 container died 468ae1b4350265d280eb6f02cdb324ab58413b5ad5893a7b3647028f27745c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 12:15:34 np0005464891 systemd[1]: libpod-468ae1b4350265d280eb6f02cdb324ab58413b5ad5893a7b3647028f27745c32.scope: Consumed 1.552s CPU time.
Oct  1 12:15:34 np0005464891 systemd[1]: var-lib-containers-storage-overlay-5733025bbec13927dcc2f48d5da608dab1a8e9459550fd5a525bd05852bff6ef-merged.mount: Deactivated successfully.
Oct  1 12:15:34 np0005464891 podman[91090]: 2025-10-01 16:15:34.16154559 +0000 UTC m=+1.755313504 container remove 468ae1b4350265d280eb6f02cdb324ab58413b5ad5893a7b3647028f27745c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_edison, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:15:34 np0005464891 systemd[1]: libpod-conmon-468ae1b4350265d280eb6f02cdb324ab58413b5ad5893a7b3647028f27745c32.scope: Deactivated successfully.
Oct  1 12:15:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:15:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:15:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Oct  1 12:15:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  1 12:15:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Oct  1 12:15:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  1 12:15:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Oct  1 12:15:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct  1 12:15:34 np0005464891 ceph-mgr[74592]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43643k
Oct  1 12:15:34 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43643k
Oct  1 12:15:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Oct  1 12:15:34 np0005464891 ceph-mgr[74592]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44690500: error parsing value: Value '44690500' is below minimum 939524096
Oct  1 12:15:34 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44690500: error parsing value: Value '44690500' is below minimum 939524096
Oct  1 12:15:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:15:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:15:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:15:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:15:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:15:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:34 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev bca6bb90-0f93-4610-9fc6-798fcbc15219 does not exist
Oct  1 12:15:34 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev c86c9790-ded8-436a-be0f-dc04d1c1ce00 does not exist
Oct  1 12:15:34 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev ee496ff8-b16a-42d5-b0a9-6ae4b1afc1f0 does not exist
Oct  1 12:15:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:15:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:15:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:15:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:15:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:15:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:15:35 np0005464891 podman[92891]: 2025-10-01 16:15:35.078619747 +0000 UTC m=+0.058744009 container create 8d2f60ed0f405a9c91a03a6d80c3e03f518678d6036748524aba276711c54d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_diffie, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:15:35 np0005464891 systemd[1]: Started libpod-conmon-8d2f60ed0f405a9c91a03a6d80c3e03f518678d6036748524aba276711c54d1c.scope.
Oct  1 12:15:35 np0005464891 podman[92891]: 2025-10-01 16:15:35.058321799 +0000 UTC m=+0.038446071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:35 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:35 np0005464891 podman[92891]: 2025-10-01 16:15:35.182304575 +0000 UTC m=+0.162428867 container init 8d2f60ed0f405a9c91a03a6d80c3e03f518678d6036748524aba276711c54d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_diffie, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  1 12:15:35 np0005464891 podman[92891]: 2025-10-01 16:15:35.193070198 +0000 UTC m=+0.173194460 container start 8d2f60ed0f405a9c91a03a6d80c3e03f518678d6036748524aba276711c54d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_diffie, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:15:35 np0005464891 podman[92891]: 2025-10-01 16:15:35.196777409 +0000 UTC m=+0.176901681 container attach 8d2f60ed0f405a9c91a03a6d80c3e03f518678d6036748524aba276711c54d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_diffie, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:15:35 np0005464891 peaceful_diffie[92909]: 167 167
Oct  1 12:15:35 np0005464891 systemd[1]: libpod-8d2f60ed0f405a9c91a03a6d80c3e03f518678d6036748524aba276711c54d1c.scope: Deactivated successfully.
Oct  1 12:15:35 np0005464891 podman[92891]: 2025-10-01 16:15:35.200562581 +0000 UTC m=+0.180686863 container died 8d2f60ed0f405a9c91a03a6d80c3e03f518678d6036748524aba276711c54d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_diffie, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 12:15:35 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:35 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:35 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  1 12:15:35 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  1 12:15:35 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct  1 12:15:35 np0005464891 ceph-mon[74303]: Adjusting osd_memory_target on compute-0 to 43643k
Oct  1 12:15:35 np0005464891 ceph-mon[74303]: Unable to set osd_memory_target on compute-0 to 44690500: error parsing value: Value '44690500' is below minimum 939524096
Oct  1 12:15:35 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:15:35 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:35 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:15:35 np0005464891 systemd[1]: var-lib-containers-storage-overlay-8da550306724874bdcb9b3347230207996603aa6bb32d1234509af24722cc7c2-merged.mount: Deactivated successfully.
Oct  1 12:15:35 np0005464891 podman[92891]: 2025-10-01 16:15:35.249206652 +0000 UTC m=+0.229330924 container remove 8d2f60ed0f405a9c91a03a6d80c3e03f518678d6036748524aba276711c54d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_diffie, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 12:15:35 np0005464891 systemd[1]: libpod-conmon-8d2f60ed0f405a9c91a03a6d80c3e03f518678d6036748524aba276711c54d1c.scope: Deactivated successfully.
Oct  1 12:15:35 np0005464891 podman[92933]: 2025-10-01 16:15:35.512496587 +0000 UTC m=+0.074448334 container create 6c0791a88acadf7f9dcc9fdf304e59f83c8fcc07a726dec7a649f06bffe80f0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct  1 12:15:35 np0005464891 systemd[1]: Started libpod-conmon-6c0791a88acadf7f9dcc9fdf304e59f83c8fcc07a726dec7a649f06bffe80f0c.scope.
Oct  1 12:15:35 np0005464891 podman[92933]: 2025-10-01 16:15:35.485085666 +0000 UTC m=+0.047037503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:35 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:35 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3b1f2e4dc4edee171d7e6326b05578ed526e5ef0f967bc8d28e8ec5abddcdb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:35 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3b1f2e4dc4edee171d7e6326b05578ed526e5ef0f967bc8d28e8ec5abddcdb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:35 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3b1f2e4dc4edee171d7e6326b05578ed526e5ef0f967bc8d28e8ec5abddcdb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:35 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3b1f2e4dc4edee171d7e6326b05578ed526e5ef0f967bc8d28e8ec5abddcdb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:35 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3b1f2e4dc4edee171d7e6326b05578ed526e5ef0f967bc8d28e8ec5abddcdb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:35 np0005464891 podman[92933]: 2025-10-01 16:15:35.618585423 +0000 UTC m=+0.180537180 container init 6c0791a88acadf7f9dcc9fdf304e59f83c8fcc07a726dec7a649f06bffe80f0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_margulis, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  1 12:15:35 np0005464891 podman[92933]: 2025-10-01 16:15:35.633889698 +0000 UTC m=+0.195841445 container start 6c0791a88acadf7f9dcc9fdf304e59f83c8fcc07a726dec7a649f06bffe80f0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_margulis, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:15:35 np0005464891 podman[92933]: 2025-10-01 16:15:35.637844684 +0000 UTC m=+0.199796431 container attach 6c0791a88acadf7f9dcc9fdf304e59f83c8fcc07a726dec7a649f06bffe80f0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_margulis, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 12:15:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct  1 12:15:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:15:36 np0005464891 funny_margulis[92949]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:15:36 np0005464891 funny_margulis[92949]: --> relative data size: 1.0
Oct  1 12:15:36 np0005464891 funny_margulis[92949]: --> All data devices are unavailable
Oct  1 12:15:36 np0005464891 systemd[1]: libpod-6c0791a88acadf7f9dcc9fdf304e59f83c8fcc07a726dec7a649f06bffe80f0c.scope: Deactivated successfully.
Oct  1 12:15:36 np0005464891 systemd[1]: libpod-6c0791a88acadf7f9dcc9fdf304e59f83c8fcc07a726dec7a649f06bffe80f0c.scope: Consumed 1.151s CPU time.
Oct  1 12:15:36 np0005464891 podman[92933]: 2025-10-01 16:15:36.821117038 +0000 UTC m=+1.383068785 container died 6c0791a88acadf7f9dcc9fdf304e59f83c8fcc07a726dec7a649f06bffe80f0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  1 12:15:36 np0005464891 systemd[1]: var-lib-containers-storage-overlay-dd3b1f2e4dc4edee171d7e6326b05578ed526e5ef0f967bc8d28e8ec5abddcdb-merged.mount: Deactivated successfully.
Oct  1 12:15:36 np0005464891 podman[92933]: 2025-10-01 16:15:36.888252881 +0000 UTC m=+1.450204618 container remove 6c0791a88acadf7f9dcc9fdf304e59f83c8fcc07a726dec7a649f06bffe80f0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_margulis, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 12:15:36 np0005464891 systemd[1]: libpod-conmon-6c0791a88acadf7f9dcc9fdf304e59f83c8fcc07a726dec7a649f06bffe80f0c.scope: Deactivated successfully.
Oct  1 12:15:37 np0005464891 podman[93132]: 2025-10-01 16:15:37.720303017 +0000 UTC m=+0.064207213 container create 2719735f50b6c33fea853b6ac4f36791e87b8e9829f1316d728205aa8e6c57d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banzai, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 12:15:37 np0005464891 systemd[1]: Started libpod-conmon-2719735f50b6c33fea853b6ac4f36791e87b8e9829f1316d728205aa8e6c57d3.scope.
Oct  1 12:15:37 np0005464891 podman[93132]: 2025-10-01 16:15:37.693201823 +0000 UTC m=+0.037106059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:37 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:37 np0005464891 podman[93132]: 2025-10-01 16:15:37.819918244 +0000 UTC m=+0.163822490 container init 2719735f50b6c33fea853b6ac4f36791e87b8e9829f1316d728205aa8e6c57d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banzai, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:15:37 np0005464891 podman[93132]: 2025-10-01 16:15:37.830951784 +0000 UTC m=+0.174855970 container start 2719735f50b6c33fea853b6ac4f36791e87b8e9829f1316d728205aa8e6c57d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banzai, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:15:37 np0005464891 podman[93132]: 2025-10-01 16:15:37.834683916 +0000 UTC m=+0.178588112 container attach 2719735f50b6c33fea853b6ac4f36791e87b8e9829f1316d728205aa8e6c57d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:15:37 np0005464891 crazy_banzai[93148]: 167 167
Oct  1 12:15:37 np0005464891 systemd[1]: libpod-2719735f50b6c33fea853b6ac4f36791e87b8e9829f1316d728205aa8e6c57d3.scope: Deactivated successfully.
Oct  1 12:15:37 np0005464891 podman[93132]: 2025-10-01 16:15:37.839384321 +0000 UTC m=+0.183288507 container died 2719735f50b6c33fea853b6ac4f36791e87b8e9829f1316d728205aa8e6c57d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banzai, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 12:15:37 np0005464891 systemd[1]: var-lib-containers-storage-overlay-3eeb5b5d55281180e4bd7bc8425aa7e4976a4f6dbce0b7cf7344acaa1cb6fbd6-merged.mount: Deactivated successfully.
Oct  1 12:15:37 np0005464891 podman[93132]: 2025-10-01 16:15:37.892112342 +0000 UTC m=+0.236016538 container remove 2719735f50b6c33fea853b6ac4f36791e87b8e9829f1316d728205aa8e6c57d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banzai, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 12:15:37 np0005464891 systemd[1]: libpod-conmon-2719735f50b6c33fea853b6ac4f36791e87b8e9829f1316d728205aa8e6c57d3.scope: Deactivated successfully.
Oct  1 12:15:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 879 MiB used, 59 GiB / 60 GiB avail
Oct  1 12:15:38 np0005464891 podman[93173]: 2025-10-01 16:15:38.153958371 +0000 UTC m=+0.076377961 container create e4b74b0f24b6010c895367bbd1156d5ccd40931a721196eefd5e46ef254c7959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 12:15:38 np0005464891 systemd[1]: Started libpod-conmon-e4b74b0f24b6010c895367bbd1156d5ccd40931a721196eefd5e46ef254c7959.scope.
Oct  1 12:15:38 np0005464891 podman[93173]: 2025-10-01 16:15:38.122970072 +0000 UTC m=+0.045389712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:38 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:38 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b454ce7216f66297de3ba2abdd09674e3975ceb018d42a897d29cacdefb734ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:38 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b454ce7216f66297de3ba2abdd09674e3975ceb018d42a897d29cacdefb734ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:38 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b454ce7216f66297de3ba2abdd09674e3975ceb018d42a897d29cacdefb734ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:38 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b454ce7216f66297de3ba2abdd09674e3975ceb018d42a897d29cacdefb734ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:38 np0005464891 podman[93173]: 2025-10-01 16:15:38.251426086 +0000 UTC m=+0.173845636 container init e4b74b0f24b6010c895367bbd1156d5ccd40931a721196eefd5e46ef254c7959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_gould, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:15:38 np0005464891 podman[93173]: 2025-10-01 16:15:38.264805194 +0000 UTC m=+0.187224744 container start e4b74b0f24b6010c895367bbd1156d5ccd40931a721196eefd5e46ef254c7959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_gould, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:15:38 np0005464891 podman[93173]: 2025-10-01 16:15:38.267747376 +0000 UTC m=+0.190166946 container attach e4b74b0f24b6010c895367bbd1156d5ccd40931a721196eefd5e46ef254c7959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:15:38 np0005464891 cool_gould[93189]: {
Oct  1 12:15:38 np0005464891 cool_gould[93189]:    "0": [
Oct  1 12:15:38 np0005464891 cool_gould[93189]:        {
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "devices": [
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "/dev/loop3"
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            ],
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "lv_name": "ceph_lv0",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "lv_size": "21470642176",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "name": "ceph_lv0",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "tags": {
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.cluster_name": "ceph",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.crush_device_class": "",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.encrypted": "0",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.osd_id": "0",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.type": "block",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.vdo": "0"
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            },
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "type": "block",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "vg_name": "ceph_vg0"
Oct  1 12:15:38 np0005464891 cool_gould[93189]:        }
Oct  1 12:15:38 np0005464891 cool_gould[93189]:    ],
Oct  1 12:15:38 np0005464891 cool_gould[93189]:    "1": [
Oct  1 12:15:38 np0005464891 cool_gould[93189]:        {
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "devices": [
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "/dev/loop4"
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            ],
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "lv_name": "ceph_lv1",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "lv_size": "21470642176",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "name": "ceph_lv1",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "tags": {
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.cluster_name": "ceph",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.crush_device_class": "",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.encrypted": "0",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.osd_id": "1",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.type": "block",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.vdo": "0"
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            },
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "type": "block",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "vg_name": "ceph_vg1"
Oct  1 12:15:38 np0005464891 cool_gould[93189]:        }
Oct  1 12:15:38 np0005464891 cool_gould[93189]:    ],
Oct  1 12:15:38 np0005464891 cool_gould[93189]:    "2": [
Oct  1 12:15:38 np0005464891 cool_gould[93189]:        {
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "devices": [
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "/dev/loop5"
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            ],
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "lv_name": "ceph_lv2",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "lv_size": "21470642176",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "name": "ceph_lv2",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "tags": {
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.cluster_name": "ceph",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.crush_device_class": "",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.encrypted": "0",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.osd_id": "2",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.type": "block",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:                "ceph.vdo": "0"
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            },
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "type": "block",
Oct  1 12:15:38 np0005464891 cool_gould[93189]:            "vg_name": "ceph_vg2"
Oct  1 12:15:38 np0005464891 cool_gould[93189]:        }
Oct  1 12:15:38 np0005464891 cool_gould[93189]:    ]
Oct  1 12:15:39 np0005464891 cool_gould[93189]: }
Oct  1 12:15:39 np0005464891 systemd[1]: libpod-e4b74b0f24b6010c895367bbd1156d5ccd40931a721196eefd5e46ef254c7959.scope: Deactivated successfully.
Oct  1 12:15:39 np0005464891 podman[93173]: 2025-10-01 16:15:39.026062357 +0000 UTC m=+0.948481917 container died e4b74b0f24b6010c895367bbd1156d5ccd40931a721196eefd5e46ef254c7959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_gould, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 12:15:39 np0005464891 systemd[1]: var-lib-containers-storage-overlay-b454ce7216f66297de3ba2abdd09674e3975ceb018d42a897d29cacdefb734ed-merged.mount: Deactivated successfully.
Oct  1 12:15:39 np0005464891 podman[93173]: 2025-10-01 16:15:39.091212642 +0000 UTC m=+1.013632192 container remove e4b74b0f24b6010c895367bbd1156d5ccd40931a721196eefd5e46ef254c7959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_gould, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Oct  1 12:15:39 np0005464891 systemd[1]: libpod-conmon-e4b74b0f24b6010c895367bbd1156d5ccd40931a721196eefd5e46ef254c7959.scope: Deactivated successfully.
Oct  1 12:15:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 879 MiB used, 59 GiB / 60 GiB avail
Oct  1 12:15:39 np0005464891 podman[93350]: 2025-10-01 16:15:39.990905393 +0000 UTC m=+0.066293663 container create 239751ec7065b8448633d1c81282915f370cb3e23d739c9684f3d20f1e21cfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_galois, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:15:40 np0005464891 systemd[1]: Started libpod-conmon-239751ec7065b8448633d1c81282915f370cb3e23d739c9684f3d20f1e21cfb8.scope.
Oct  1 12:15:40 np0005464891 podman[93350]: 2025-10-01 16:15:39.96544134 +0000 UTC m=+0.040829670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:40 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:40 np0005464891 podman[93350]: 2025-10-01 16:15:40.087992709 +0000 UTC m=+0.163380969 container init 239751ec7065b8448633d1c81282915f370cb3e23d739c9684f3d20f1e21cfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_galois, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:15:40 np0005464891 podman[93350]: 2025-10-01 16:15:40.097784869 +0000 UTC m=+0.173173099 container start 239751ec7065b8448633d1c81282915f370cb3e23d739c9684f3d20f1e21cfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_galois, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:15:40 np0005464891 podman[93350]: 2025-10-01 16:15:40.101282555 +0000 UTC m=+0.176670855 container attach 239751ec7065b8448633d1c81282915f370cb3e23d739c9684f3d20f1e21cfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 12:15:40 np0005464891 laughing_galois[93367]: 167 167
Oct  1 12:15:40 np0005464891 systemd[1]: libpod-239751ec7065b8448633d1c81282915f370cb3e23d739c9684f3d20f1e21cfb8.scope: Deactivated successfully.
Oct  1 12:15:40 np0005464891 podman[93350]: 2025-10-01 16:15:40.105238151 +0000 UTC m=+0.180626411 container died 239751ec7065b8448633d1c81282915f370cb3e23d739c9684f3d20f1e21cfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_galois, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:15:40 np0005464891 systemd[1]: var-lib-containers-storage-overlay-bc98791573666c6e5a6ead17a9ea0440d7144d6e41918c45d4679a496fecc05b-merged.mount: Deactivated successfully.
Oct  1 12:15:40 np0005464891 podman[93350]: 2025-10-01 16:15:40.154660131 +0000 UTC m=+0.230048371 container remove 239751ec7065b8448633d1c81282915f370cb3e23d739c9684f3d20f1e21cfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 12:15:40 np0005464891 systemd[1]: libpod-conmon-239751ec7065b8448633d1c81282915f370cb3e23d739c9684f3d20f1e21cfb8.scope: Deactivated successfully.
Oct  1 12:15:40 np0005464891 podman[93390]: 2025-10-01 16:15:40.394925872 +0000 UTC m=+0.064267854 container create 412b089ff71bcd257c0beb1c551bba905499edfbed646d3e0845f5815af51e6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 12:15:40 np0005464891 systemd[1]: Started libpod-conmon-412b089ff71bcd257c0beb1c551bba905499edfbed646d3e0845f5815af51e6a.scope.
Oct  1 12:15:40 np0005464891 podman[93390]: 2025-10-01 16:15:40.369341435 +0000 UTC m=+0.038683467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:40 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:40 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e90e0e080f07e0d9af577852a24bead440d8e4b199060bf4003348bcd0c97f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:40 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e90e0e080f07e0d9af577852a24bead440d8e4b199060bf4003348bcd0c97f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:40 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e90e0e080f07e0d9af577852a24bead440d8e4b199060bf4003348bcd0c97f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:40 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e90e0e080f07e0d9af577852a24bead440d8e4b199060bf4003348bcd0c97f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:40 np0005464891 podman[93390]: 2025-10-01 16:15:40.495874602 +0000 UTC m=+0.165216664 container init 412b089ff71bcd257c0beb1c551bba905499edfbed646d3e0845f5815af51e6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ramanujan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 12:15:40 np0005464891 podman[93390]: 2025-10-01 16:15:40.511641769 +0000 UTC m=+0.180983761 container start 412b089ff71bcd257c0beb1c551bba905499edfbed646d3e0845f5815af51e6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 12:15:40 np0005464891 podman[93390]: 2025-10-01 16:15:40.515667498 +0000 UTC m=+0.185009490 container attach 412b089ff71bcd257c0beb1c551bba905499edfbed646d3e0845f5815af51e6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ramanujan, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:15:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:15:41 np0005464891 sweet_ramanujan[93407]: {
Oct  1 12:15:41 np0005464891 sweet_ramanujan[93407]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:15:41 np0005464891 sweet_ramanujan[93407]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:15:41 np0005464891 sweet_ramanujan[93407]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:15:41 np0005464891 sweet_ramanujan[93407]:        "osd_id": 2,
Oct  1 12:15:41 np0005464891 sweet_ramanujan[93407]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:15:41 np0005464891 sweet_ramanujan[93407]:        "type": "bluestore"
Oct  1 12:15:41 np0005464891 sweet_ramanujan[93407]:    },
Oct  1 12:15:41 np0005464891 sweet_ramanujan[93407]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:15:41 np0005464891 sweet_ramanujan[93407]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:15:41 np0005464891 sweet_ramanujan[93407]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:15:41 np0005464891 sweet_ramanujan[93407]:        "osd_id": 0,
Oct  1 12:15:41 np0005464891 sweet_ramanujan[93407]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:15:41 np0005464891 sweet_ramanujan[93407]:        "type": "bluestore"
Oct  1 12:15:41 np0005464891 sweet_ramanujan[93407]:    },
Oct  1 12:15:41 np0005464891 sweet_ramanujan[93407]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:15:41 np0005464891 sweet_ramanujan[93407]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:15:41 np0005464891 sweet_ramanujan[93407]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:15:41 np0005464891 sweet_ramanujan[93407]:        "osd_id": 1,
Oct  1 12:15:41 np0005464891 sweet_ramanujan[93407]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:15:41 np0005464891 sweet_ramanujan[93407]:        "type": "bluestore"
Oct  1 12:15:41 np0005464891 sweet_ramanujan[93407]:    }
Oct  1 12:15:41 np0005464891 sweet_ramanujan[93407]: }
Oct  1 12:15:41 np0005464891 systemd[1]: libpod-412b089ff71bcd257c0beb1c551bba905499edfbed646d3e0845f5815af51e6a.scope: Deactivated successfully.
Oct  1 12:15:41 np0005464891 systemd[1]: libpod-412b089ff71bcd257c0beb1c551bba905499edfbed646d3e0845f5815af51e6a.scope: Consumed 1.097s CPU time.
Oct  1 12:15:41 np0005464891 podman[93390]: 2025-10-01 16:15:41.599088945 +0000 UTC m=+1.268430897 container died 412b089ff71bcd257c0beb1c551bba905499edfbed646d3e0845f5815af51e6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ramanujan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:15:41 np0005464891 systemd[1]: var-lib-containers-storage-overlay-63e90e0e080f07e0d9af577852a24bead440d8e4b199060bf4003348bcd0c97f-merged.mount: Deactivated successfully.
Oct  1 12:15:41 np0005464891 podman[93390]: 2025-10-01 16:15:41.657139696 +0000 UTC m=+1.326481648 container remove 412b089ff71bcd257c0beb1c551bba905499edfbed646d3e0845f5815af51e6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ramanujan, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 12:15:41 np0005464891 systemd[1]: libpod-conmon-412b089ff71bcd257c0beb1c551bba905499edfbed646d3e0845f5815af51e6a.scope: Deactivated successfully.
Oct  1 12:15:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:15:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:15:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Oct  1 12:15:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Oct  1 12:15:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Oct  1 12:15:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Oct  1 12:15:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:41 np0005464891 ceph-mgr[74592]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Oct  1 12:15:41 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Oct  1 12:15:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct  1 12:15:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  1 12:15:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct  1 12:15:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct  1 12:15:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:15:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:15:41 np0005464891 ceph-mgr[74592]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct  1 12:15:41 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct  1 12:15:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 879 MiB used, 59 GiB / 60 GiB avail
Oct  1 12:15:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:15:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:15:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:15:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:15:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:15:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:15:42 np0005464891 podman[93618]: 2025-10-01 16:15:42.621763967 +0000 UTC m=+0.064937071 container create 66ed05dd4d7017d57d2c49869952c544fbc3c6112eaef0993187d9fd78e4780f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 12:15:42 np0005464891 systemd[1]: Started libpod-conmon-66ed05dd4d7017d57d2c49869952c544fbc3c6112eaef0993187d9fd78e4780f.scope.
Oct  1 12:15:42 np0005464891 podman[93618]: 2025-10-01 16:15:42.595667208 +0000 UTC m=+0.038840352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:42 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:42 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:42 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:42 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:42 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:42 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:42 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:42 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  1 12:15:42 np0005464891 podman[93618]: 2025-10-01 16:15:42.71952908 +0000 UTC m=+0.162702224 container init 66ed05dd4d7017d57d2c49869952c544fbc3c6112eaef0993187d9fd78e4780f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  1 12:15:42 np0005464891 podman[93618]: 2025-10-01 16:15:42.7285266 +0000 UTC m=+0.171699694 container start 66ed05dd4d7017d57d2c49869952c544fbc3c6112eaef0993187d9fd78e4780f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_merkle, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 12:15:42 np0005464891 podman[93618]: 2025-10-01 16:15:42.732632651 +0000 UTC m=+0.175805805 container attach 66ed05dd4d7017d57d2c49869952c544fbc3c6112eaef0993187d9fd78e4780f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_merkle, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 12:15:42 np0005464891 dreamy_merkle[93634]: 167 167
Oct  1 12:15:42 np0005464891 systemd[1]: libpod-66ed05dd4d7017d57d2c49869952c544fbc3c6112eaef0993187d9fd78e4780f.scope: Deactivated successfully.
Oct  1 12:15:42 np0005464891 conmon[93634]: conmon 66ed05dd4d7017d57d2c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-66ed05dd4d7017d57d2c49869952c544fbc3c6112eaef0993187d9fd78e4780f.scope/container/memory.events
Oct  1 12:15:42 np0005464891 podman[93618]: 2025-10-01 16:15:42.738829072 +0000 UTC m=+0.182002166 container died 66ed05dd4d7017d57d2c49869952c544fbc3c6112eaef0993187d9fd78e4780f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_merkle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 12:15:42 np0005464891 systemd[1]: var-lib-containers-storage-overlay-917a7b9063ec9fb5dd2837dd39b8285d62fe66fb52710b580fddcb4dd72e8f47-merged.mount: Deactivated successfully.
Oct  1 12:15:42 np0005464891 podman[93618]: 2025-10-01 16:15:42.786274433 +0000 UTC m=+0.229447507 container remove 66ed05dd4d7017d57d2c49869952c544fbc3c6112eaef0993187d9fd78e4780f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  1 12:15:42 np0005464891 systemd[1]: libpod-conmon-66ed05dd4d7017d57d2c49869952c544fbc3c6112eaef0993187d9fd78e4780f.scope: Deactivated successfully.
Oct  1 12:15:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:15:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:15:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:42 np0005464891 ceph-mgr[74592]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.ieawdb (unknown last config time)...
Oct  1 12:15:42 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.ieawdb (unknown last config time)...
Oct  1 12:15:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.ieawdb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct  1 12:15:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ieawdb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  1 12:15:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  1 12:15:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  1 12:15:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:15:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:15:42 np0005464891 ceph-mgr[74592]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.ieawdb on compute-0
Oct  1 12:15:42 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.ieawdb on compute-0
Oct  1 12:15:43 np0005464891 podman[93767]: 2025-10-01 16:15:43.520843593 +0000 UTC m=+0.066447437 container create b83c48844b4b1c971f0a0a59a82b32d385daccd3d45188cc2aeb255ddd5c0b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct  1 12:15:43 np0005464891 systemd[1]: Started libpod-conmon-b83c48844b4b1c971f0a0a59a82b32d385daccd3d45188cc2aeb255ddd5c0b34.scope.
Oct  1 12:15:43 np0005464891 podman[93767]: 2025-10-01 16:15:43.493969086 +0000 UTC m=+0.039572990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:43 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:43 np0005464891 podman[93767]: 2025-10-01 16:15:43.606616713 +0000 UTC m=+0.152220567 container init b83c48844b4b1c971f0a0a59a82b32d385daccd3d45188cc2aeb255ddd5c0b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 12:15:43 np0005464891 podman[93767]: 2025-10-01 16:15:43.617190642 +0000 UTC m=+0.162794486 container start b83c48844b4b1c971f0a0a59a82b32d385daccd3d45188cc2aeb255ddd5c0b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:15:43 np0005464891 podman[93767]: 2025-10-01 16:15:43.621334833 +0000 UTC m=+0.166938737 container attach b83c48844b4b1c971f0a0a59a82b32d385daccd3d45188cc2aeb255ddd5c0b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 12:15:43 np0005464891 cranky_lovelace[93783]: 167 167
Oct  1 12:15:43 np0005464891 systemd[1]: libpod-b83c48844b4b1c971f0a0a59a82b32d385daccd3d45188cc2aeb255ddd5c0b34.scope: Deactivated successfully.
Oct  1 12:15:43 np0005464891 podman[93767]: 2025-10-01 16:15:43.624196433 +0000 UTC m=+0.169800287 container died b83c48844b4b1c971f0a0a59a82b32d385daccd3d45188cc2aeb255ddd5c0b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 12:15:43 np0005464891 systemd[1]: var-lib-containers-storage-overlay-254d1675bf2aad658568b725fa3cbc22ddbdfcb82b34dcc3a694ed71218a772a-merged.mount: Deactivated successfully.
Oct  1 12:15:43 np0005464891 podman[93767]: 2025-10-01 16:15:43.67188776 +0000 UTC m=+0.217491614 container remove b83c48844b4b1c971f0a0a59a82b32d385daccd3d45188cc2aeb255ddd5c0b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:15:43 np0005464891 systemd[1]: libpod-conmon-b83c48844b4b1c971f0a0a59a82b32d385daccd3d45188cc2aeb255ddd5c0b34.scope: Deactivated successfully.
Oct  1 12:15:43 np0005464891 ceph-mon[74303]: Reconfiguring mon.compute-0 (unknown last config time)...
Oct  1 12:15:43 np0005464891 ceph-mon[74303]: Reconfiguring daemon mon.compute-0 on compute-0
Oct  1 12:15:43 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:43 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:43 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ieawdb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  1 12:15:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:15:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:15:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 879 MiB used, 59 GiB / 60 GiB avail
Oct  1 12:15:44 np0005464891 ceph-mon[74303]: Reconfiguring mgr.compute-0.ieawdb (unknown last config time)...
Oct  1 12:15:44 np0005464891 ceph-mon[74303]: Reconfiguring daemon mgr.compute-0.ieawdb on compute-0
Oct  1 12:15:44 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:44 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:44 np0005464891 podman[93974]: 2025-10-01 16:15:44.777995234 +0000 UTC m=+0.080754538 container exec 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:15:44 np0005464891 podman[93974]: 2025-10-01 16:15:44.893251614 +0000 UTC m=+0.196010918 container exec_died 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:15:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:15:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:15:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:15:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:15:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:15:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:15:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:15:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:45 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 37e3950a-9362-4b68-a043-d1c079d5447a does not exist
Oct  1 12:15:45 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 811d3d38-3f9b-41c1-8acb-f472bea29f30 does not exist
Oct  1 12:15:45 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev da8811ec-5a38-4e85-b158-151b4ec7c8ee does not exist
Oct  1 12:15:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:15:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:15:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:15:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:15:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:15:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:15:45 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:45 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:45 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:15:45 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:45 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:15:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 879 MiB used, 59 GiB / 60 GiB avail
Oct  1 12:15:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:15:46 np0005464891 podman[94237]: 2025-10-01 16:15:46.312279878 +0000 UTC m=+0.045546917 container create 707eda3926c0011312050c32ef0994b53ea843a60005591446b7dfa2a699ac19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 12:15:46 np0005464891 systemd[1]: Started libpod-conmon-707eda3926c0011312050c32ef0994b53ea843a60005591446b7dfa2a699ac19.scope.
Oct  1 12:15:46 np0005464891 podman[94237]: 2025-10-01 16:15:46.288653509 +0000 UTC m=+0.021920648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:46 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:46 np0005464891 podman[94237]: 2025-10-01 16:15:46.406702678 +0000 UTC m=+0.139969807 container init 707eda3926c0011312050c32ef0994b53ea843a60005591446b7dfa2a699ac19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kalam, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:15:46 np0005464891 podman[94237]: 2025-10-01 16:15:46.412779128 +0000 UTC m=+0.146046197 container start 707eda3926c0011312050c32ef0994b53ea843a60005591446b7dfa2a699ac19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kalam, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:15:46 np0005464891 podman[94237]: 2025-10-01 16:15:46.416259062 +0000 UTC m=+0.149526141 container attach 707eda3926c0011312050c32ef0994b53ea843a60005591446b7dfa2a699ac19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kalam, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  1 12:15:46 np0005464891 jolly_kalam[94273]: 167 167
Oct  1 12:15:46 np0005464891 systemd[1]: libpod-707eda3926c0011312050c32ef0994b53ea843a60005591446b7dfa2a699ac19.scope: Deactivated successfully.
Oct  1 12:15:46 np0005464891 podman[94237]: 2025-10-01 16:15:46.419908531 +0000 UTC m=+0.153175601 container died 707eda3926c0011312050c32ef0994b53ea843a60005591446b7dfa2a699ac19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kalam, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  1 12:15:46 np0005464891 systemd[1]: var-lib-containers-storage-overlay-0938cf807bfb1cfc807db6fa9c9375fb05f4bf11f66f1c85cf8fbee39c8ab710-merged.mount: Deactivated successfully.
Oct  1 12:15:46 np0005464891 podman[94237]: 2025-10-01 16:15:46.471240168 +0000 UTC m=+0.204507237 container remove 707eda3926c0011312050c32ef0994b53ea843a60005591446b7dfa2a699ac19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kalam, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:15:46 np0005464891 systemd[1]: libpod-conmon-707eda3926c0011312050c32ef0994b53ea843a60005591446b7dfa2a699ac19.scope: Deactivated successfully.
Oct  1 12:15:46 np0005464891 python3[94281]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:15:46 np0005464891 podman[94301]: 2025-10-01 16:15:46.595134431 +0000 UTC m=+0.048831437 container create 4a0e33d9b83e0ddf52b711606d5f86adbaade69b4e44ba28da8e69d03a8340aa (image=quay.io/ceph/ceph:v18, name=modest_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 12:15:46 np0005464891 systemd[1]: Started libpod-conmon-4a0e33d9b83e0ddf52b711606d5f86adbaade69b4e44ba28da8e69d03a8340aa.scope.
Oct  1 12:15:46 np0005464891 podman[94320]: 2025-10-01 16:15:46.657797844 +0000 UTC m=+0.064573301 container create 81dbec42e78a455ff93fb31c5ddb9df92f68f50fdbcc80fba128f64b6f9ba6a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chatterjee, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:15:46 np0005464891 podman[94301]: 2025-10-01 16:15:46.571126803 +0000 UTC m=+0.024823919 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:15:46 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89e859d71670d8a47ca79114a3be9daa8e57cd06498e2cc899438e02c41f135d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89e859d71670d8a47ca79114a3be9daa8e57cd06498e2cc899438e02c41f135d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89e859d71670d8a47ca79114a3be9daa8e57cd06498e2cc899438e02c41f135d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:46 np0005464891 systemd[1]: Started libpod-conmon-81dbec42e78a455ff93fb31c5ddb9df92f68f50fdbcc80fba128f64b6f9ba6a7.scope.
Oct  1 12:15:46 np0005464891 podman[94301]: 2025-10-01 16:15:46.71562446 +0000 UTC m=+0.169321486 container init 4a0e33d9b83e0ddf52b711606d5f86adbaade69b4e44ba28da8e69d03a8340aa (image=quay.io/ceph/ceph:v18, name=modest_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 12:15:46 np0005464891 podman[94320]: 2025-10-01 16:15:46.632021704 +0000 UTC m=+0.038797221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:46 np0005464891 podman[94301]: 2025-10-01 16:15:46.726752582 +0000 UTC m=+0.180449618 container start 4a0e33d9b83e0ddf52b711606d5f86adbaade69b4e44ba28da8e69d03a8340aa (image=quay.io/ceph/ceph:v18, name=modest_sanderson, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:15:46 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f8685328914360017f19f4e57391dfb0b5c412c7da1b8d6c83868b970f38452/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f8685328914360017f19f4e57391dfb0b5c412c7da1b8d6c83868b970f38452/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f8685328914360017f19f4e57391dfb0b5c412c7da1b8d6c83868b970f38452/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f8685328914360017f19f4e57391dfb0b5c412c7da1b8d6c83868b970f38452/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f8685328914360017f19f4e57391dfb0b5c412c7da1b8d6c83868b970f38452/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:46 np0005464891 podman[94301]: 2025-10-01 16:15:46.731039957 +0000 UTC m=+0.184736993 container attach 4a0e33d9b83e0ddf52b711606d5f86adbaade69b4e44ba28da8e69d03a8340aa (image=quay.io/ceph/ceph:v18, name=modest_sanderson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:15:46 np0005464891 podman[94320]: 2025-10-01 16:15:46.751731434 +0000 UTC m=+0.158506901 container init 81dbec42e78a455ff93fb31c5ddb9df92f68f50fdbcc80fba128f64b6f9ba6a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chatterjee, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  1 12:15:46 np0005464891 podman[94320]: 2025-10-01 16:15:46.764514876 +0000 UTC m=+0.171290333 container start 81dbec42e78a455ff93fb31c5ddb9df92f68f50fdbcc80fba128f64b6f9ba6a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chatterjee, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 12:15:46 np0005464891 podman[94320]: 2025-10-01 16:15:46.767594732 +0000 UTC m=+0.174370179 container attach 81dbec42e78a455ff93fb31c5ddb9df92f68f50fdbcc80fba128f64b6f9ba6a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chatterjee, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  1 12:15:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct  1 12:15:47 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2334673688' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  1 12:15:47 np0005464891 modest_sanderson[94336]: 
Oct  1 12:15:47 np0005464891 modest_sanderson[94336]: {"fsid":"6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":141,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":16,"num_osds":3,"num_up_osds":3,"osd_up_since":1759335331,"num_in_osds":3,"osd_in_since":1759335303,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":922177536,"bytes_avail":63489748992,"bytes_total":64411926528},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-01T16:15:13.931131+0000","services":{}},"progress_events":{}}
Oct  1 12:15:47 np0005464891 systemd[1]: libpod-4a0e33d9b83e0ddf52b711606d5f86adbaade69b4e44ba28da8e69d03a8340aa.scope: Deactivated successfully.
Oct  1 12:15:47 np0005464891 podman[94301]: 2025-10-01 16:15:47.33256104 +0000 UTC m=+0.786258076 container died 4a0e33d9b83e0ddf52b711606d5f86adbaade69b4e44ba28da8e69d03a8340aa (image=quay.io/ceph/ceph:v18, name=modest_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 12:15:47 np0005464891 systemd[1]: var-lib-containers-storage-overlay-89e859d71670d8a47ca79114a3be9daa8e57cd06498e2cc899438e02c41f135d-merged.mount: Deactivated successfully.
Oct  1 12:15:47 np0005464891 podman[94301]: 2025-10-01 16:15:47.394768553 +0000 UTC m=+0.848465579 container remove 4a0e33d9b83e0ddf52b711606d5f86adbaade69b4e44ba28da8e69d03a8340aa (image=quay.io/ceph/ceph:v18, name=modest_sanderson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 12:15:47 np0005464891 systemd[1]: libpod-conmon-4a0e33d9b83e0ddf52b711606d5f86adbaade69b4e44ba28da8e69d03a8340aa.scope: Deactivated successfully.
Oct  1 12:15:47 np0005464891 vigorous_chatterjee[94341]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:15:47 np0005464891 vigorous_chatterjee[94341]: --> relative data size: 1.0
Oct  1 12:15:47 np0005464891 vigorous_chatterjee[94341]: --> All data devices are unavailable
Oct  1 12:15:47 np0005464891 systemd[1]: libpod-81dbec42e78a455ff93fb31c5ddb9df92f68f50fdbcc80fba128f64b6f9ba6a7.scope: Deactivated successfully.
Oct  1 12:15:47 np0005464891 python3[94424]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:15:47 np0005464891 systemd[1]: libpod-81dbec42e78a455ff93fb31c5ddb9df92f68f50fdbcc80fba128f64b6f9ba6a7.scope: Consumed 1.117s CPU time.
Oct  1 12:15:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 879 MiB used, 59 GiB / 60 GiB avail
Oct  1 12:15:47 np0005464891 podman[94429]: 2025-10-01 16:15:47.980066129 +0000 UTC m=+0.033701216 container died 81dbec42e78a455ff93fb31c5ddb9df92f68f50fdbcc80fba128f64b6f9ba6a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 12:15:48 np0005464891 podman[94430]: 2025-10-01 16:15:48.00094584 +0000 UTC m=+0.050792165 container create ddb9edc5c3a7a01a3c720f26f2d8df3691d6080be1d3405a8469b0bdeb243370 (image=quay.io/ceph/ceph:v18, name=vigorous_mclean, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:15:48 np0005464891 systemd[1]: var-lib-containers-storage-overlay-6f8685328914360017f19f4e57391dfb0b5c412c7da1b8d6c83868b970f38452-merged.mount: Deactivated successfully.
Oct  1 12:15:48 np0005464891 podman[94429]: 2025-10-01 16:15:48.054267375 +0000 UTC m=+0.107902482 container remove 81dbec42e78a455ff93fb31c5ddb9df92f68f50fdbcc80fba128f64b6f9ba6a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:15:48 np0005464891 systemd[1]: Started libpod-conmon-ddb9edc5c3a7a01a3c720f26f2d8df3691d6080be1d3405a8469b0bdeb243370.scope.
Oct  1 12:15:48 np0005464891 systemd[1]: libpod-conmon-81dbec42e78a455ff93fb31c5ddb9df92f68f50fdbcc80fba128f64b6f9ba6a7.scope: Deactivated successfully.
Oct  1 12:15:48 np0005464891 podman[94430]: 2025-10-01 16:15:47.981061013 +0000 UTC m=+0.030907318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:15:48 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:48 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0644f46348e50a08a427b1698fd9f871cff0ad239596fa0205159070da7d91bf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:48 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0644f46348e50a08a427b1698fd9f871cff0ad239596fa0205159070da7d91bf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:48 np0005464891 podman[94430]: 2025-10-01 16:15:48.138136278 +0000 UTC m=+0.187982583 container init ddb9edc5c3a7a01a3c720f26f2d8df3691d6080be1d3405a8469b0bdeb243370 (image=quay.io/ceph/ceph:v18, name=vigorous_mclean, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  1 12:15:48 np0005464891 podman[94430]: 2025-10-01 16:15:48.146997245 +0000 UTC m=+0.196843500 container start ddb9edc5c3a7a01a3c720f26f2d8df3691d6080be1d3405a8469b0bdeb243370 (image=quay.io/ceph/ceph:v18, name=vigorous_mclean, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:15:48 np0005464891 podman[94430]: 2025-10-01 16:15:48.150426619 +0000 UTC m=+0.200272964 container attach ddb9edc5c3a7a01a3c720f26f2d8df3691d6080be1d3405a8469b0bdeb243370 (image=quay.io/ceph/ceph:v18, name=vigorous_mclean, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:15:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  1 12:15:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1575384321' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  1 12:15:48 np0005464891 podman[94623]: 2025-10-01 16:15:48.738630846 +0000 UTC m=+0.056635227 container create ab66ddc06c7a487db5407103acdf3c3f784c6307bef1ed5c8fc697c7c8f0456a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 12:15:48 np0005464891 systemd[1]: Started libpod-conmon-ab66ddc06c7a487db5407103acdf3c3f784c6307bef1ed5c8fc697c7c8f0456a.scope.
Oct  1 12:15:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Oct  1 12:15:48 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/1575384321' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  1 12:15:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1575384321' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  1 12:15:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Oct  1 12:15:48 np0005464891 vigorous_mclean[94459]: pool 'vms' created
Oct  1 12:15:48 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Oct  1 12:15:48 np0005464891 podman[94623]: 2025-10-01 16:15:48.708615361 +0000 UTC m=+0.026619782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:48 np0005464891 podman[94430]: 2025-10-01 16:15:48.811430388 +0000 UTC m=+0.861276643 container died ddb9edc5c3a7a01a3c720f26f2d8df3691d6080be1d3405a8469b0bdeb243370 (image=quay.io/ceph/ceph:v18, name=vigorous_mclean, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 12:15:48 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:48 np0005464891 systemd[1]: libpod-ddb9edc5c3a7a01a3c720f26f2d8df3691d6080be1d3405a8469b0bdeb243370.scope: Deactivated successfully.
Oct  1 12:15:48 np0005464891 podman[94623]: 2025-10-01 16:15:48.827950072 +0000 UTC m=+0.145954513 container init ab66ddc06c7a487db5407103acdf3c3f784c6307bef1ed5c8fc697c7c8f0456a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:15:48 np0005464891 systemd[1]: var-lib-containers-storage-overlay-0644f46348e50a08a427b1698fd9f871cff0ad239596fa0205159070da7d91bf-merged.mount: Deactivated successfully.
Oct  1 12:15:48 np0005464891 podman[94623]: 2025-10-01 16:15:48.838418328 +0000 UTC m=+0.156422689 container start ab66ddc06c7a487db5407103acdf3c3f784c6307bef1ed5c8fc697c7c8f0456a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kapitsa, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:15:48 np0005464891 friendly_kapitsa[94642]: 167 167
Oct  1 12:15:48 np0005464891 podman[94623]: 2025-10-01 16:15:48.841819332 +0000 UTC m=+0.159823773 container attach ab66ddc06c7a487db5407103acdf3c3f784c6307bef1ed5c8fc697c7c8f0456a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:15:48 np0005464891 systemd[1]: libpod-ab66ddc06c7a487db5407103acdf3c3f784c6307bef1ed5c8fc697c7c8f0456a.scope: Deactivated successfully.
Oct  1 12:15:48 np0005464891 podman[94430]: 2025-10-01 16:15:48.862355014 +0000 UTC m=+0.912201279 container remove ddb9edc5c3a7a01a3c720f26f2d8df3691d6080be1d3405a8469b0bdeb243370 (image=quay.io/ceph/ceph:v18, name=vigorous_mclean, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:15:48 np0005464891 podman[94623]: 2025-10-01 16:15:48.862946528 +0000 UTC m=+0.180950919 container died ab66ddc06c7a487db5407103acdf3c3f784c6307bef1ed5c8fc697c7c8f0456a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:15:48 np0005464891 systemd[1]: libpod-conmon-ddb9edc5c3a7a01a3c720f26f2d8df3691d6080be1d3405a8469b0bdeb243370.scope: Deactivated successfully.
Oct  1 12:15:48 np0005464891 systemd[1]: var-lib-containers-storage-overlay-edf0fd11c09446cf82fcf728820abcfe6ebd6392338cb0ff9a44275edd3d23af-merged.mount: Deactivated successfully.
Oct  1 12:15:48 np0005464891 podman[94623]: 2025-10-01 16:15:48.910303927 +0000 UTC m=+0.228308318 container remove ab66ddc06c7a487db5407103acdf3c3f784c6307bef1ed5c8fc697c7c8f0456a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 12:15:48 np0005464891 systemd[1]: libpod-conmon-ab66ddc06c7a487db5407103acdf3c3f784c6307bef1ed5c8fc697c7c8f0456a.scope: Deactivated successfully.
Oct  1 12:15:49 np0005464891 podman[94704]: 2025-10-01 16:15:49.140131983 +0000 UTC m=+0.070862945 container create 794875240e30efcafde155637740913b5ea6233cb5bf01af8d24dde9168302c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_jones, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 12:15:49 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:15:49 np0005464891 systemd[1]: Started libpod-conmon-794875240e30efcafde155637740913b5ea6233cb5bf01af8d24dde9168302c4.scope.
Oct  1 12:15:49 np0005464891 podman[94704]: 2025-10-01 16:15:49.101362904 +0000 UTC m=+0.032093966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:49 np0005464891 python3[94701]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:15:49 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:49 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d974a3900acc185cf5c1febb5f39829de9bab0662dbadaeedfa2c09f195fdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:49 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d974a3900acc185cf5c1febb5f39829de9bab0662dbadaeedfa2c09f195fdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:49 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d974a3900acc185cf5c1febb5f39829de9bab0662dbadaeedfa2c09f195fdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:49 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d974a3900acc185cf5c1febb5f39829de9bab0662dbadaeedfa2c09f195fdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:49 np0005464891 podman[94704]: 2025-10-01 16:15:49.242345475 +0000 UTC m=+0.173076457 container init 794875240e30efcafde155637740913b5ea6233cb5bf01af8d24dde9168302c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_jones, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:15:49 np0005464891 podman[94704]: 2025-10-01 16:15:49.258073419 +0000 UTC m=+0.188804381 container start 794875240e30efcafde155637740913b5ea6233cb5bf01af8d24dde9168302c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_jones, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 12:15:49 np0005464891 podman[94704]: 2025-10-01 16:15:49.26214766 +0000 UTC m=+0.192878712 container attach 794875240e30efcafde155637740913b5ea6233cb5bf01af8d24dde9168302c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_jones, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  1 12:15:49 np0005464891 podman[94724]: 2025-10-01 16:15:49.291233841 +0000 UTC m=+0.051401949 container create 6ea61b646dee1926365543f70d02675c48e58538ad08fc98a1bafb2ae05e2339 (image=quay.io/ceph/ceph:v18, name=intelligent_chatelet, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  1 12:15:49 np0005464891 systemd[1]: Started libpod-conmon-6ea61b646dee1926365543f70d02675c48e58538ad08fc98a1bafb2ae05e2339.scope.
Oct  1 12:15:49 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:49 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82ae520fe643918d4264ea3ebb336c7509b6cef0ae9a7bf9543aad4b1fa3aa23/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:49 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82ae520fe643918d4264ea3ebb336c7509b6cef0ae9a7bf9543aad4b1fa3aa23/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:49 np0005464891 podman[94724]: 2025-10-01 16:15:49.27524616 +0000 UTC m=+0.035414278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:15:49 np0005464891 podman[94724]: 2025-10-01 16:15:49.375653287 +0000 UTC m=+0.135821385 container init 6ea61b646dee1926365543f70d02675c48e58538ad08fc98a1bafb2ae05e2339 (image=quay.io/ceph/ceph:v18, name=intelligent_chatelet, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  1 12:15:49 np0005464891 podman[94724]: 2025-10-01 16:15:49.383729125 +0000 UTC m=+0.143897233 container start 6ea61b646dee1926365543f70d02675c48e58538ad08fc98a1bafb2ae05e2339 (image=quay.io/ceph/ceph:v18, name=intelligent_chatelet, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 12:15:49 np0005464891 podman[94724]: 2025-10-01 16:15:49.387174589 +0000 UTC m=+0.147342707 container attach 6ea61b646dee1926365543f70d02675c48e58538ad08fc98a1bafb2ae05e2339 (image=quay.io/ceph/ceph:v18, name=intelligent_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  1 12:15:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Oct  1 12:15:49 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/1575384321' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  1 12:15:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Oct  1 12:15:49 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Oct  1 12:15:49 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:15:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  1 12:15:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/46546206' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  1 12:15:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v58: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 879 MiB used, 59 GiB / 60 GiB avail
Oct  1 12:15:49 np0005464891 jovial_jones[94721]: {
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:    "0": [
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:        {
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "devices": [
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "/dev/loop3"
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            ],
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "lv_name": "ceph_lv0",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "lv_size": "21470642176",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "name": "ceph_lv0",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "tags": {
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.cluster_name": "ceph",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.crush_device_class": "",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.encrypted": "0",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.osd_id": "0",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.type": "block",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.vdo": "0"
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            },
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "type": "block",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "vg_name": "ceph_vg0"
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:        }
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:    ],
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:    "1": [
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:        {
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "devices": [
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "/dev/loop4"
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            ],
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "lv_name": "ceph_lv1",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "lv_size": "21470642176",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "name": "ceph_lv1",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "tags": {
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.cluster_name": "ceph",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.crush_device_class": "",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.encrypted": "0",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.osd_id": "1",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.type": "block",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.vdo": "0"
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            },
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "type": "block",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "vg_name": "ceph_vg1"
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:        }
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:    ],
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:    "2": [
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:        {
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "devices": [
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "/dev/loop5"
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            ],
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "lv_name": "ceph_lv2",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "lv_size": "21470642176",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "name": "ceph_lv2",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "tags": {
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.cluster_name": "ceph",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.crush_device_class": "",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.encrypted": "0",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.osd_id": "2",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.type": "block",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:                "ceph.vdo": "0"
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            },
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "type": "block",
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:            "vg_name": "ceph_vg2"
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:        }
Oct  1 12:15:50 np0005464891 jovial_jones[94721]:    ]
Oct  1 12:15:50 np0005464891 jovial_jones[94721]: }
Oct  1 12:15:50 np0005464891 systemd[1]: libpod-794875240e30efcafde155637740913b5ea6233cb5bf01af8d24dde9168302c4.scope: Deactivated successfully.
Oct  1 12:15:50 np0005464891 podman[94704]: 2025-10-01 16:15:50.019439615 +0000 UTC m=+0.950170587 container died 794875240e30efcafde155637740913b5ea6233cb5bf01af8d24dde9168302c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_jones, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct  1 12:15:50 np0005464891 systemd[1]: var-lib-containers-storage-overlay-b5d974a3900acc185cf5c1febb5f39829de9bab0662dbadaeedfa2c09f195fdd-merged.mount: Deactivated successfully.
Oct  1 12:15:50 np0005464891 podman[94704]: 2025-10-01 16:15:50.082417467 +0000 UTC m=+1.013148459 container remove 794875240e30efcafde155637740913b5ea6233cb5bf01af8d24dde9168302c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_jones, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 12:15:50 np0005464891 systemd[1]: libpod-conmon-794875240e30efcafde155637740913b5ea6233cb5bf01af8d24dde9168302c4.scope: Deactivated successfully.
Oct  1 12:15:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Oct  1 12:15:50 np0005464891 ceph-mon[74303]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  1 12:15:50 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/46546206' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  1 12:15:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/46546206' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  1 12:15:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Oct  1 12:15:50 np0005464891 intelligent_chatelet[94741]: pool 'volumes' created
Oct  1 12:15:50 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Oct  1 12:15:50 np0005464891 systemd[1]: libpod-6ea61b646dee1926365543f70d02675c48e58538ad08fc98a1bafb2ae05e2339.scope: Deactivated successfully.
Oct  1 12:15:50 np0005464891 podman[94724]: 2025-10-01 16:15:50.859324363 +0000 UTC m=+1.619492491 container died 6ea61b646dee1926365543f70d02675c48e58538ad08fc98a1bafb2ae05e2339 (image=quay.io/ceph/ceph:v18, name=intelligent_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  1 12:15:50 np0005464891 systemd[1]: var-lib-containers-storage-overlay-82ae520fe643918d4264ea3ebb336c7509b6cef0ae9a7bf9543aad4b1fa3aa23-merged.mount: Deactivated successfully.
Oct  1 12:15:50 np0005464891 podman[94724]: 2025-10-01 16:15:50.909794908 +0000 UTC m=+1.669963006 container remove 6ea61b646dee1926365543f70d02675c48e58538ad08fc98a1bafb2ae05e2339 (image=quay.io/ceph/ceph:v18, name=intelligent_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:15:50 np0005464891 systemd[1]: libpod-conmon-6ea61b646dee1926365543f70d02675c48e58538ad08fc98a1bafb2ae05e2339.scope: Deactivated successfully.
Oct  1 12:15:50 np0005464891 podman[94931]: 2025-10-01 16:15:50.966776552 +0000 UTC m=+0.057761244 container create 4edb2a6effb5220e84cd9f90f946b55359b19af5d50d9289b6fa0d71cd0a02b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  1 12:15:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:15:51 np0005464891 systemd[1]: Started libpod-conmon-4edb2a6effb5220e84cd9f90f946b55359b19af5d50d9289b6fa0d71cd0a02b6.scope.
Oct  1 12:15:51 np0005464891 podman[94931]: 2025-10-01 16:15:50.947047719 +0000 UTC m=+0.038032411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:51 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:51 np0005464891 podman[94931]: 2025-10-01 16:15:51.054827168 +0000 UTC m=+0.145811870 container init 4edb2a6effb5220e84cd9f90f946b55359b19af5d50d9289b6fa0d71cd0a02b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:15:51 np0005464891 podman[94931]: 2025-10-01 16:15:51.063240774 +0000 UTC m=+0.154225456 container start 4edb2a6effb5220e84cd9f90f946b55359b19af5d50d9289b6fa0d71cd0a02b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meninsky, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:15:51 np0005464891 podman[94931]: 2025-10-01 16:15:51.068265266 +0000 UTC m=+0.159249988 container attach 4edb2a6effb5220e84cd9f90f946b55359b19af5d50d9289b6fa0d71cd0a02b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct  1 12:15:51 np0005464891 nice_meninsky[94954]: 167 167
Oct  1 12:15:51 np0005464891 systemd[1]: libpod-4edb2a6effb5220e84cd9f90f946b55359b19af5d50d9289b6fa0d71cd0a02b6.scope: Deactivated successfully.
Oct  1 12:15:51 np0005464891 podman[94931]: 2025-10-01 16:15:51.070363868 +0000 UTC m=+0.161348550 container died 4edb2a6effb5220e84cd9f90f946b55359b19af5d50d9289b6fa0d71cd0a02b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meninsky, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 12:15:51 np0005464891 systemd[1]: var-lib-containers-storage-overlay-afddb5dae5d82f138c776e7053d171841bc7aa8618532b07b6ac9582eb28ef87-merged.mount: Deactivated successfully.
Oct  1 12:15:51 np0005464891 podman[94931]: 2025-10-01 16:15:51.102780432 +0000 UTC m=+0.193765114 container remove 4edb2a6effb5220e84cd9f90f946b55359b19af5d50d9289b6fa0d71cd0a02b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:15:51 np0005464891 systemd[1]: libpod-conmon-4edb2a6effb5220e84cd9f90f946b55359b19af5d50d9289b6fa0d71cd0a02b6.scope: Deactivated successfully.
Oct  1 12:15:51 np0005464891 python3[94981]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:15:51 np0005464891 podman[94998]: 2025-10-01 16:15:51.308619089 +0000 UTC m=+0.062720235 container create 7fdfe1f2d4393853f25be44908a042b37213f700cfcb056608495c16a40a1c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:15:51 np0005464891 podman[94997]: 2025-10-01 16:15:51.312422473 +0000 UTC m=+0.067014141 container create dbfb8950a325ec77b3f3b1299fb4b7cfdc38aa07a49ec03bc644c5343d839d40 (image=quay.io/ceph/ceph:v18, name=mystifying_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 12:15:51 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 19 pg[3.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:15:51 np0005464891 systemd[1]: Started libpod-conmon-7fdfe1f2d4393853f25be44908a042b37213f700cfcb056608495c16a40a1c3c.scope.
Oct  1 12:15:51 np0005464891 systemd[1]: Started libpod-conmon-dbfb8950a325ec77b3f3b1299fb4b7cfdc38aa07a49ec03bc644c5343d839d40.scope.
Oct  1 12:15:51 np0005464891 podman[94998]: 2025-10-01 16:15:51.281911346 +0000 UTC m=+0.036012552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:15:51 np0005464891 podman[94997]: 2025-10-01 16:15:51.288030506 +0000 UTC m=+0.042622224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:15:51 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:51 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:51 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e13bbfd7d902789530450f891cd5a1a979259e86b0918f9f72aa2276c9d844/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:51 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fbd5fd612dabac55f5ae96409bcd930c04349d9b30a55737be027ed31c75693/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:51 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fbd5fd612dabac55f5ae96409bcd930c04349d9b30a55737be027ed31c75693/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:51 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e13bbfd7d902789530450f891cd5a1a979259e86b0918f9f72aa2276c9d844/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:51 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e13bbfd7d902789530450f891cd5a1a979259e86b0918f9f72aa2276c9d844/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:51 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e13bbfd7d902789530450f891cd5a1a979259e86b0918f9f72aa2276c9d844/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:51 np0005464891 podman[94997]: 2025-10-01 16:15:51.41977869 +0000 UTC m=+0.174370418 container init dbfb8950a325ec77b3f3b1299fb4b7cfdc38aa07a49ec03bc644c5343d839d40 (image=quay.io/ceph/ceph:v18, name=mystifying_grothendieck, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 12:15:51 np0005464891 podman[94998]: 2025-10-01 16:15:51.424831174 +0000 UTC m=+0.178932340 container init 7fdfe1f2d4393853f25be44908a042b37213f700cfcb056608495c16a40a1c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:15:51 np0005464891 podman[94997]: 2025-10-01 16:15:51.43813113 +0000 UTC m=+0.192722808 container start dbfb8950a325ec77b3f3b1299fb4b7cfdc38aa07a49ec03bc644c5343d839d40 (image=quay.io/ceph/ceph:v18, name=mystifying_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  1 12:15:51 np0005464891 podman[94997]: 2025-10-01 16:15:51.442910136 +0000 UTC m=+0.197501814 container attach dbfb8950a325ec77b3f3b1299fb4b7cfdc38aa07a49ec03bc644c5343d839d40 (image=quay.io/ceph/ceph:v18, name=mystifying_grothendieck, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:15:51 np0005464891 podman[94998]: 2025-10-01 16:15:51.446311259 +0000 UTC m=+0.200412395 container start 7fdfe1f2d4393853f25be44908a042b37213f700cfcb056608495c16a40a1c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:15:51 np0005464891 podman[94998]: 2025-10-01 16:15:51.450586835 +0000 UTC m=+0.204687991 container attach 7fdfe1f2d4393853f25be44908a042b37213f700cfcb056608495c16a40a1c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 12:15:51 np0005464891 ceph-mon[74303]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  1 12:15:51 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/46546206' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  1 12:15:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Oct  1 12:15:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Oct  1 12:15:51 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Oct  1 12:15:51 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:15:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v61: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 879 MiB used, 59 GiB / 60 GiB avail
Oct  1 12:15:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  1 12:15:51 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2535602754' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  1 12:15:52 np0005464891 charming_lamport[95030]: {
Oct  1 12:15:52 np0005464891 charming_lamport[95030]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:15:52 np0005464891 charming_lamport[95030]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:15:52 np0005464891 charming_lamport[95030]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:15:52 np0005464891 charming_lamport[95030]:        "osd_id": 2,
Oct  1 12:15:52 np0005464891 charming_lamport[95030]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:15:52 np0005464891 charming_lamport[95030]:        "type": "bluestore"
Oct  1 12:15:52 np0005464891 charming_lamport[95030]:    },
Oct  1 12:15:52 np0005464891 charming_lamport[95030]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:15:52 np0005464891 charming_lamport[95030]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:15:52 np0005464891 charming_lamport[95030]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:15:52 np0005464891 charming_lamport[95030]:        "osd_id": 0,
Oct  1 12:15:52 np0005464891 charming_lamport[95030]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:15:52 np0005464891 charming_lamport[95030]:        "type": "bluestore"
Oct  1 12:15:52 np0005464891 charming_lamport[95030]:    },
Oct  1 12:15:52 np0005464891 charming_lamport[95030]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:15:52 np0005464891 charming_lamport[95030]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:15:52 np0005464891 charming_lamport[95030]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:15:52 np0005464891 charming_lamport[95030]:        "osd_id": 1,
Oct  1 12:15:52 np0005464891 charming_lamport[95030]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:15:52 np0005464891 charming_lamport[95030]:        "type": "bluestore"
Oct  1 12:15:52 np0005464891 charming_lamport[95030]:    }
Oct  1 12:15:52 np0005464891 charming_lamport[95030]: }
Oct  1 12:15:52 np0005464891 systemd[1]: libpod-7fdfe1f2d4393853f25be44908a042b37213f700cfcb056608495c16a40a1c3c.scope: Deactivated successfully.
Oct  1 12:15:52 np0005464891 podman[94998]: 2025-10-01 16:15:52.566040947 +0000 UTC m=+1.320142103 container died 7fdfe1f2d4393853f25be44908a042b37213f700cfcb056608495c16a40a1c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 12:15:52 np0005464891 systemd[1]: libpod-7fdfe1f2d4393853f25be44908a042b37213f700cfcb056608495c16a40a1c3c.scope: Consumed 1.125s CPU time.
Oct  1 12:15:52 np0005464891 systemd[1]: var-lib-containers-storage-overlay-32e13bbfd7d902789530450f891cd5a1a979259e86b0918f9f72aa2276c9d844-merged.mount: Deactivated successfully.
Oct  1 12:15:52 np0005464891 podman[94998]: 2025-10-01 16:15:52.63401273 +0000 UTC m=+1.388113836 container remove 7fdfe1f2d4393853f25be44908a042b37213f700cfcb056608495c16a40a1c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:15:52 np0005464891 systemd[1]: libpod-conmon-7fdfe1f2d4393853f25be44908a042b37213f700cfcb056608495c16a40a1c3c.scope: Deactivated successfully.
Oct  1 12:15:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:15:52 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:15:52 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Oct  1 12:15:52 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2535602754' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  1 12:15:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Oct  1 12:15:52 np0005464891 mystifying_grothendieck[95032]: pool 'backups' created
Oct  1 12:15:52 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Oct  1 12:15:52 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/2535602754' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  1 12:15:52 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:52 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:15:52 np0005464891 systemd[1]: libpod-dbfb8950a325ec77b3f3b1299fb4b7cfdc38aa07a49ec03bc644c5343d839d40.scope: Deactivated successfully.
Oct  1 12:15:52 np0005464891 podman[94997]: 2025-10-01 16:15:52.872167449 +0000 UTC m=+1.626759107 container died dbfb8950a325ec77b3f3b1299fb4b7cfdc38aa07a49ec03bc644c5343d839d40 (image=quay.io/ceph/ceph:v18, name=mystifying_grothendieck, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:15:52 np0005464891 systemd[1]: var-lib-containers-storage-overlay-9fbd5fd612dabac55f5ae96409bcd930c04349d9b30a55737be027ed31c75693-merged.mount: Deactivated successfully.
Oct  1 12:15:52 np0005464891 podman[94997]: 2025-10-01 16:15:52.926682644 +0000 UTC m=+1.681274282 container remove dbfb8950a325ec77b3f3b1299fb4b7cfdc38aa07a49ec03bc644c5343d839d40 (image=quay.io/ceph/ceph:v18, name=mystifying_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:15:52 np0005464891 systemd[1]: libpod-conmon-dbfb8950a325ec77b3f3b1299fb4b7cfdc38aa07a49ec03bc644c5343d839d40.scope: Deactivated successfully.
Oct  1 12:15:53 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 21 pg[4.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:15:53 np0005464891 python3[95187]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:15:53 np0005464891 podman[95188]: 2025-10-01 16:15:53.289604957 +0000 UTC m=+0.051071631 container create 21921bfe560f8b0aa8b66c02ea55fee2f9c4e431de5289f53f674f8d8cb9a810 (image=quay.io/ceph/ceph:v18, name=festive_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct  1 12:15:53 np0005464891 systemd[1]: Started libpod-conmon-21921bfe560f8b0aa8b66c02ea55fee2f9c4e431de5289f53f674f8d8cb9a810.scope.
Oct  1 12:15:53 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:53 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa649c22d87c379b0f616f6ea4bca9263d2a13260841c3788690b951d5f2e0a9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:53 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa649c22d87c379b0f616f6ea4bca9263d2a13260841c3788690b951d5f2e0a9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:53 np0005464891 podman[95188]: 2025-10-01 16:15:53.264851241 +0000 UTC m=+0.026317935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:15:53 np0005464891 podman[95188]: 2025-10-01 16:15:53.365842003 +0000 UTC m=+0.127308687 container init 21921bfe560f8b0aa8b66c02ea55fee2f9c4e431de5289f53f674f8d8cb9a810 (image=quay.io/ceph/ceph:v18, name=festive_robinson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 12:15:53 np0005464891 podman[95188]: 2025-10-01 16:15:53.376165215 +0000 UTC m=+0.137631879 container start 21921bfe560f8b0aa8b66c02ea55fee2f9c4e431de5289f53f674f8d8cb9a810 (image=quay.io/ceph/ceph:v18, name=festive_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 12:15:53 np0005464891 podman[95188]: 2025-10-01 16:15:53.379321822 +0000 UTC m=+0.140788486 container attach 21921bfe560f8b0aa8b66c02ea55fee2f9c4e431de5289f53f674f8d8cb9a810 (image=quay.io/ceph/ceph:v18, name=festive_robinson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:15:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Oct  1 12:15:53 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/2535602754' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  1 12:15:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Oct  1 12:15:53 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Oct  1 12:15:53 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:15:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  1 12:15:53 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2073983757' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  1 12:15:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v64: 4 pgs: 1 unknown, 3 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:15:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Oct  1 12:15:54 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2073983757' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  1 12:15:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Oct  1 12:15:54 np0005464891 festive_robinson[95203]: pool 'images' created
Oct  1 12:15:54 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Oct  1 12:15:54 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 23 pg[5.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:15:54 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/2073983757' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  1 12:15:54 np0005464891 systemd[1]: libpod-21921bfe560f8b0aa8b66c02ea55fee2f9c4e431de5289f53f674f8d8cb9a810.scope: Deactivated successfully.
Oct  1 12:15:54 np0005464891 podman[95188]: 2025-10-01 16:15:54.911891025 +0000 UTC m=+1.673357689 container died 21921bfe560f8b0aa8b66c02ea55fee2f9c4e431de5289f53f674f8d8cb9a810 (image=quay.io/ceph/ceph:v18, name=festive_robinson, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  1 12:15:54 np0005464891 systemd[1]: var-lib-containers-storage-overlay-fa649c22d87c379b0f616f6ea4bca9263d2a13260841c3788690b951d5f2e0a9-merged.mount: Deactivated successfully.
Oct  1 12:15:54 np0005464891 podman[95188]: 2025-10-01 16:15:54.948727767 +0000 UTC m=+1.710194431 container remove 21921bfe560f8b0aa8b66c02ea55fee2f9c4e431de5289f53f674f8d8cb9a810 (image=quay.io/ceph/ceph:v18, name=festive_robinson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Oct  1 12:15:54 np0005464891 systemd[1]: libpod-conmon-21921bfe560f8b0aa8b66c02ea55fee2f9c4e431de5289f53f674f8d8cb9a810.scope: Deactivated successfully.
Oct  1 12:15:55 np0005464891 python3[95269]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:15:55 np0005464891 podman[95270]: 2025-10-01 16:15:55.341630503 +0000 UTC m=+0.068213610 container create 03b1f0229b514eedd1ebe79d723980ad1dd48d156f45580bd180c674e92f7a25 (image=quay.io/ceph/ceph:v18, name=agitated_tharp, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 12:15:55 np0005464891 systemd[1]: Started libpod-conmon-03b1f0229b514eedd1ebe79d723980ad1dd48d156f45580bd180c674e92f7a25.scope.
Oct  1 12:15:55 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:55 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eff68c3e0fc1f24abbe4748f5aaec270fbc46fa5f85c341cd1555dc5abc29b24/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:55 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eff68c3e0fc1f24abbe4748f5aaec270fbc46fa5f85c341cd1555dc5abc29b24/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:55 np0005464891 podman[95270]: 2025-10-01 16:15:55.315400272 +0000 UTC m=+0.041983469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:15:55 np0005464891 podman[95270]: 2025-10-01 16:15:55.418130526 +0000 UTC m=+0.144713663 container init 03b1f0229b514eedd1ebe79d723980ad1dd48d156f45580bd180c674e92f7a25 (image=quay.io/ceph/ceph:v18, name=agitated_tharp, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 12:15:55 np0005464891 podman[95270]: 2025-10-01 16:15:55.427390083 +0000 UTC m=+0.153973200 container start 03b1f0229b514eedd1ebe79d723980ad1dd48d156f45580bd180c674e92f7a25 (image=quay.io/ceph/ceph:v18, name=agitated_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:15:55 np0005464891 podman[95270]: 2025-10-01 16:15:55.430766095 +0000 UTC m=+0.157349202 container attach 03b1f0229b514eedd1ebe79d723980ad1dd48d156f45580bd180c674e92f7a25 (image=quay.io/ceph/ceph:v18, name=agitated_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 12:15:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Oct  1 12:15:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Oct  1 12:15:55 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Oct  1 12:15:55 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/2073983757' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  1 12:15:55 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 24 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:15:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  1 12:15:55 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2515368431' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  1 12:15:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v67: 5 pgs: 2 unknown, 3 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:15:55 np0005464891 ceph-mon[74303]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  1 12:15:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:15:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Oct  1 12:15:56 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2515368431' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  1 12:15:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Oct  1 12:15:56 np0005464891 agitated_tharp[95284]: pool 'cephfs.cephfs.meta' created
Oct  1 12:15:56 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/2515368431' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  1 12:15:56 np0005464891 ceph-mon[74303]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  1 12:15:56 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Oct  1 12:15:56 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 25 pg[6.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [0] r=0 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:15:56 np0005464891 systemd[1]: libpod-03b1f0229b514eedd1ebe79d723980ad1dd48d156f45580bd180c674e92f7a25.scope: Deactivated successfully.
Oct  1 12:15:56 np0005464891 podman[95270]: 2025-10-01 16:15:56.961806519 +0000 UTC m=+1.688389666 container died 03b1f0229b514eedd1ebe79d723980ad1dd48d156f45580bd180c674e92f7a25 (image=quay.io/ceph/ceph:v18, name=agitated_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:15:56 np0005464891 systemd[1]: var-lib-containers-storage-overlay-eff68c3e0fc1f24abbe4748f5aaec270fbc46fa5f85c341cd1555dc5abc29b24-merged.mount: Deactivated successfully.
Oct  1 12:15:57 np0005464891 podman[95270]: 2025-10-01 16:15:57.017540844 +0000 UTC m=+1.744123991 container remove 03b1f0229b514eedd1ebe79d723980ad1dd48d156f45580bd180c674e92f7a25 (image=quay.io/ceph/ceph:v18, name=agitated_tharp, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 12:15:57 np0005464891 systemd[1]: libpod-conmon-03b1f0229b514eedd1ebe79d723980ad1dd48d156f45580bd180c674e92f7a25.scope: Deactivated successfully.
Oct  1 12:15:57 np0005464891 python3[95349]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:15:57 np0005464891 podman[95350]: 2025-10-01 16:15:57.418815035 +0000 UTC m=+0.055648052 container create 2281a74344aa520d8199fc0e0417730ac99aade97054a4771c0b05a271bfe31f (image=quay.io/ceph/ceph:v18, name=eloquent_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 12:15:57 np0005464891 systemd[1]: Started libpod-conmon-2281a74344aa520d8199fc0e0417730ac99aade97054a4771c0b05a271bfe31f.scope.
Oct  1 12:15:57 np0005464891 podman[95350]: 2025-10-01 16:15:57.393342072 +0000 UTC m=+0.030175129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:15:57 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:57 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ea9767eb5583e314b7d6526ed405432385f2003cb38500b765883ee0c25aff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:57 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ea9767eb5583e314b7d6526ed405432385f2003cb38500b765883ee0c25aff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:57 np0005464891 podman[95350]: 2025-10-01 16:15:57.514392325 +0000 UTC m=+0.151225352 container init 2281a74344aa520d8199fc0e0417730ac99aade97054a4771c0b05a271bfe31f (image=quay.io/ceph/ceph:v18, name=eloquent_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:15:57 np0005464891 podman[95350]: 2025-10-01 16:15:57.519337296 +0000 UTC m=+0.156170303 container start 2281a74344aa520d8199fc0e0417730ac99aade97054a4771c0b05a271bfe31f (image=quay.io/ceph/ceph:v18, name=eloquent_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:15:57 np0005464891 podman[95350]: 2025-10-01 16:15:57.52275401 +0000 UTC m=+0.159587117 container attach 2281a74344aa520d8199fc0e0417730ac99aade97054a4771c0b05a271bfe31f (image=quay.io/ceph/ceph:v18, name=eloquent_swanson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:15:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Oct  1 12:15:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Oct  1 12:15:57 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Oct  1 12:15:57 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/2515368431' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  1 12:15:57 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 26 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [0] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:15:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v70: 6 pgs: 1 creating+peering, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:15:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  1 12:15:58 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1291885720' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  1 12:15:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Oct  1 12:15:58 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1291885720' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  1 12:15:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Oct  1 12:15:58 np0005464891 eloquent_swanson[95366]: pool 'cephfs.cephfs.data' created
Oct  1 12:15:58 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/1291885720' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  1 12:15:58 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Oct  1 12:15:58 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 27 pg[7.0( empty local-lis/les=0/0 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [1] r=0 lpr=27 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:15:58 np0005464891 systemd[1]: libpod-2281a74344aa520d8199fc0e0417730ac99aade97054a4771c0b05a271bfe31f.scope: Deactivated successfully.
Oct  1 12:15:58 np0005464891 podman[95350]: 2025-10-01 16:15:58.984439766 +0000 UTC m=+1.621272803 container died 2281a74344aa520d8199fc0e0417730ac99aade97054a4771c0b05a271bfe31f (image=quay.io/ceph/ceph:v18, name=eloquent_swanson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 12:15:59 np0005464891 systemd[1]: var-lib-containers-storage-overlay-63ea9767eb5583e314b7d6526ed405432385f2003cb38500b765883ee0c25aff-merged.mount: Deactivated successfully.
Oct  1 12:15:59 np0005464891 podman[95350]: 2025-10-01 16:15:59.039136945 +0000 UTC m=+1.675969952 container remove 2281a74344aa520d8199fc0e0417730ac99aade97054a4771c0b05a271bfe31f (image=quay.io/ceph/ceph:v18, name=eloquent_swanson, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:15:59 np0005464891 systemd[1]: libpod-conmon-2281a74344aa520d8199fc0e0417730ac99aade97054a4771c0b05a271bfe31f.scope: Deactivated successfully.
Oct  1 12:15:59 np0005464891 python3[95428]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:15:59 np0005464891 podman[95429]: 2025-10-01 16:15:59.524850174 +0000 UTC m=+0.067730649 container create b57a39b8218ad1ec57db82887701532668afbd775bcd7ff485879860f4da9af9 (image=quay.io/ceph/ceph:v18, name=crazy_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:15:59 np0005464891 systemd[1]: Started libpod-conmon-b57a39b8218ad1ec57db82887701532668afbd775bcd7ff485879860f4da9af9.scope.
Oct  1 12:15:59 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:15:59 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16e9a2dfc3acac921e283773e219529b6b17ad1be534c177a6d084b5a9910c81/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:59 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16e9a2dfc3acac921e283773e219529b6b17ad1be534c177a6d084b5a9910c81/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:15:59 np0005464891 podman[95429]: 2025-10-01 16:15:59.493969628 +0000 UTC m=+0.036850173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:15:59 np0005464891 podman[95429]: 2025-10-01 16:15:59.599241085 +0000 UTC m=+0.142121640 container init b57a39b8218ad1ec57db82887701532668afbd775bcd7ff485879860f4da9af9 (image=quay.io/ceph/ceph:v18, name=crazy_heisenberg, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Oct  1 12:15:59 np0005464891 podman[95429]: 2025-10-01 16:15:59.606061891 +0000 UTC m=+0.148942356 container start b57a39b8218ad1ec57db82887701532668afbd775bcd7ff485879860f4da9af9 (image=quay.io/ceph/ceph:v18, name=crazy_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:15:59 np0005464891 podman[95429]: 2025-10-01 16:15:59.609424823 +0000 UTC m=+0.152305278 container attach b57a39b8218ad1ec57db82887701532668afbd775bcd7ff485879860f4da9af9 (image=quay.io/ceph/ceph:v18, name=crazy_heisenberg, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  1 12:15:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 1 unknown, 1 creating+peering, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:15:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Oct  1 12:15:59 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/1291885720' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  1 12:15:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Oct  1 12:15:59 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Oct  1 12:16:00 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 28 pg[7.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [1] r=0 lpr=27 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Oct  1 12:16:00 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1342118877' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct  1 12:16:00 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/1342118877' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct  1 12:16:00 np0005464891 ceph-mon[74303]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  1 12:16:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:16:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Oct  1 12:16:01 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1342118877' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct  1 12:16:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Oct  1 12:16:01 np0005464891 crazy_heisenberg[95444]: enabled application 'rbd' on pool 'vms'
Oct  1 12:16:01 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Oct  1 12:16:01 np0005464891 systemd[1]: libpod-b57a39b8218ad1ec57db82887701532668afbd775bcd7ff485879860f4da9af9.scope: Deactivated successfully.
Oct  1 12:16:01 np0005464891 podman[95429]: 2025-10-01 16:16:01.089341946 +0000 UTC m=+1.632222431 container died b57a39b8218ad1ec57db82887701532668afbd775bcd7ff485879860f4da9af9 (image=quay.io/ceph/ceph:v18, name=crazy_heisenberg, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 12:16:01 np0005464891 systemd[1]: var-lib-containers-storage-overlay-16e9a2dfc3acac921e283773e219529b6b17ad1be534c177a6d084b5a9910c81-merged.mount: Deactivated successfully.
Oct  1 12:16:01 np0005464891 podman[95429]: 2025-10-01 16:16:01.14260743 +0000 UTC m=+1.685487885 container remove b57a39b8218ad1ec57db82887701532668afbd775bcd7ff485879860f4da9af9 (image=quay.io/ceph/ceph:v18, name=crazy_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 12:16:01 np0005464891 systemd[1]: libpod-conmon-b57a39b8218ad1ec57db82887701532668afbd775bcd7ff485879860f4da9af9.scope: Deactivated successfully.
Oct  1 12:16:01 np0005464891 python3[95506]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:16:01 np0005464891 podman[95507]: 2025-10-01 16:16:01.50949846 +0000 UTC m=+0.048678533 container create 954f7f06b2db65d5c5ed29f2548cddb3babcdc75b1f38d6b076747b8a27d11fa (image=quay.io/ceph/ceph:v18, name=silly_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 12:16:01 np0005464891 systemd[1]: Started libpod-conmon-954f7f06b2db65d5c5ed29f2548cddb3babcdc75b1f38d6b076747b8a27d11fa.scope.
Oct  1 12:16:01 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:01 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd68809136d126894961744e28bf1e350603c01e67893d5d19e67c13eb8dc9c7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:01 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd68809136d126894961744e28bf1e350603c01e67893d5d19e67c13eb8dc9c7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:01 np0005464891 podman[95507]: 2025-10-01 16:16:01.488849084 +0000 UTC m=+0.028029217 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:16:01 np0005464891 podman[95507]: 2025-10-01 16:16:01.584535546 +0000 UTC m=+0.123715629 container init 954f7f06b2db65d5c5ed29f2548cddb3babcdc75b1f38d6b076747b8a27d11fa (image=quay.io/ceph/ceph:v18, name=silly_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:01 np0005464891 podman[95507]: 2025-10-01 16:16:01.590619856 +0000 UTC m=+0.129799929 container start 954f7f06b2db65d5c5ed29f2548cddb3babcdc75b1f38d6b076747b8a27d11fa (image=quay.io/ceph/ceph:v18, name=silly_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 12:16:01 np0005464891 podman[95507]: 2025-10-01 16:16:01.59487707 +0000 UTC m=+0.134057143 container attach 954f7f06b2db65d5c5ed29f2548cddb3babcdc75b1f38d6b076747b8a27d11fa (image=quay.io/ceph/ceph:v18, name=silly_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:16:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 1 unknown, 1 creating+peering, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:16:01 np0005464891 ceph-mon[74303]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  1 12:16:01 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/1342118877' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct  1 12:16:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Oct  1 12:16:02 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3828169166' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct  1 12:16:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Oct  1 12:16:03 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/3828169166' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct  1 12:16:03 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3828169166' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct  1 12:16:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Oct  1 12:16:03 np0005464891 silly_kare[95521]: enabled application 'rbd' on pool 'volumes'
Oct  1 12:16:03 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Oct  1 12:16:03 np0005464891 systemd[1]: libpod-954f7f06b2db65d5c5ed29f2548cddb3babcdc75b1f38d6b076747b8a27d11fa.scope: Deactivated successfully.
Oct  1 12:16:03 np0005464891 podman[95507]: 2025-10-01 16:16:03.096162136 +0000 UTC m=+1.635342199 container died 954f7f06b2db65d5c5ed29f2548cddb3babcdc75b1f38d6b076747b8a27d11fa (image=quay.io/ceph/ceph:v18, name=silly_kare, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 12:16:03 np0005464891 systemd[1]: var-lib-containers-storage-overlay-dd68809136d126894961744e28bf1e350603c01e67893d5d19e67c13eb8dc9c7-merged.mount: Deactivated successfully.
Oct  1 12:16:03 np0005464891 podman[95507]: 2025-10-01 16:16:03.136902724 +0000 UTC m=+1.676082797 container remove 954f7f06b2db65d5c5ed29f2548cddb3babcdc75b1f38d6b076747b8a27d11fa (image=quay.io/ceph/ceph:v18, name=silly_kare, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:16:03 np0005464891 systemd[1]: libpod-conmon-954f7f06b2db65d5c5ed29f2548cddb3babcdc75b1f38d6b076747b8a27d11fa.scope: Deactivated successfully.
Oct  1 12:16:03 np0005464891 python3[95583]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:16:03 np0005464891 podman[95584]: 2025-10-01 16:16:03.508667253 +0000 UTC m=+0.082523701 container create 2fa17d296c47d2182c45ac1b0ef9bc4574635c80199abfac05dc47017c879a20 (image=quay.io/ceph/ceph:v18, name=gifted_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:16:03 np0005464891 podman[95584]: 2025-10-01 16:16:03.453689538 +0000 UTC m=+0.027546066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:16:03 np0005464891 systemd[1]: Started libpod-conmon-2fa17d296c47d2182c45ac1b0ef9bc4574635c80199abfac05dc47017c879a20.scope.
Oct  1 12:16:03 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:03 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beeac20bf90fedc194903720152212220fe3d8cb039399a8731877983b1b3a7b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:03 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beeac20bf90fedc194903720152212220fe3d8cb039399a8731877983b1b3a7b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:03 np0005464891 podman[95584]: 2025-10-01 16:16:03.641207508 +0000 UTC m=+0.215064026 container init 2fa17d296c47d2182c45ac1b0ef9bc4574635c80199abfac05dc47017c879a20 (image=quay.io/ceph/ceph:v18, name=gifted_bose, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct  1 12:16:03 np0005464891 podman[95584]: 2025-10-01 16:16:03.65197455 +0000 UTC m=+0.225830988 container start 2fa17d296c47d2182c45ac1b0ef9bc4574635c80199abfac05dc47017c879a20 (image=quay.io/ceph/ceph:v18, name=gifted_bose, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:16:03 np0005464891 podman[95584]: 2025-10-01 16:16:03.663078772 +0000 UTC m=+0.236935240 container attach 2fa17d296c47d2182c45ac1b0ef9bc4574635c80199abfac05dc47017c879a20 (image=quay.io/ceph/ceph:v18, name=gifted_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:16:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:16:04 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/3828169166' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct  1 12:16:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Oct  1 12:16:04 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1952729009' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct  1 12:16:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Oct  1 12:16:05 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/1952729009' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct  1 12:16:05 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1952729009' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct  1 12:16:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Oct  1 12:16:05 np0005464891 gifted_bose[95599]: enabled application 'rbd' on pool 'backups'
Oct  1 12:16:05 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Oct  1 12:16:05 np0005464891 systemd[1]: libpod-2fa17d296c47d2182c45ac1b0ef9bc4574635c80199abfac05dc47017c879a20.scope: Deactivated successfully.
Oct  1 12:16:05 np0005464891 podman[95584]: 2025-10-01 16:16:05.125886475 +0000 UTC m=+1.699742913 container died 2fa17d296c47d2182c45ac1b0ef9bc4574635c80199abfac05dc47017c879a20 (image=quay.io/ceph/ceph:v18, name=gifted_bose, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:16:05 np0005464891 systemd[1]: var-lib-containers-storage-overlay-beeac20bf90fedc194903720152212220fe3d8cb039399a8731877983b1b3a7b-merged.mount: Deactivated successfully.
Oct  1 12:16:05 np0005464891 podman[95584]: 2025-10-01 16:16:05.176257535 +0000 UTC m=+1.750113973 container remove 2fa17d296c47d2182c45ac1b0ef9bc4574635c80199abfac05dc47017c879a20 (image=quay.io/ceph/ceph:v18, name=gifted_bose, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:16:05 np0005464891 systemd[1]: libpod-conmon-2fa17d296c47d2182c45ac1b0ef9bc4574635c80199abfac05dc47017c879a20.scope: Deactivated successfully.
Oct  1 12:16:05 np0005464891 python3[95663]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:16:05 np0005464891 podman[95664]: 2025-10-01 16:16:05.564548643 +0000 UTC m=+0.053308374 container create 0da7ad9ceff99f32df320791640b611a2d8243ec6006dbe4360465d2ef504c43 (image=quay.io/ceph/ceph:v18, name=pedantic_morse, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:05 np0005464891 systemd[1]: Started libpod-conmon-0da7ad9ceff99f32df320791640b611a2d8243ec6006dbe4360465d2ef504c43.scope.
Oct  1 12:16:05 np0005464891 podman[95664]: 2025-10-01 16:16:05.541430847 +0000 UTC m=+0.030190568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:16:05 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:05 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0495549ad5e2dcb1d9d743a58e8d7b419c6f980f7d481f15a585508af139211d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:05 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0495549ad5e2dcb1d9d743a58e8d7b419c6f980f7d481f15a585508af139211d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:05 np0005464891 podman[95664]: 2025-10-01 16:16:05.658829849 +0000 UTC m=+0.147589610 container init 0da7ad9ceff99f32df320791640b611a2d8243ec6006dbe4360465d2ef504c43 (image=quay.io/ceph/ceph:v18, name=pedantic_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:05 np0005464891 podman[95664]: 2025-10-01 16:16:05.669516222 +0000 UTC m=+0.158275923 container start 0da7ad9ceff99f32df320791640b611a2d8243ec6006dbe4360465d2ef504c43 (image=quay.io/ceph/ceph:v18, name=pedantic_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Oct  1 12:16:05 np0005464891 podman[95664]: 2025-10-01 16:16:05.673390872 +0000 UTC m=+0.162150653 container attach 0da7ad9ceff99f32df320791640b611a2d8243ec6006dbe4360465d2ef504c43 (image=quay.io/ceph/ceph:v18, name=pedantic_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 12:16:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:16:05 np0005464891 ceph-mon[74303]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  1 12:16:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:16:06 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/1952729009' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct  1 12:16:06 np0005464891 ceph-mon[74303]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  1 12:16:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Oct  1 12:16:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1656615960' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct  1 12:16:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Oct  1 12:16:07 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/1656615960' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct  1 12:16:07 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1656615960' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct  1 12:16:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Oct  1 12:16:07 np0005464891 pedantic_morse[95679]: enabled application 'rbd' on pool 'images'
Oct  1 12:16:07 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Oct  1 12:16:07 np0005464891 systemd[1]: libpod-0da7ad9ceff99f32df320791640b611a2d8243ec6006dbe4360465d2ef504c43.scope: Deactivated successfully.
Oct  1 12:16:07 np0005464891 podman[95704]: 2025-10-01 16:16:07.222237776 +0000 UTC m=+0.040919352 container died 0da7ad9ceff99f32df320791640b611a2d8243ec6006dbe4360465d2ef504c43 (image=quay.io/ceph/ceph:v18, name=pedantic_morse, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 12:16:07 np0005464891 systemd[1]: var-lib-containers-storage-overlay-0495549ad5e2dcb1d9d743a58e8d7b419c6f980f7d481f15a585508af139211d-merged.mount: Deactivated successfully.
Oct  1 12:16:07 np0005464891 podman[95704]: 2025-10-01 16:16:07.274964142 +0000 UTC m=+0.093645658 container remove 0da7ad9ceff99f32df320791640b611a2d8243ec6006dbe4360465d2ef504c43 (image=quay.io/ceph/ceph:v18, name=pedantic_morse, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 12:16:07 np0005464891 systemd[1]: libpod-conmon-0da7ad9ceff99f32df320791640b611a2d8243ec6006dbe4360465d2ef504c43.scope: Deactivated successfully.
Oct  1 12:16:07 np0005464891 python3[95745]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:16:07 np0005464891 podman[95746]: 2025-10-01 16:16:07.689612249 +0000 UTC m=+0.046275954 container create 91adfa629d380400429ce48bd014e55301e9150e4e17e381611057e77e9898c5 (image=quay.io/ceph/ceph:v18, name=peaceful_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:16:07 np0005464891 systemd[1]: Started libpod-conmon-91adfa629d380400429ce48bd014e55301e9150e4e17e381611057e77e9898c5.scope.
Oct  1 12:16:07 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:07 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c5b59667feecbf4564eebe2f320f47d6e7816b79ef9f6de5f23d27b07ea9586/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:07 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c5b59667feecbf4564eebe2f320f47d6e7816b79ef9f6de5f23d27b07ea9586/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:07 np0005464891 podman[95746]: 2025-10-01 16:16:07.672115533 +0000 UTC m=+0.028779278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:16:07 np0005464891 podman[95746]: 2025-10-01 16:16:07.784864532 +0000 UTC m=+0.141528297 container init 91adfa629d380400429ce48bd014e55301e9150e4e17e381611057e77e9898c5 (image=quay.io/ceph/ceph:v18, name=peaceful_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:07 np0005464891 podman[95746]: 2025-10-01 16:16:07.795757732 +0000 UTC m=+0.152421487 container start 91adfa629d380400429ce48bd014e55301e9150e4e17e381611057e77e9898c5 (image=quay.io/ceph/ceph:v18, name=peaceful_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 12:16:07 np0005464891 podman[95746]: 2025-10-01 16:16:07.799648862 +0000 UTC m=+0.156312607 container attach 91adfa629d380400429ce48bd014e55301e9150e4e17e381611057e77e9898c5 (image=quay.io/ceph/ceph:v18, name=peaceful_dubinsky, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:16:08 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/1656615960' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct  1 12:16:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Oct  1 12:16:08 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2833651684' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct  1 12:16:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Oct  1 12:16:09 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/2833651684' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct  1 12:16:09 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2833651684' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct  1 12:16:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Oct  1 12:16:09 np0005464891 peaceful_dubinsky[95762]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Oct  1 12:16:09 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Oct  1 12:16:09 np0005464891 systemd[1]: libpod-91adfa629d380400429ce48bd014e55301e9150e4e17e381611057e77e9898c5.scope: Deactivated successfully.
Oct  1 12:16:09 np0005464891 podman[95746]: 2025-10-01 16:16:09.200766503 +0000 UTC m=+1.557430218 container died 91adfa629d380400429ce48bd014e55301e9150e4e17e381611057e77e9898c5 (image=quay.io/ceph/ceph:v18, name=peaceful_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:09 np0005464891 systemd[1]: var-lib-containers-storage-overlay-6c5b59667feecbf4564eebe2f320f47d6e7816b79ef9f6de5f23d27b07ea9586-merged.mount: Deactivated successfully.
Oct  1 12:16:09 np0005464891 podman[95746]: 2025-10-01 16:16:09.263012309 +0000 UTC m=+1.619676034 container remove 91adfa629d380400429ce48bd014e55301e9150e4e17e381611057e77e9898c5 (image=quay.io/ceph/ceph:v18, name=peaceful_dubinsky, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 12:16:09 np0005464891 systemd[1]: libpod-conmon-91adfa629d380400429ce48bd014e55301e9150e4e17e381611057e77e9898c5.scope: Deactivated successfully.
Oct  1 12:16:09 np0005464891 python3[95824]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:16:09 np0005464891 podman[95825]: 2025-10-01 16:16:09.696209403 +0000 UTC m=+0.057529624 container create cc68c8277468aaf540996e2d165a80cb4a417eade180279b503f76e94bbf2325 (image=quay.io/ceph/ceph:v18, name=quizzical_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 12:16:09 np0005464891 systemd[1]: Started libpod-conmon-cc68c8277468aaf540996e2d165a80cb4a417eade180279b503f76e94bbf2325.scope.
Oct  1 12:16:09 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503ba668bde0ddca935724303530bf7837ecef4640fad1eba19588d6659b2be6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503ba668bde0ddca935724303530bf7837ecef4640fad1eba19588d6659b2be6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:09 np0005464891 podman[95825]: 2025-10-01 16:16:09.759001374 +0000 UTC m=+0.120321585 container init cc68c8277468aaf540996e2d165a80cb4a417eade180279b503f76e94bbf2325 (image=quay.io/ceph/ceph:v18, name=quizzical_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 12:16:09 np0005464891 podman[95825]: 2025-10-01 16:16:09.668776844 +0000 UTC m=+0.030097135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:16:09 np0005464891 podman[95825]: 2025-10-01 16:16:09.768908406 +0000 UTC m=+0.130228667 container start cc68c8277468aaf540996e2d165a80cb4a417eade180279b503f76e94bbf2325 (image=quay.io/ceph/ceph:v18, name=quizzical_jemison, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 12:16:09 np0005464891 podman[95825]: 2025-10-01 16:16:09.772846587 +0000 UTC m=+0.134166818 container attach cc68c8277468aaf540996e2d165a80cb4a417eade180279b503f76e94bbf2325 (image=quay.io/ceph/ceph:v18, name=quizzical_jemison, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:16:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v83: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:16:10 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/2833651684' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct  1 12:16:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Oct  1 12:16:10 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3128584871' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct  1 12:16:10 np0005464891 ceph-mon[74303]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  1 12:16:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:16:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Oct  1 12:16:11 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/3128584871' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct  1 12:16:11 np0005464891 ceph-mon[74303]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  1 12:16:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3128584871' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct  1 12:16:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Oct  1 12:16:11 np0005464891 quizzical_jemison[95840]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Oct  1 12:16:11 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Oct  1 12:16:11 np0005464891 systemd[1]: libpod-cc68c8277468aaf540996e2d165a80cb4a417eade180279b503f76e94bbf2325.scope: Deactivated successfully.
Oct  1 12:16:11 np0005464891 conmon[95840]: conmon cc68c8277468aaf54099 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc68c8277468aaf540996e2d165a80cb4a417eade180279b503f76e94bbf2325.scope/container/memory.events
Oct  1 12:16:11 np0005464891 podman[95825]: 2025-10-01 16:16:11.225341247 +0000 UTC m=+1.586661448 container died cc68c8277468aaf540996e2d165a80cb4a417eade180279b503f76e94bbf2325 (image=quay.io/ceph/ceph:v18, name=quizzical_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:16:11 np0005464891 systemd[1]: var-lib-containers-storage-overlay-503ba668bde0ddca935724303530bf7837ecef4640fad1eba19588d6659b2be6-merged.mount: Deactivated successfully.
Oct  1 12:16:11 np0005464891 podman[95825]: 2025-10-01 16:16:11.280278776 +0000 UTC m=+1.641598987 container remove cc68c8277468aaf540996e2d165a80cb4a417eade180279b503f76e94bbf2325 (image=quay.io/ceph/ceph:v18, name=quizzical_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 12:16:11 np0005464891 systemd[1]: libpod-conmon-cc68c8277468aaf540996e2d165a80cb4a417eade180279b503f76e94bbf2325.scope: Deactivated successfully.
Oct  1 12:16:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:16:11
Oct  1 12:16:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:16:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:16:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', '.mgr', 'vms', 'images', 'backups', 'cephfs.cephfs.data']
Oct  1 12:16:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:16:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v85: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  1 12:16:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Oct  1 12:16:12 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:16:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Oct  1 12:16:12 np0005464891 ceph-mon[74303]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  1 12:16:12 np0005464891 ceph-mon[74303]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  1 12:16:12 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/3128584871' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct  1 12:16:12 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 12:16:12 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct  1 12:16:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Oct  1 12:16:12 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Oct  1 12:16:12 np0005464891 ceph-mgr[74592]: [progress INFO root] update: starting ev b179b1a7-f28b-41e4-ac0a-c78491d6bbdf (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct  1 12:16:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Oct  1 12:16:12 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 12:16:12 np0005464891 python3[95953]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 12:16:12 np0005464891 python3[96024]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759335371.9991782-33313-266399962547924/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:16:13 np0005464891 ceph-mon[74303]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  1 12:16:13 np0005464891 ceph-mon[74303]: Cluster is now healthy
Oct  1 12:16:13 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct  1 12:16:13 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 12:16:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Oct  1 12:16:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct  1 12:16:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Oct  1 12:16:13 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Oct  1 12:16:13 np0005464891 ceph-mgr[74592]: [progress INFO root] update: starting ev 723611b6-d6cd-4ae0-98a7-de44bce359ed (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct  1 12:16:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Oct  1 12:16:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 12:16:13 np0005464891 python3[96126]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 12:16:13 np0005464891 python3[96201]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759335373.0037599-33327-144788468774841/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=7efe129eac7774dce2698b7da446a65fb4a693ad backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:16:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v88: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:16:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  1 12:16:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 12:16:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  1 12:16:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 12:16:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Oct  1 12:16:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct  1 12:16:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 12:16:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 12:16:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Oct  1 12:16:14 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct  1 12:16:14 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 12:16:14 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 12:16:14 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 12:16:14 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 37 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=37 pruub=9.589785576s) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active pruub 62.472541809s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:14 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Oct  1 12:16:14 np0005464891 ceph-mgr[74592]: [progress INFO root] update: starting ev 50490752-e60b-44ea-b3c1-80ddb46f5be8 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct  1 12:16:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Oct  1 12:16:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 12:16:14 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 37 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=37 pruub=9.589785576s) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown pruub 62.472541809s@ mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:14 np0005464891 python3[96251]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:16:14 np0005464891 podman[96252]: 2025-10-01 16:16:14.421371315 +0000 UTC m=+0.065520271 container create 863d3778d1dc821e6bfc37b4a43887f66838c3218da51173f28922a99f987866 (image=quay.io/ceph/ceph:v18, name=dazzling_banach, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 12:16:14 np0005464891 systemd[1]: Started libpod-conmon-863d3778d1dc821e6bfc37b4a43887f66838c3218da51173f28922a99f987866.scope.
Oct  1 12:16:14 np0005464891 podman[96252]: 2025-10-01 16:16:14.393397971 +0000 UTC m=+0.037546967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:16:14 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:14 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e7be3a9845d1afbfddb2cb060b3003a988ca197bf9f7a45bbf4de681a744ca/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:14 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e7be3a9845d1afbfddb2cb060b3003a988ca197bf9f7a45bbf4de681a744ca/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:14 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e7be3a9845d1afbfddb2cb060b3003a988ca197bf9f7a45bbf4de681a744ca/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:14 np0005464891 podman[96252]: 2025-10-01 16:16:14.515321641 +0000 UTC m=+0.159470647 container init 863d3778d1dc821e6bfc37b4a43887f66838c3218da51173f28922a99f987866 (image=quay.io/ceph/ceph:v18, name=dazzling_banach, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 12:16:14 np0005464891 podman[96252]: 2025-10-01 16:16:14.526017564 +0000 UTC m=+0.170166520 container start 863d3778d1dc821e6bfc37b4a43887f66838c3218da51173f28922a99f987866 (image=quay.io/ceph/ceph:v18, name=dazzling_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:16:14 np0005464891 podman[96252]: 2025-10-01 16:16:14.52940504 +0000 UTC m=+0.173553996 container attach 863d3778d1dc821e6bfc37b4a43887f66838c3218da51173f28922a99f987866 (image=quay.io/ceph/ceph:v18, name=dazzling_banach, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:16:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct  1 12:16:15 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2973530421' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  1 12:16:15 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2973530421' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  1 12:16:15 np0005464891 dazzling_banach[96267]: 
Oct  1 12:16:15 np0005464891 dazzling_banach[96267]: [global]
Oct  1 12:16:15 np0005464891 dazzling_banach[96267]: #011fsid = 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5
Oct  1 12:16:15 np0005464891 dazzling_banach[96267]: #011mon_host = 192.168.122.100
Oct  1 12:16:15 np0005464891 systemd[1]: libpod-863d3778d1dc821e6bfc37b4a43887f66838c3218da51173f28922a99f987866.scope: Deactivated successfully.
Oct  1 12:16:15 np0005464891 podman[96252]: 2025-10-01 16:16:15.132711542 +0000 UTC m=+0.776860498 container died 863d3778d1dc821e6bfc37b4a43887f66838c3218da51173f28922a99f987866 (image=quay.io/ceph/ceph:v18, name=dazzling_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:16:15 np0005464891 systemd[1]: var-lib-containers-storage-overlay-e2e7be3a9845d1afbfddb2cb060b3003a988ca197bf9f7a45bbf4de681a744ca-merged.mount: Deactivated successfully.
Oct  1 12:16:15 np0005464891 podman[96252]: 2025-10-01 16:16:15.180539509 +0000 UTC m=+0.824688455 container remove 863d3778d1dc821e6bfc37b4a43887f66838c3218da51173f28922a99f987866 (image=quay.io/ceph/ceph:v18, name=dazzling_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 12:16:15 np0005464891 systemd[1]: libpod-conmon-863d3778d1dc821e6bfc37b4a43887f66838c3218da51173f28922a99f987866.scope: Deactivated successfully.
Oct  1 12:16:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Oct  1 12:16:15 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct  1 12:16:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Oct  1 12:16:15 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Oct  1 12:16:15 np0005464891 ceph-mgr[74592]: [progress INFO root] update: starting ev da5a3770-7194-45ca-a18b-f6d7edfa1fbc (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct  1 12:16:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0) v1
Oct  1 12:16:15 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 12:16:15 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct  1 12:16:15 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 12:16:15 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 12:16:15 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 12:16:15 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/2973530421' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  1 12:16:15 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/2973530421' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.1f( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.1b( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.1e( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.a( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.7( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.6( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.5( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.3( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.1( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.8( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.1c( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.4( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.b( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.c( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.2( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.10( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.12( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.13( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.14( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.17( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.18( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.19( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.1a( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=37/38 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.1c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.10( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 38 pg[3.b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 37 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=37 pruub=14.334425926s) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active pruub 62.659030914s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=37 pruub=14.334425926s) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown pruub 62.659030914s@ mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.3( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.4( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.5( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.6( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.f( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.10( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.1( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.2( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.9( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.a( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.8( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.7( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.d( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.e( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.c( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.b( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.11( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.12( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.13( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.15( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.14( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.19( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.16( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.1a( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.18( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.17( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.1d( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.1b( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.1c( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.1e( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 38 pg[2.1f( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:15 np0005464891 python3[96411]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:16:15 np0005464891 podman[96430]: 2025-10-01 16:16:15.612729683 +0000 UTC m=+0.061990540 container create acd7e20b2b9fd1909cd4f6cd12c54ed3d4cfa34c8071839a59e8c7f0c7ffd6b2 (image=quay.io/ceph/ceph:v18, name=pedantic_easley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:16:15 np0005464891 systemd[1]: Started libpod-conmon-acd7e20b2b9fd1909cd4f6cd12c54ed3d4cfa34c8071839a59e8c7f0c7ffd6b2.scope.
Oct  1 12:16:15 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:15 np0005464891 podman[96430]: 2025-10-01 16:16:15.59178524 +0000 UTC m=+0.041046137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:16:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df678f32efdd70237b832a1699094ce3febfab844e081d1935e6112c33b56e91/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df678f32efdd70237b832a1699094ce3febfab844e081d1935e6112c33b56e91/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df678f32efdd70237b832a1699094ce3febfab844e081d1935e6112c33b56e91/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:15 np0005464891 podman[96430]: 2025-10-01 16:16:15.700962228 +0000 UTC m=+0.150223075 container init acd7e20b2b9fd1909cd4f6cd12c54ed3d4cfa34c8071839a59e8c7f0c7ffd6b2 (image=quay.io/ceph/ceph:v18, name=pedantic_easley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct  1 12:16:15 np0005464891 podman[96430]: 2025-10-01 16:16:15.706142254 +0000 UTC m=+0.155403111 container start acd7e20b2b9fd1909cd4f6cd12c54ed3d4cfa34c8071839a59e8c7f0c7ffd6b2 (image=quay.io/ceph/ceph:v18, name=pedantic_easley, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 12:16:15 np0005464891 podman[96430]: 2025-10-01 16:16:15.709360366 +0000 UTC m=+0.158621243 container attach acd7e20b2b9fd1909cd4f6cd12c54ed3d4cfa34c8071839a59e8c7f0c7ffd6b2 (image=quay.io/ceph/ceph:v18, name=pedantic_easley, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:16:15 np0005464891 podman[96522]: 2025-10-01 16:16:15.953171445 +0000 UTC m=+0.063507883 container exec 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 12:16:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v91: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:16:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  1 12:16:15 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 12:16:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  1 12:16:15 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 12:16:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:16:16 np0005464891 podman[96522]: 2025-10-01 16:16:16.071845723 +0000 UTC m=+0.182182091 container exec_died 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Oct  1 12:16:16 np0005464891 ceph-mgr[74592]: [progress INFO root] update: starting ev b15da69f-0e8f-4cbd-a02b-e7187fce23d8 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39 pruub=11.657836914s) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active pruub 60.761459351s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39 pruub=11.657836914s) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown pruub 60.761459351s@ mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.1b( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.18( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.1a( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.16( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.17( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.14( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.13( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.15( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.12( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.10( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.f( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.19( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.e( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.11( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.d( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.1( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.7( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.2( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.c( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.4( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.6( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.3( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.5( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.8( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.b( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.a( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.0( empty local-lis/les=37/39 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.1d( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.9( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.1c( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.1e( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 39 pg[2.1f( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [2] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1993962104' entity='client.admin' 
Oct  1 12:16:16 np0005464891 pedantic_easley[96471]: set ssl_option
Oct  1 12:16:16 np0005464891 systemd[1]: libpod-acd7e20b2b9fd1909cd4f6cd12c54ed3d4cfa34c8071839a59e8c7f0c7ffd6b2.scope: Deactivated successfully.
Oct  1 12:16:16 np0005464891 podman[96636]: 2025-10-01 16:16:16.439028562 +0000 UTC m=+0.042119596 container died acd7e20b2b9fd1909cd4f6cd12c54ed3d4cfa34c8071839a59e8c7f0c7ffd6b2 (image=quay.io/ceph/ceph:v18, name=pedantic_easley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:16 np0005464891 systemd[1]: var-lib-containers-storage-overlay-df678f32efdd70237b832a1699094ce3febfab844e081d1935e6112c33b56e91-merged.mount: Deactivated successfully.
Oct  1 12:16:16 np0005464891 podman[96636]: 2025-10-01 16:16:16.490068851 +0000 UTC m=+0.093159845 container remove acd7e20b2b9fd1909cd4f6cd12c54ed3d4cfa34c8071839a59e8c7f0c7ffd6b2 (image=quay.io/ceph/ceph:v18, name=pedantic_easley, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  1 12:16:16 np0005464891 systemd[1]: libpod-conmon-acd7e20b2b9fd1909cd4f6cd12c54ed3d4cfa34c8071839a59e8c7f0c7ffd6b2.scope: Deactivated successfully.
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:16 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 9a8336da-5e51-4cd2-add2-f2214e0fff86 does not exist
Oct  1 12:16:16 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 81489aee-2a91-4cf0-b399-3c30a97adb37 does not exist
Oct  1 12:16:16 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 42de5ce8-146c-400b-9317-f73fe3ba9966 does not exist
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:16:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:16:16 np0005464891 python3[96729]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:16:16 np0005464891 podman[96789]: 2025-10-01 16:16:16.910844451 +0000 UTC m=+0.044096531 container create 9fc97b63dc4b5520a7b688d2ecaa611e096d3ac3fd243f535ea268e6104e1b3f (image=quay.io/ceph/ceph:v18, name=sweet_varahamihira, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:16 np0005464891 systemd[1]: Started libpod-conmon-9fc97b63dc4b5520a7b688d2ecaa611e096d3ac3fd243f535ea268e6104e1b3f.scope.
Oct  1 12:16:16 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:16 np0005464891 podman[96789]: 2025-10-01 16:16:16.892735528 +0000 UTC m=+0.025987638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:16:17 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19010585954d80c6cd684241a9fa9db368095f220a595430b4bac7a16c851cb5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:17 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19010585954d80c6cd684241a9fa9db368095f220a595430b4bac7a16c851cb5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:17 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19010585954d80c6cd684241a9fa9db368095f220a595430b4bac7a16c851cb5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:17 np0005464891 podman[96789]: 2025-10-01 16:16:17.004995884 +0000 UTC m=+0.138248054 container init 9fc97b63dc4b5520a7b688d2ecaa611e096d3ac3fd243f535ea268e6104e1b3f (image=quay.io/ceph/ceph:v18, name=sweet_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  1 12:16:17 np0005464891 podman[96789]: 2025-10-01 16:16:17.013069563 +0000 UTC m=+0.146321653 container start 9fc97b63dc4b5520a7b688d2ecaa611e096d3ac3fd243f535ea268e6104e1b3f (image=quay.io/ceph/ceph:v18, name=sweet_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:17 np0005464891 podman[96789]: 2025-10-01 16:16:17.016343876 +0000 UTC m=+0.149596056 container attach 9fc97b63dc4b5520a7b688d2ecaa611e096d3ac3fd243f535ea268e6104e1b3f (image=quay.io/ceph/ceph:v18, name=sweet_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:16:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Oct  1 12:16:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct  1 12:16:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Oct  1 12:16:17 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Oct  1 12:16:17 np0005464891 ceph-mgr[74592]: [progress INFO root] update: starting ev 482079a0-1f01-4cfd-b99d-c0ded7af96e4 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct  1 12:16:17 np0005464891 ceph-mgr[74592]: [progress INFO root] complete: finished ev b179b1a7-f28b-41e4-ac0a-c78491d6bbdf (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct  1 12:16:17 np0005464891 ceph-mgr[74592]: [progress INFO root] Completed event b179b1a7-f28b-41e4-ac0a-c78491d6bbdf (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Oct  1 12:16:17 np0005464891 ceph-mgr[74592]: [progress INFO root] complete: finished ev 723611b6-d6cd-4ae0-98a7-de44bce359ed (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct  1 12:16:17 np0005464891 ceph-mgr[74592]: [progress INFO root] Completed event 723611b6-d6cd-4ae0-98a7-de44bce359ed (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Oct  1 12:16:17 np0005464891 ceph-mgr[74592]: [progress INFO root] complete: finished ev 50490752-e60b-44ea-b3c1-80ddb46f5be8 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct  1 12:16:17 np0005464891 ceph-mgr[74592]: [progress INFO root] Completed event 50490752-e60b-44ea-b3c1-80ddb46f5be8 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Oct  1 12:16:17 np0005464891 ceph-mgr[74592]: [progress INFO root] complete: finished ev da5a3770-7194-45ca-a18b-f6d7edfa1fbc (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct  1 12:16:17 np0005464891 ceph-mgr[74592]: [progress INFO root] Completed event da5a3770-7194-45ca-a18b-f6d7edfa1fbc (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Oct  1 12:16:17 np0005464891 ceph-mgr[74592]: [progress INFO root] complete: finished ev b15da69f-0e8f-4cbd-a02b-e7187fce23d8 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Oct  1 12:16:17 np0005464891 ceph-mgr[74592]: [progress INFO root] Completed event b15da69f-0e8f-4cbd-a02b-e7187fce23d8 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Oct  1 12:16:17 np0005464891 ceph-mgr[74592]: [progress INFO root] complete: finished ev 482079a0-1f01-4cfd-b99d-c0ded7af96e4 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 39 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=39 pruub=8.634066582s) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active pruub 69.854263306s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:17 np0005464891 ceph-mgr[74592]: [progress INFO root] Completed event 482079a0-1f01-4cfd-b99d-c0ded7af96e4 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=39 pruub=8.634066582s) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown pruub 69.854263306s@ mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.d( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.11( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.e( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.12( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.13( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.14( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.f( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.10( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.15( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.16( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.17( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.18( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.1b( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.1c( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.19( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.1a( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.1d( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.1e( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.1f( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.2( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.3( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.4( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.5( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.6( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.a( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.7( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.b( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 40 pg[4.c( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=39/40 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [2] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:17 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 12:16:17 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/1993962104' entity='client.admin' 
Oct  1 12:16:17 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:17 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:17 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:16:17 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:17 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:16:17 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct  1 12:16:17 np0005464891 podman[96865]: 2025-10-01 16:16:17.333610309 +0000 UTC m=+0.076489671 container create 1012f693aa332a37fa011930982a9ecb26b41521db76ba6d7653e5b7b079b368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_fermat, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 12:16:17 np0005464891 systemd[1]: Started libpod-conmon-1012f693aa332a37fa011930982a9ecb26b41521db76ba6d7653e5b7b079b368.scope.
Oct  1 12:16:17 np0005464891 podman[96865]: 2025-10-01 16:16:17.305676276 +0000 UTC m=+0.048555678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:17 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:17 np0005464891 podman[96865]: 2025-10-01 16:16:17.413907468 +0000 UTC m=+0.156786820 container init 1012f693aa332a37fa011930982a9ecb26b41521db76ba6d7653e5b7b079b368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_fermat, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:16:17 np0005464891 podman[96865]: 2025-10-01 16:16:17.423708246 +0000 UTC m=+0.166587568 container start 1012f693aa332a37fa011930982a9ecb26b41521db76ba6d7653e5b7b079b368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:16:17 np0005464891 podman[96865]: 2025-10-01 16:16:17.426826945 +0000 UTC m=+0.169706307 container attach 1012f693aa332a37fa011930982a9ecb26b41521db76ba6d7653e5b7b079b368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_fermat, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 12:16:17 np0005464891 magical_fermat[96899]: 167 167
Oct  1 12:16:17 np0005464891 systemd[1]: libpod-1012f693aa332a37fa011930982a9ecb26b41521db76ba6d7653e5b7b079b368.scope: Deactivated successfully.
Oct  1 12:16:17 np0005464891 podman[96865]: 2025-10-01 16:16:17.429736877 +0000 UTC m=+0.172616209 container died 1012f693aa332a37fa011930982a9ecb26b41521db76ba6d7653e5b7b079b368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 12:16:17 np0005464891 systemd[1]: var-lib-containers-storage-overlay-31cec55ec6618bb1f1befee72310ab6605050f31dc0a6e0c121b49acc98cb8fe-merged.mount: Deactivated successfully.
Oct  1 12:16:17 np0005464891 podman[96865]: 2025-10-01 16:16:17.467886749 +0000 UTC m=+0.210766081 container remove 1012f693aa332a37fa011930982a9ecb26b41521db76ba6d7653e5b7b079b368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_fermat, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:17 np0005464891 systemd[1]: libpod-conmon-1012f693aa332a37fa011930982a9ecb26b41521db76ba6d7653e5b7b079b368.scope: Deactivated successfully.
Oct  1 12:16:17 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 12:16:17 np0005464891 ceph-mgr[74592]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Oct  1 12:16:17 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct  1 12:16:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct  1 12:16:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:17 np0005464891 sweet_varahamihira[96821]: Scheduled rgw.rgw update...
Oct  1 12:16:17 np0005464891 systemd[1]: libpod-9fc97b63dc4b5520a7b688d2ecaa611e096d3ac3fd243f535ea268e6104e1b3f.scope: Deactivated successfully.
Oct  1 12:16:17 np0005464891 podman[96789]: 2025-10-01 16:16:17.61238063 +0000 UTC m=+0.745632730 container died 9fc97b63dc4b5520a7b688d2ecaa611e096d3ac3fd243f535ea268e6104e1b3f (image=quay.io/ceph/ceph:v18, name=sweet_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Oct  1 12:16:17 np0005464891 systemd[1]: var-lib-containers-storage-overlay-19010585954d80c6cd684241a9fa9db368095f220a595430b4bac7a16c851cb5-merged.mount: Deactivated successfully.
Oct  1 12:16:17 np0005464891 podman[96923]: 2025-10-01 16:16:17.654046743 +0000 UTC m=+0.063252397 container create b26e3da8fb895622b45ab24cbd5bd8495a960da15e4604f7a864050fc69036d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:17 np0005464891 podman[96789]: 2025-10-01 16:16:17.658954421 +0000 UTC m=+0.792206531 container remove 9fc97b63dc4b5520a7b688d2ecaa611e096d3ac3fd243f535ea268e6104e1b3f (image=quay.io/ceph/ceph:v18, name=sweet_varahamihira, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:16:17 np0005464891 systemd[1]: libpod-conmon-9fc97b63dc4b5520a7b688d2ecaa611e096d3ac3fd243f535ea268e6104e1b3f.scope: Deactivated successfully.
Oct  1 12:16:17 np0005464891 systemd[1]: Started libpod-conmon-b26e3da8fb895622b45ab24cbd5bd8495a960da15e4604f7a864050fc69036d0.scope.
Oct  1 12:16:17 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:17 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da9634f46d2fbcf61feacf4a2c6939b0c76cd5870f24bc17f385135ddbde23b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:17 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da9634f46d2fbcf61feacf4a2c6939b0c76cd5870f24bc17f385135ddbde23b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:17 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da9634f46d2fbcf61feacf4a2c6939b0c76cd5870f24bc17f385135ddbde23b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:17 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da9634f46d2fbcf61feacf4a2c6939b0c76cd5870f24bc17f385135ddbde23b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:17 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da9634f46d2fbcf61feacf4a2c6939b0c76cd5870f24bc17f385135ddbde23b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:17 np0005464891 podman[96923]: 2025-10-01 16:16:17.627487429 +0000 UTC m=+0.036693103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:17 np0005464891 podman[96923]: 2025-10-01 16:16:17.726069726 +0000 UTC m=+0.135275430 container init b26e3da8fb895622b45ab24cbd5bd8495a960da15e4604f7a864050fc69036d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_keldysh, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:16:17 np0005464891 podman[96923]: 2025-10-01 16:16:17.736113941 +0000 UTC m=+0.145319605 container start b26e3da8fb895622b45ab24cbd5bd8495a960da15e4604f7a864050fc69036d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 12:16:17 np0005464891 podman[96923]: 2025-10-01 16:16:17.738845049 +0000 UTC m=+0.148050723 container attach b26e3da8fb895622b45ab24cbd5bd8495a960da15e4604f7a864050fc69036d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_keldysh, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:16:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v94: 131 pgs: 32 peering, 99 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:16:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  1 12:16:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 12:16:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  1 12:16:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 12:16:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Oct  1 12:16:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 12:16:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 12:16:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Oct  1 12:16:18 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41 pruub=11.684944153s) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active pruub 73.912246704s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41 pruub=11.684944153s) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown pruub 73.912246704s@ mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.18( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.17( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.14( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.12( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.13( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.11( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.15( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.f( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.d( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.10( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.c( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.2( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.e( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.0( empty local-lis/les=39/41 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.1( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.19( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.9( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.1a( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.5( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.16( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.a( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.6( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.b( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.3( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.8( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.1d( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.7( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.1f( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.1c( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.1b( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.4( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 41 pg[4.1e( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [0] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:18 np0005464891 ceph-mon[74303]: Saving service rgw.rgw spec with placement compute-0
Oct  1 12:16:18 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:18 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 12:16:18 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 12:16:18 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 12:16:18 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 12:16:18 np0005464891 python3[97042]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 12:16:18 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 41 pg[7.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=41 pruub=13.446842194s) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active pruub 70.681152344s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:18 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 41 pg[7.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=41 pruub=13.446842194s) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown pruub 70.681152344s@ mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:18 np0005464891 pensive_keldysh[96956]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:16:18 np0005464891 pensive_keldysh[96956]: --> relative data size: 1.0
Oct  1 12:16:18 np0005464891 pensive_keldysh[96956]: --> All data devices are unavailable
Oct  1 12:16:18 np0005464891 systemd[1]: libpod-b26e3da8fb895622b45ab24cbd5bd8495a960da15e4604f7a864050fc69036d0.scope: Deactivated successfully.
Oct  1 12:16:18 np0005464891 systemd[1]: libpod-b26e3da8fb895622b45ab24cbd5bd8495a960da15e4604f7a864050fc69036d0.scope: Consumed 1.052s CPU time.
Oct  1 12:16:18 np0005464891 podman[96923]: 2025-10-01 16:16:18.875361012 +0000 UTC m=+1.284566686 container died b26e3da8fb895622b45ab24cbd5bd8495a960da15e4604f7a864050fc69036d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 12:16:18 np0005464891 systemd[1]: var-lib-containers-storage-overlay-7da9634f46d2fbcf61feacf4a2c6939b0c76cd5870f24bc17f385135ddbde23b-merged.mount: Deactivated successfully.
Oct  1 12:16:18 np0005464891 systemd[75922]: Starting Mark boot as successful...
Oct  1 12:16:18 np0005464891 systemd[75922]: Finished Mark boot as successful.
Oct  1 12:16:18 np0005464891 podman[96923]: 2025-10-01 16:16:18.950864614 +0000 UTC m=+1.360070298 container remove b26e3da8fb895622b45ab24cbd5bd8495a960da15e4604f7a864050fc69036d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 12:16:18 np0005464891 systemd[1]: libpod-conmon-b26e3da8fb895622b45ab24cbd5bd8495a960da15e4604f7a864050fc69036d0.scope: Deactivated successfully.
Oct  1 12:16:19 np0005464891 python3[97131]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759335378.3158865-33368-265971766144391/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:16:19 np0005464891 ceph-mgr[74592]: [progress INFO root] Writing back 9 completed events
Oct  1 12:16:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  1 12:16:19 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Oct  1 12:16:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Oct  1 12:16:19 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.1a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.15( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.14( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.17( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.16( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.11( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.10( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.13( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.12( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.2( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.3( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.1( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.1b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.6( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.1d( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.1e( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.12( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.10( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.18( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.7( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.8( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.19( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.4( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.9( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.5( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.1e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.1f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.1c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.16( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.1d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.14( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.17( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.b( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.7( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.d( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.19( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.17( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.1d( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.10( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.6( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.0( empty local-lis/les=41/42 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.1b( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.b( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.1f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.1c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 42 pg[6.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [0] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.1e( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.12( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.16( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.b( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.10( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.14( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.0( empty local-lis/les=41/42 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.7( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.d( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.19( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.17( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [1] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Oct  1 12:16:19 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Oct  1 12:16:19 np0005464891 python3[97294]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:16:19 np0005464891 podman[97316]: 2025-10-01 16:16:19.624257484 +0000 UTC m=+0.074056622 container create 7bdf6b8c1a4fc6243edf5c16be6ea60c6b07dc3f66e6eb083615023ed3739a79 (image=quay.io/ceph/ceph:v18, name=trusting_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:19 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:19 np0005464891 systemd[1]: Started libpod-conmon-7bdf6b8c1a4fc6243edf5c16be6ea60c6b07dc3f66e6eb083615023ed3739a79.scope.
Oct  1 12:16:19 np0005464891 podman[97316]: 2025-10-01 16:16:19.592185704 +0000 UTC m=+0.041984892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:16:19 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:19 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9182db7b7c2938bc5672d73f719822e937af0bf6eb728c1f6c332454fa425c10/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:19 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9182db7b7c2938bc5672d73f719822e937af0bf6eb728c1f6c332454fa425c10/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:19 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9182db7b7c2938bc5672d73f719822e937af0bf6eb728c1f6c332454fa425c10/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:19 np0005464891 podman[97316]: 2025-10-01 16:16:19.730515009 +0000 UTC m=+0.180314197 container init 7bdf6b8c1a4fc6243edf5c16be6ea60c6b07dc3f66e6eb083615023ed3739a79 (image=quay.io/ceph/ceph:v18, name=trusting_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 12:16:19 np0005464891 podman[97316]: 2025-10-01 16:16:19.743612221 +0000 UTC m=+0.193411349 container start 7bdf6b8c1a4fc6243edf5c16be6ea60c6b07dc3f66e6eb083615023ed3739a79 (image=quay.io/ceph/ceph:v18, name=trusting_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  1 12:16:19 np0005464891 podman[97316]: 2025-10-01 16:16:19.747608574 +0000 UTC m=+0.197407702 container attach 7bdf6b8c1a4fc6243edf5c16be6ea60c6b07dc3f66e6eb083615023ed3739a79 (image=quay.io/ceph/ceph:v18, name=trusting_mcnulty, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:19 np0005464891 podman[97356]: 2025-10-01 16:16:19.863182524 +0000 UTC m=+0.072589091 container create a10179a5cc960765b9bb5a587b9b663c23f072487ab2386204a1d5e8b099ee2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 12:16:19 np0005464891 systemd[1]: Started libpod-conmon-a10179a5cc960765b9bb5a587b9b663c23f072487ab2386204a1d5e8b099ee2c.scope.
Oct  1 12:16:19 np0005464891 podman[97356]: 2025-10-01 16:16:19.827767429 +0000 UTC m=+0.037174066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:19 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v97: 193 pgs: 62 unknown, 32 peering, 99 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:16:20 np0005464891 podman[97356]: 2025-10-01 16:16:20.021383604 +0000 UTC m=+0.230790201 container init a10179a5cc960765b9bb5a587b9b663c23f072487ab2386204a1d5e8b099ee2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 12:16:20 np0005464891 podman[97356]: 2025-10-01 16:16:20.033206829 +0000 UTC m=+0.242613396 container start a10179a5cc960765b9bb5a587b9b663c23f072487ab2386204a1d5e8b099ee2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wilbur, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 12:16:20 np0005464891 podman[97356]: 2025-10-01 16:16:20.037302365 +0000 UTC m=+0.246708962 container attach a10179a5cc960765b9bb5a587b9b663c23f072487ab2386204a1d5e8b099ee2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 12:16:20 np0005464891 distracted_wilbur[97372]: 167 167
Oct  1 12:16:20 np0005464891 systemd[1]: libpod-a10179a5cc960765b9bb5a587b9b663c23f072487ab2386204a1d5e8b099ee2c.scope: Deactivated successfully.
Oct  1 12:16:20 np0005464891 podman[97356]: 2025-10-01 16:16:20.040592589 +0000 UTC m=+0.249999176 container died a10179a5cc960765b9bb5a587b9b663c23f072487ab2386204a1d5e8b099ee2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:16:20 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ddf67132fca74f8859019d8e736c88d735b74beada954a90ee05f9aee2ad8f68-merged.mount: Deactivated successfully.
Oct  1 12:16:20 np0005464891 podman[97356]: 2025-10-01 16:16:20.094847918 +0000 UTC m=+0.304254475 container remove a10179a5cc960765b9bb5a587b9b663c23f072487ab2386204a1d5e8b099ee2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:20 np0005464891 systemd[1]: libpod-conmon-a10179a5cc960765b9bb5a587b9b663c23f072487ab2386204a1d5e8b099ee2c.scope: Deactivated successfully.
Oct  1 12:16:20 np0005464891 podman[97414]: 2025-10-01 16:16:20.288538905 +0000 UTC m=+0.043021872 container create 2a1797f03edd3d1a2c888dc9ba3c750b635d50cbc13b0040d91dd0ce171c4abf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 12:16:20 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Oct  1 12:16:20 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Oct  1 12:16:20 np0005464891 systemd[1]: Started libpod-conmon-2a1797f03edd3d1a2c888dc9ba3c750b635d50cbc13b0040d91dd0ce171c4abf.scope.
Oct  1 12:16:20 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 12:16:20 np0005464891 ceph-mgr[74592]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct  1 12:16:20 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0[74299]: 2025-10-01T16:16:20.349+0000 7f9fe551c640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).mds e2 new map
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-01T16:16:20.350399+0000#012modified#0112025-10-01T16:16:20.350481+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Oct  1 12:16:20 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:20 np0005464891 ceph-mgr[74592]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct  1 12:16:20 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct  1 12:16:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457f2136c760e66e0a8c5a3bd7a5ed9b56aefc91d1000e47c802ce35e6799295/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:20 np0005464891 podman[97414]: 2025-10-01 16:16:20.271263295 +0000 UTC m=+0.025746282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457f2136c760e66e0a8c5a3bd7a5ed9b56aefc91d1000e47c802ce35e6799295/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457f2136c760e66e0a8c5a3bd7a5ed9b56aefc91d1000e47c802ce35e6799295/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457f2136c760e66e0a8c5a3bd7a5ed9b56aefc91d1000e47c802ce35e6799295/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:20 np0005464891 ceph-mgr[74592]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct  1 12:16:20 np0005464891 podman[97414]: 2025-10-01 16:16:20.380218997 +0000 UTC m=+0.134702024 container init 2a1797f03edd3d1a2c888dc9ba3c750b635d50cbc13b0040d91dd0ce171c4abf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:16:20 np0005464891 podman[97414]: 2025-10-01 16:16:20.395932762 +0000 UTC m=+0.150415769 container start 2a1797f03edd3d1a2c888dc9ba3c750b635d50cbc13b0040d91dd0ce171c4abf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:16:20 np0005464891 podman[97414]: 2025-10-01 16:16:20.400904203 +0000 UTC m=+0.155387220 container attach 2a1797f03edd3d1a2c888dc9ba3c750b635d50cbc13b0040d91dd0ce171c4abf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:20 np0005464891 systemd[1]: libpod-7bdf6b8c1a4fc6243edf5c16be6ea60c6b07dc3f66e6eb083615023ed3739a79.scope: Deactivated successfully.
Oct  1 12:16:20 np0005464891 podman[97316]: 2025-10-01 16:16:20.405086313 +0000 UTC m=+0.854885411 container died 7bdf6b8c1a4fc6243edf5c16be6ea60c6b07dc3f66e6eb083615023ed3739a79 (image=quay.io/ceph/ceph:v18, name=trusting_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:16:20 np0005464891 systemd[1]: var-lib-containers-storage-overlay-9182db7b7c2938bc5672d73f719822e937af0bf6eb728c1f6c332454fa425c10-merged.mount: Deactivated successfully.
Oct  1 12:16:20 np0005464891 podman[97316]: 2025-10-01 16:16:20.445425237 +0000 UTC m=+0.895224335 container remove 7bdf6b8c1a4fc6243edf5c16be6ea60c6b07dc3f66e6eb083615023ed3739a79 (image=quay.io/ceph/ceph:v18, name=trusting_mcnulty, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:16:20 np0005464891 systemd[1]: libpod-conmon-7bdf6b8c1a4fc6243edf5c16be6ea60c6b07dc3f66e6eb083615023ed3739a79.scope: Deactivated successfully.
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:20 np0005464891 python3[97473]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:16:20 np0005464891 podman[97474]: 2025-10-01 16:16:20.849645048 +0000 UTC m=+0.050983608 container create d0263ff5bc7c765267cf74feffc27ccf170d30c5adb97d6145877bac87d0f3c6 (image=quay.io/ceph/ceph:v18, name=priceless_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:16:20 np0005464891 systemd[1]: Started libpod-conmon-d0263ff5bc7c765267cf74feffc27ccf170d30c5adb97d6145877bac87d0f3c6.scope.
Oct  1 12:16:20 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f315f2180d426159a00df205ba521fd7c5dd3a1203b52848c74575d442ae81/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f315f2180d426159a00df205ba521fd7c5dd3a1203b52848c74575d442ae81/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f315f2180d426159a00df205ba521fd7c5dd3a1203b52848c74575d442ae81/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:20 np0005464891 podman[97474]: 2025-10-01 16:16:20.831700789 +0000 UTC m=+0.033039349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:16:20 np0005464891 podman[97474]: 2025-10-01 16:16:20.926311844 +0000 UTC m=+0.127650394 container init d0263ff5bc7c765267cf74feffc27ccf170d30c5adb97d6145877bac87d0f3c6 (image=quay.io/ceph/ceph:v18, name=priceless_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  1 12:16:20 np0005464891 podman[97474]: 2025-10-01 16:16:20.932131669 +0000 UTC m=+0.133470219 container start d0263ff5bc7c765267cf74feffc27ccf170d30c5adb97d6145877bac87d0f3c6 (image=quay.io/ceph/ceph:v18, name=priceless_haibt, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 12:16:20 np0005464891 podman[97474]: 2025-10-01 16:16:20.935830284 +0000 UTC m=+0.137168864 container attach d0263ff5bc7c765267cf74feffc27ccf170d30c5adb97d6145877bac87d0f3c6 (image=quay.io/ceph/ceph:v18, name=priceless_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:16:21 np0005464891 objective_carson[97431]: {
Oct  1 12:16:21 np0005464891 objective_carson[97431]:    "0": [
Oct  1 12:16:21 np0005464891 objective_carson[97431]:        {
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "devices": [
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "/dev/loop3"
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            ],
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "lv_name": "ceph_lv0",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "lv_size": "21470642176",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "name": "ceph_lv0",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "tags": {
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.cluster_name": "ceph",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.crush_device_class": "",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.encrypted": "0",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.osd_id": "0",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.type": "block",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.vdo": "0"
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            },
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "type": "block",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "vg_name": "ceph_vg0"
Oct  1 12:16:21 np0005464891 objective_carson[97431]:        }
Oct  1 12:16:21 np0005464891 objective_carson[97431]:    ],
Oct  1 12:16:21 np0005464891 objective_carson[97431]:    "1": [
Oct  1 12:16:21 np0005464891 objective_carson[97431]:        {
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "devices": [
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "/dev/loop4"
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            ],
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "lv_name": "ceph_lv1",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "lv_size": "21470642176",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "name": "ceph_lv1",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "tags": {
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.cluster_name": "ceph",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.crush_device_class": "",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.encrypted": "0",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.osd_id": "1",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.type": "block",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.vdo": "0"
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            },
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "type": "block",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "vg_name": "ceph_vg1"
Oct  1 12:16:21 np0005464891 objective_carson[97431]:        }
Oct  1 12:16:21 np0005464891 objective_carson[97431]:    ],
Oct  1 12:16:21 np0005464891 objective_carson[97431]:    "2": [
Oct  1 12:16:21 np0005464891 objective_carson[97431]:        {
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "devices": [
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "/dev/loop5"
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            ],
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "lv_name": "ceph_lv2",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "lv_size": "21470642176",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "name": "ceph_lv2",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "tags": {
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.cluster_name": "ceph",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.crush_device_class": "",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.encrypted": "0",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.osd_id": "2",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.type": "block",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:                "ceph.vdo": "0"
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            },
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "type": "block",
Oct  1 12:16:21 np0005464891 objective_carson[97431]:            "vg_name": "ceph_vg2"
Oct  1 12:16:21 np0005464891 objective_carson[97431]:        }
Oct  1 12:16:21 np0005464891 objective_carson[97431]:    ]
Oct  1 12:16:21 np0005464891 objective_carson[97431]: }
Oct  1 12:16:21 np0005464891 systemd[1]: libpod-2a1797f03edd3d1a2c888dc9ba3c750b635d50cbc13b0040d91dd0ce171c4abf.scope: Deactivated successfully.
Oct  1 12:16:21 np0005464891 podman[97414]: 2025-10-01 16:16:21.211017993 +0000 UTC m=+0.965500960 container died 2a1797f03edd3d1a2c888dc9ba3c750b635d50cbc13b0040d91dd0ce171c4abf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carson, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:16:21 np0005464891 systemd[1]: var-lib-containers-storage-overlay-457f2136c760e66e0a8c5a3bd7a5ed9b56aefc91d1000e47c802ce35e6799295-merged.mount: Deactivated successfully.
Oct  1 12:16:21 np0005464891 podman[97414]: 2025-10-01 16:16:21.265830099 +0000 UTC m=+1.020313066 container remove 2a1797f03edd3d1a2c888dc9ba3c750b635d50cbc13b0040d91dd0ce171c4abf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:16:21 np0005464891 systemd[1]: libpod-conmon-2a1797f03edd3d1a2c888dc9ba3c750b635d50cbc13b0040d91dd0ce171c4abf.scope: Deactivated successfully.
Oct  1 12:16:21 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 12:16:21 np0005464891 ceph-mgr[74592]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct  1 12:16:21 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct  1 12:16:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct  1 12:16:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:21 np0005464891 priceless_haibt[97490]: Scheduled mds.cephfs update...
Oct  1 12:16:21 np0005464891 systemd[1]: libpod-d0263ff5bc7c765267cf74feffc27ccf170d30c5adb97d6145877bac87d0f3c6.scope: Deactivated successfully.
Oct  1 12:16:21 np0005464891 podman[97474]: 2025-10-01 16:16:21.565268246 +0000 UTC m=+0.766606786 container died d0263ff5bc7c765267cf74feffc27ccf170d30c5adb97d6145877bac87d0f3c6 (image=quay.io/ceph/ceph:v18, name=priceless_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 12:16:21 np0005464891 systemd[1]: var-lib-containers-storage-overlay-90f315f2180d426159a00df205ba521fd7c5dd3a1203b52848c74575d442ae81-merged.mount: Deactivated successfully.
Oct  1 12:16:21 np0005464891 podman[97474]: 2025-10-01 16:16:21.603891733 +0000 UTC m=+0.805230273 container remove d0263ff5bc7c765267cf74feffc27ccf170d30c5adb97d6145877bac87d0f3c6 (image=quay.io/ceph/ceph:v18, name=priceless_haibt, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:21 np0005464891 systemd[1]: libpod-conmon-d0263ff5bc7c765267cf74feffc27ccf170d30c5adb97d6145877bac87d0f3c6.scope: Deactivated successfully.
Oct  1 12:16:21 np0005464891 ceph-mon[74303]: Saving service mds.cephfs spec with placement compute-0
Oct  1 12:16:21 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v99: 193 pgs: 62 unknown, 32 peering, 99 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:16:21 np0005464891 podman[97685]: 2025-10-01 16:16:21.979947504 +0000 UTC m=+0.044983767 container create 3bb2f33ddb1bf20433a724446b3983280606068da447b7c851edb48611ee0f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_feynman, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  1 12:16:22 np0005464891 systemd[1]: Started libpod-conmon-3bb2f33ddb1bf20433a724446b3983280606068da447b7c851edb48611ee0f8b.scope.
Oct  1 12:16:22 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:22 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Oct  1 12:16:22 np0005464891 podman[97685]: 2025-10-01 16:16:21.959883335 +0000 UTC m=+0.024919708 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:22 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Oct  1 12:16:22 np0005464891 podman[97685]: 2025-10-01 16:16:22.066901722 +0000 UTC m=+0.131938075 container init 3bb2f33ddb1bf20433a724446b3983280606068da447b7c851edb48611ee0f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_feynman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 12:16:22 np0005464891 podman[97685]: 2025-10-01 16:16:22.082264357 +0000 UTC m=+0.147300620 container start 3bb2f33ddb1bf20433a724446b3983280606068da447b7c851edb48611ee0f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_feynman, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:22 np0005464891 pensive_feynman[97746]: 167 167
Oct  1 12:16:22 np0005464891 podman[97685]: 2025-10-01 16:16:22.086101047 +0000 UTC m=+0.151137350 container attach 3bb2f33ddb1bf20433a724446b3983280606068da447b7c851edb48611ee0f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_feynman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 12:16:22 np0005464891 systemd[1]: libpod-3bb2f33ddb1bf20433a724446b3983280606068da447b7c851edb48611ee0f8b.scope: Deactivated successfully.
Oct  1 12:16:22 np0005464891 podman[97685]: 2025-10-01 16:16:22.087255599 +0000 UTC m=+0.152291902 container died 3bb2f33ddb1bf20433a724446b3983280606068da447b7c851edb48611ee0f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:16:22 np0005464891 systemd[1]: var-lib-containers-storage-overlay-c76ebc953ad6356d550bb66d99f2a90a3a722de44a80bd67860dee20b51573cc-merged.mount: Deactivated successfully.
Oct  1 12:16:22 np0005464891 podman[97685]: 2025-10-01 16:16:22.152395128 +0000 UTC m=+0.217431391 container remove 3bb2f33ddb1bf20433a724446b3983280606068da447b7c851edb48611ee0f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:16:22 np0005464891 systemd[1]: libpod-conmon-3bb2f33ddb1bf20433a724446b3983280606068da447b7c851edb48611ee0f8b.scope: Deactivated successfully.
Oct  1 12:16:22 np0005464891 python3[97794]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 12:16:22 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Oct  1 12:16:22 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Oct  1 12:16:22 np0005464891 podman[97803]: 2025-10-01 16:16:22.428845303 +0000 UTC m=+0.104914898 container create 39f31f0f0773da6a10121bc12f5b3a74b41daac6bc8152a8beb34ed41911835e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_brown, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:16:22 np0005464891 podman[97803]: 2025-10-01 16:16:22.348931946 +0000 UTC m=+0.025001561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:22 np0005464891 systemd[1]: Started libpod-conmon-39f31f0f0773da6a10121bc12f5b3a74b41daac6bc8152a8beb34ed41911835e.scope.
Oct  1 12:16:22 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:22 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0f2f8d88b66f967cbb22ea45db294645f0a93b78d5b86291035417f67bd4a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:22 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0f2f8d88b66f967cbb22ea45db294645f0a93b78d5b86291035417f67bd4a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:22 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0f2f8d88b66f967cbb22ea45db294645f0a93b78d5b86291035417f67bd4a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:22 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0f2f8d88b66f967cbb22ea45db294645f0a93b78d5b86291035417f67bd4a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:22 np0005464891 podman[97803]: 2025-10-01 16:16:22.593718472 +0000 UTC m=+0.269788157 container init 39f31f0f0773da6a10121bc12f5b3a74b41daac6bc8152a8beb34ed41911835e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_brown, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 12:16:22 np0005464891 podman[97803]: 2025-10-01 16:16:22.601857643 +0000 UTC m=+0.277927268 container start 39f31f0f0773da6a10121bc12f5b3a74b41daac6bc8152a8beb34ed41911835e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Oct  1 12:16:22 np0005464891 podman[97803]: 2025-10-01 16:16:22.605545708 +0000 UTC m=+0.281615303 container attach 39f31f0f0773da6a10121bc12f5b3a74b41daac6bc8152a8beb34ed41911835e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 12:16:22 np0005464891 ceph-mon[74303]: Saving service mds.cephfs spec with placement compute-0
Oct  1 12:16:22 np0005464891 python3[97892]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759335381.9635804-33398-209220662440634/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=481d67d46ef630aeafdb22315b77310ef59269d0 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:16:23 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Oct  1 12:16:23 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Oct  1 12:16:23 np0005464891 python3[97946]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:16:23 np0005464891 podman[97947]: 2025-10-01 16:16:23.279686259 +0000 UTC m=+0.059070158 container create 24634a8273560d7716733ca3b139a7bc5fedfd719061d5f74a98fb744608641b (image=quay.io/ceph/ceph:v18, name=eloquent_buck, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 12:16:23 np0005464891 systemd[1]: Started libpod-conmon-24634a8273560d7716733ca3b139a7bc5fedfd719061d5f74a98fb744608641b.scope.
Oct  1 12:16:23 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:23 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d120906bbb2b6580ffc62d01132a21b44e8eaf13ff27dbe7bc36a06acc402283/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:23 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d120906bbb2b6580ffc62d01132a21b44e8eaf13ff27dbe7bc36a06acc402283/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:23 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Oct  1 12:16:23 np0005464891 podman[97947]: 2025-10-01 16:16:23.260910326 +0000 UTC m=+0.040294205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:16:23 np0005464891 podman[97947]: 2025-10-01 16:16:23.358700431 +0000 UTC m=+0.138084380 container init 24634a8273560d7716733ca3b139a7bc5fedfd719061d5f74a98fb744608641b (image=quay.io/ceph/ceph:v18, name=eloquent_buck, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  1 12:16:23 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Oct  1 12:16:23 np0005464891 podman[97947]: 2025-10-01 16:16:23.371178475 +0000 UTC m=+0.150562334 container start 24634a8273560d7716733ca3b139a7bc5fedfd719061d5f74a98fb744608641b (image=quay.io/ceph/ceph:v18, name=eloquent_buck, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:16:23 np0005464891 podman[97947]: 2025-10-01 16:16:23.374457208 +0000 UTC m=+0.153841067 container attach 24634a8273560d7716733ca3b139a7bc5fedfd719061d5f74a98fb744608641b (image=quay.io/ceph/ceph:v18, name=eloquent_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 12:16:23 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Oct  1 12:16:23 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Oct  1 12:16:23 np0005464891 thirsty_brown[97891]: {
Oct  1 12:16:23 np0005464891 thirsty_brown[97891]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:16:23 np0005464891 thirsty_brown[97891]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:16:23 np0005464891 thirsty_brown[97891]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:16:23 np0005464891 thirsty_brown[97891]:        "osd_id": 2,
Oct  1 12:16:23 np0005464891 thirsty_brown[97891]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:16:23 np0005464891 thirsty_brown[97891]:        "type": "bluestore"
Oct  1 12:16:23 np0005464891 thirsty_brown[97891]:    },
Oct  1 12:16:23 np0005464891 thirsty_brown[97891]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:16:23 np0005464891 thirsty_brown[97891]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:16:23 np0005464891 thirsty_brown[97891]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:16:23 np0005464891 thirsty_brown[97891]:        "osd_id": 0,
Oct  1 12:16:23 np0005464891 thirsty_brown[97891]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:16:23 np0005464891 thirsty_brown[97891]:        "type": "bluestore"
Oct  1 12:16:23 np0005464891 thirsty_brown[97891]:    },
Oct  1 12:16:23 np0005464891 thirsty_brown[97891]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:16:23 np0005464891 thirsty_brown[97891]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:16:23 np0005464891 thirsty_brown[97891]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:16:23 np0005464891 thirsty_brown[97891]:        "osd_id": 1,
Oct  1 12:16:23 np0005464891 thirsty_brown[97891]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:16:23 np0005464891 thirsty_brown[97891]:        "type": "bluestore"
Oct  1 12:16:23 np0005464891 thirsty_brown[97891]:    }
Oct  1 12:16:23 np0005464891 thirsty_brown[97891]: }
Oct  1 12:16:23 np0005464891 systemd[1]: libpod-39f31f0f0773da6a10121bc12f5b3a74b41daac6bc8152a8beb34ed41911835e.scope: Deactivated successfully.
Oct  1 12:16:23 np0005464891 systemd[1]: libpod-39f31f0f0773da6a10121bc12f5b3a74b41daac6bc8152a8beb34ed41911835e.scope: Consumed 1.087s CPU time.
Oct  1 12:16:23 np0005464891 podman[97803]: 2025-10-01 16:16:23.704348229 +0000 UTC m=+1.380417934 container died 39f31f0f0773da6a10121bc12f5b3a74b41daac6bc8152a8beb34ed41911835e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_brown, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:16:23 np0005464891 systemd[1]: var-lib-containers-storage-overlay-0d0f2f8d88b66f967cbb22ea45db294645f0a93b78d5b86291035417f67bd4a5-merged.mount: Deactivated successfully.
Oct  1 12:16:23 np0005464891 podman[97803]: 2025-10-01 16:16:23.803460313 +0000 UTC m=+1.479529938 container remove 39f31f0f0773da6a10121bc12f5b3a74b41daac6bc8152a8beb34ed41911835e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_brown, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 12:16:23 np0005464891 systemd[1]: libpod-conmon-39f31f0f0773da6a10121bc12f5b3a74b41daac6bc8152a8beb34ed41911835e.scope: Deactivated successfully.
Oct  1 12:16:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:16:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:16:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v100: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:16:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  1 12:16:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 12:16:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  1 12:16:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 12:16:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  1 12:16:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 12:16:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  1 12:16:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 12:16:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  1 12:16:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 12:16:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  1 12:16:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 12:16:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Oct  1 12:16:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/818966306' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct  1 12:16:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/818966306' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct  1 12:16:24 np0005464891 systemd[1]: libpod-24634a8273560d7716733ca3b139a7bc5fedfd719061d5f74a98fb744608641b.scope: Deactivated successfully.
Oct  1 12:16:24 np0005464891 podman[97947]: 2025-10-01 16:16:24.027398578 +0000 UTC m=+0.806782437 container died 24634a8273560d7716733ca3b139a7bc5fedfd719061d5f74a98fb744608641b (image=quay.io/ceph/ceph:v18, name=eloquent_buck, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 12:16:24 np0005464891 systemd[1]: var-lib-containers-storage-overlay-d120906bbb2b6580ffc62d01132a21b44e8eaf13ff27dbe7bc36a06acc402283-merged.mount: Deactivated successfully.
Oct  1 12:16:24 np0005464891 podman[97947]: 2025-10-01 16:16:24.069347608 +0000 UTC m=+0.848731467 container remove 24634a8273560d7716733ca3b139a7bc5fedfd719061d5f74a98fb744608641b (image=quay.io/ceph/ceph:v18, name=eloquent_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 12:16:24 np0005464891 systemd[1]: libpod-conmon-24634a8273560d7716733ca3b139a7bc5fedfd719061d5f74a98fb744608641b.scope: Deactivated successfully.
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 2.2 deep-scrub starts
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 2.2 deep-scrub ok
Oct  1 12:16:24 np0005464891 python3[98255]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:16:24 np0005464891 podman[98289]: 2025-10-01 16:16:24.870421521 +0000 UTC m=+0.055731622 container create 227dd9e31129556139d72620915cd2c51f3763005f18d48ad4c2a6a06b55f4b2 (image=quay.io/ceph/ceph:v18, name=brave_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 12:16:24 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:24 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:24 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 12:16:24 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 12:16:24 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 12:16:24 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 12:16:24 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 12:16:24 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 12:16:24 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/818966306' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct  1 12:16:24 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/818966306' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct  1 12:16:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Oct  1 12:16:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 12:16:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 12:16:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 12:16:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 12:16:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 12:16:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 12:16:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Oct  1 12:16:24 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.374958038s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.231430054s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.374900818s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.231430054s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.383758545s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 79.240386963s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.383596420s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 79.240257263s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.383705139s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.240386963s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.379199028s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.236045837s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.379170418s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.236045837s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.383419991s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 79.240325928s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.379181862s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.236137390s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.379157066s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.236137390s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.383358955s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.240325928s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.389803886s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 79.246879578s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.389783859s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.246879578s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.383541107s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.240257263s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.378816605s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.236114502s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.378780365s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.236114502s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.378565788s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.236152649s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.378633499s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.236251831s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.389328957s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 79.246994019s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.378515244s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.236152649s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.378546715s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.236251831s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.389142990s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.246994019s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.389134407s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 79.247177124s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.378237724s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.236305237s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.389069557s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 79.247161865s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.378214836s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.236305237s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.389024734s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.247161865s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.389046669s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 79.247360229s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.377906799s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.236236572s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.389020920s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.247360229s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.390163422s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 79.248535156s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.377872467s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.236236572s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.390143394s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.248535156s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.389102936s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.247177124s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.388869286s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 79.247367859s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.388843536s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.247367859s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.377726555s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.236282349s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.377752304s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.236335754s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.377696991s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.236282349s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.377721786s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.236335754s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.389205933s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 79.247894287s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.389180183s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.247894287s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.378380775s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.237159729s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.388632774s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 79.247413635s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.388580322s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.247413635s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.377467155s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.236366272s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.378266335s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.236228943s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.378328323s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.237159729s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.388974190s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 79.247985840s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.377245903s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.236228943s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.377326012s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.236366272s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.377302170s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.236366272s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.389014244s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 79.248222351s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.377236366s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.236434937s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.388982773s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.248222351s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.377104759s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.236404419s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.377076149s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.236404419s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.377078056s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.236442566s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.377195358s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.236434937s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.388927460s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 79.248580933s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.377437592s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.236366272s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.388879776s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.248580933s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.377059937s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.236442566s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.376565933s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.236534119s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.376544952s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.236534119s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.388287544s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 79.248298645s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.376538277s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.236549377s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.376500130s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.236549377s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.376481056s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 78.236602783s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.387873650s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.247985840s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=39/41 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.376464844s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.236602783s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.388112068s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 79.248352051s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.388257027s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.248298645s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.388084412s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.248352051s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.388215065s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 79.248542786s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.388185501s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.248542786s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.387853622s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 79.248504639s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.387830734s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.248504639s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[6.8( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[6.14( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[6.15( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[6.11( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[6.13( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[6.1e( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[6.1f( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.353844643s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 66.117431641s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.342955589s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 73.106582642s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.358590126s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 66.122230530s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.342924118s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.106582642s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.358565331s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.122230530s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[6.c( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.349176407s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 73.112998962s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.349151611s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.112998962s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.342733383s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 73.106597900s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.342709541s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.106597900s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.348748207s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 73.112800598s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.348720551s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.112800598s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.348629951s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 73.112754822s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.348606110s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.112754822s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.358191490s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 66.122398376s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.358159065s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.122398376s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.358109474s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 66.122413635s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.348576546s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 73.112892151s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.358083725s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.122413635s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.348543167s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.112892151s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.358076096s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 66.122467041s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.358055115s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.122467041s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.348344803s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 73.112800598s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.357947350s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 66.122428894s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.348303795s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.112800598s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.357924461s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.122428894s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.357890129s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 66.122444153s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.357865334s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.122444153s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.348385811s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 73.113006592s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.357840538s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 66.122467041s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.348365784s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.113006592s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.357815742s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.122467041s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.348196030s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 73.113006592s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.357720375s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 66.122558594s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.348170280s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.113006592s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.357695580s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.122558594s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.348122597s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 73.113037109s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.348083496s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.113037109s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.357617378s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 66.122650146s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.347954750s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 73.113037109s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.357593536s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.122650146s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.347931862s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.113037109s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.347883224s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 73.113105774s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.347858429s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.113105774s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.357394218s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 66.122695923s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.357368469s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.122695923s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.347731590s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 73.113143921s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.357258797s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 66.122688293s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.347708702s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.113143921s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.357237816s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.122688293s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.347566605s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 73.113105774s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.347542763s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.113105774s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.347563744s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 73.113143921s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.357107162s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 66.122718811s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.347491264s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.113143921s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.357064247s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.122718811s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.357032776s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 66.122726440s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.357008934s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.122726440s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.347300529s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 73.113105774s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.356953621s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 66.122772217s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.347319603s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 73.113159180s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.347274780s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.113105774s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.356933594s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.122772217s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.347292900s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.113159180s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.356807709s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 66.122749329s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.347239494s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 73.113220215s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.356778145s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.122749329s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.347215652s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.113220215s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.347200394s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 73.113220215s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.347174644s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.113220215s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.347120285s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 73.113220215s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.356720924s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 66.122848511s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.347098351s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.113220215s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.356696129s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.122848511s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 systemd[1]: Started libpod-conmon-227dd9e31129556139d72620915cd2c51f3763005f18d48ad4c2a6a06b55f4b2.scope.
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.347023964s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 73.113258362s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.356530190s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 66.122894287s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.356509209s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.122894287s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.346796989s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 73.113288879s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.346776009s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.113288879s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.356345177s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 66.122924805s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.356332779s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.122924805s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.346243858s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 73.113227844s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.346194267s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.113227844s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=37/39 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=15.346082687s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.113258362s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.355640411s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 66.122879028s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.355595589s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.122879028s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.349695206s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.117431641s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[6.4( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[6.17( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[6.1d( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[6.1c( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.369619370s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 73.927925110s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.369585991s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.927925110s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 podman[98291]: 2025-10-01 16:16:24.930506516 +0000 UTC m=+0.098878437 container exec 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.336548805s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.895233154s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.336527824s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.895233154s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.336569786s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.895271301s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.336531639s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.895271301s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.368700027s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 73.927505493s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.368684769s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.927505493s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.336275101s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.895164490s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.368680954s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 73.927589417s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.368658066s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.927589417s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.336234093s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.895164490s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.335927963s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.894943237s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.335912704s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.894943237s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 podman[98289]: 2025-10-01 16:16:24.839271477 +0000 UTC m=+0.024581618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.335909843s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.894950867s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.335885048s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.894950867s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.368894577s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 73.928024292s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.368879318s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.928024292s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.335717201s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.894927979s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.335698128s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.894927979s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.335612297s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.894905090s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.335595131s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.894905090s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.368738174s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 73.928062439s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.368721008s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.928062439s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.368722916s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 73.928153992s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.335442543s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.894889832s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.335425377s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.894889832s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.368634224s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 73.928146362s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.368616104s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.928146362s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.368588448s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 73.928176880s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.368571281s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.928176880s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.368639946s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 73.928291321s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.336247444s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.895156860s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.368514061s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 73.928215027s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.368433952s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.928153992s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.335432053s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.895156860s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.368477821s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.928215027s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.368623734s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.928291321s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[2.1b( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[2.17( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.334081650s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.894798279s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.334045410s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.894798279s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[5.11( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.332931519s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.894836426s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.332911491s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.894836426s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.332806587s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.894805908s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.366438866s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 73.928451538s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.366316795s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 73.928382874s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.332752228s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.894805908s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.366246223s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.928382874s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.330257416s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.892524719s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.366099358s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 73.928382874s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.366349220s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.928451538s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.366076469s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.928382874s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.329759598s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.892501831s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.365663528s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 73.928421021s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.365637779s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.928421021s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.331971169s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.894851685s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.331952095s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.894851685s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.329729080s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.892501831s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.365505219s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 73.928489685s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.365490913s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.928489685s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.329276085s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.892402649s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.329259872s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.892402649s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.365277290s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 73.928520203s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.365263939s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.928520203s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.365293503s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 73.928611755s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.365281105s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.928611755s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.328989029s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.892395020s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.328975677s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.892395020s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.329202652s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.892395020s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.330227852s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.892524719s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.365035057s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 73.928550720s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.364988327s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.928550720s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.328783989s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.892417908s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.364894867s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 73.928581238s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.364880562s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.928581238s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.328752518s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.892417908s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.328585625s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.892333984s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.364903450s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 73.928680420s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.328562737s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.892333984s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.364888191s) [0] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.928680420s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.328241348s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active pruub 77.892372131s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.328262329s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.892395020s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[5.12( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=14.328207016s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.892372131s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[2.15( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[5.13( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[5.16( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[2.d( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 44 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93b1994315d68555f6d862c780e2b778b63f8ca1ee2892ae6c7e64dee00593f0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:24 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93b1994315d68555f6d862c780e2b778b63f8ca1ee2892ae6c7e64dee00593f0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 rsyslogd[1011]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:24 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:24 np0005464891 podman[98289]: 2025-10-01 16:16:24.97715774 +0000 UTC m=+0.162467801 container init 227dd9e31129556139d72620915cd2c51f3763005f18d48ad4c2a6a06b55f4b2 (image=quay.io/ceph/ceph:v18, name=brave_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:16:24 np0005464891 rsyslogd[1011]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 12:16:24 np0005464891 podman[98289]: 2025-10-01 16:16:24.986504955 +0000 UTC m=+0.171815006 container start 227dd9e31129556139d72620915cd2c51f3763005f18d48ad4c2a6a06b55f4b2 (image=quay.io/ceph/ceph:v18, name=brave_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 12:16:24 np0005464891 podman[98289]: 2025-10-01 16:16:24.99159572 +0000 UTC m=+0.176905781 container attach 227dd9e31129556139d72620915cd2c51f3763005f18d48ad4c2a6a06b55f4b2 (image=quay.io/ceph/ceph:v18, name=brave_ishizaka, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:25 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Oct  1 12:16:25 np0005464891 podman[98291]: 2025-10-01 16:16:25.027734385 +0000 UTC m=+0.196106316 container exec_died 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:25 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:25 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 8eba3766-4b1b-4277-8b45-10ef11ee4f30 does not exist
Oct  1 12:16:25 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 74453bd7-61e0-46cf-998f-10ecc07dcc25 does not exist
Oct  1 12:16:25 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 467449f4-98d8-40a6-b997-5d3add43c2dd does not exist
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2755061518' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  1 12:16:25 np0005464891 brave_ishizaka[98326]: 
Oct  1 12:16:25 np0005464891 brave_ishizaka[98326]: {"fsid":"6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":179,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":44,"num_osds":3,"num_up_osds":3,"osd_up_since":1759335331,"num_in_osds":3,"osd_in_since":1759335303,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84230144,"bytes_avail":64327696384,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":3,"modified":"2025-10-01T16:16:23.970109+0000","services":{"osd":{"daemons":{"summary":"","1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Oct  1 12:16:25 np0005464891 systemd[1]: libpod-227dd9e31129556139d72620915cd2c51f3763005f18d48ad4c2a6a06b55f4b2.scope: Deactivated successfully.
Oct  1 12:16:25 np0005464891 podman[98289]: 2025-10-01 16:16:25.641725159 +0000 UTC m=+0.827035280 container died 227dd9e31129556139d72620915cd2c51f3763005f18d48ad4c2a6a06b55f4b2 (image=quay.io/ceph/ceph:v18, name=brave_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 12:16:25 np0005464891 systemd[1]: var-lib-containers-storage-overlay-93b1994315d68555f6d862c780e2b778b63f8ca1ee2892ae6c7e64dee00593f0-merged.mount: Deactivated successfully.
Oct  1 12:16:25 np0005464891 podman[98289]: 2025-10-01 16:16:25.821048478 +0000 UTC m=+1.006358529 container remove 227dd9e31129556139d72620915cd2c51f3763005f18d48ad4c2a6a06b55f4b2 (image=quay.io/ceph/ceph:v18, name=brave_ishizaka, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  1 12:16:25 np0005464891 systemd[1]: libpod-conmon-227dd9e31129556139d72620915cd2c51f3763005f18d48ad4c2a6a06b55f4b2.scope: Deactivated successfully.
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:25 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:16:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v102: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:16:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Oct  1 12:16:26 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 2.c scrub starts
Oct  1 12:16:26 np0005464891 python3[98595]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=44/45 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=44/45 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=44/45 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=44/45 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=44/45 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=44/45 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=44/45 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=44/45 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=44/45 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=44/45 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=44/45 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=44/45 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=44/45 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=44/45 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=44/45 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 podman[98631]: 2025-10-01 16:16:26.297507429 +0000 UTC m=+0.042571019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:26 np0005464891 podman[98631]: 2025-10-01 16:16:26.399098392 +0000 UTC m=+0.144161932 container create ae4feca86d3aa85999b8129b9107ea3e81783c4b5e98e3c5d1aa3cf87173d6f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=44/45 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=44/45 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=44/45 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=44/45 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=44/45 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=44/45 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=44/45 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=44/45 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=44/45 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=44/45 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=44/45 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=44/45 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=44/45 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=44/45 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=44/45 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=44/45 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=44/45 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=44/45 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=44/45 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=44/45 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=44/45 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=44/45 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=44/45 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=44/45 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=44/45 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=44/45 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=44/45 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=44/45 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=44/45 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=44/45 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [2] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=44/45 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=44/45 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=44/45 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=44/45 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=44/45 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=44/45 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=44/45 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=44/45 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=44/45 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=44/45 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=44/45 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=44/45 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=44/45 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=44/45 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=44/45 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=44/45 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=44/45 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=44/45 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=44/45 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=44/45 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=44/45 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=44/45 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=44/45 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=44/45 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=44/45 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=44/45 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=44/45 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=44/45 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=44/45 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=44/45 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=44/45 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=44/45 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=44/45 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=44/45 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=44/45 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=44/45 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=44/45 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=44/45 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=44/45 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=44/45 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 2.c scrub ok
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=44/45 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=44/45 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=44/45 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=44/45 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=44/45 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=44/45 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=44/45 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=44/45 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=44/45 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=44/45 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=44/45 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=44/45 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=44/45 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=44/45 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=44/45 n=0 ec=37/17 lis/c=37/37 les/c/f=39/39/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=44/45 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=44/45 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=44/45 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=39/21 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=44/45 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=44/45 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=44/45 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=44/45 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=44/45 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=44/45 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:26 np0005464891 podman[98646]: 2025-10-01 16:16:26.450242694 +0000 UTC m=+0.100729180 container create 9f6d4a4d6dded5136a6408e281a1db1e621244fcf830596435248399a2cbb8d5 (image=quay.io/ceph/ceph:v18, name=peaceful_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 12:16:26 np0005464891 systemd[1]: Started libpod-conmon-ae4feca86d3aa85999b8129b9107ea3e81783c4b5e98e3c5d1aa3cf87173d6f6.scope.
Oct  1 12:16:26 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:26 np0005464891 systemd[1]: Started libpod-conmon-9f6d4a4d6dded5136a6408e281a1db1e621244fcf830596435248399a2cbb8d5.scope.
Oct  1 12:16:26 np0005464891 podman[98631]: 2025-10-01 16:16:26.499637375 +0000 UTC m=+0.244700905 container init ae4feca86d3aa85999b8129b9107ea3e81783c4b5e98e3c5d1aa3cf87173d6f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_jones, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 12:16:26 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:26 np0005464891 podman[98631]: 2025-10-01 16:16:26.507656173 +0000 UTC m=+0.252719673 container start ae4feca86d3aa85999b8129b9107ea3e81783c4b5e98e3c5d1aa3cf87173d6f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82736428f38af33411de868b38580c66183a4b3c42e10ca329fcf6a2034958c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82736428f38af33411de868b38580c66183a4b3c42e10ca329fcf6a2034958c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:26 np0005464891 agitated_jones[98659]: 167 167
Oct  1 12:16:26 np0005464891 podman[98646]: 2025-10-01 16:16:26.423713281 +0000 UTC m=+0.074199867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:16:26 np0005464891 systemd[1]: libpod-ae4feca86d3aa85999b8129b9107ea3e81783c4b5e98e3c5d1aa3cf87173d6f6.scope: Deactivated successfully.
Oct  1 12:16:26 np0005464891 podman[98631]: 2025-10-01 16:16:26.520295082 +0000 UTC m=+0.265358602 container attach ae4feca86d3aa85999b8129b9107ea3e81783c4b5e98e3c5d1aa3cf87173d6f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_jones, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:26 np0005464891 podman[98646]: 2025-10-01 16:16:26.526119897 +0000 UTC m=+0.176606453 container init 9f6d4a4d6dded5136a6408e281a1db1e621244fcf830596435248399a2cbb8d5 (image=quay.io/ceph/ceph:v18, name=peaceful_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  1 12:16:26 np0005464891 podman[98631]: 2025-10-01 16:16:26.530182622 +0000 UTC m=+0.275246172 container died ae4feca86d3aa85999b8129b9107ea3e81783c4b5e98e3c5d1aa3cf87173d6f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 12:16:26 np0005464891 podman[98646]: 2025-10-01 16:16:26.53714168 +0000 UTC m=+0.187628156 container start 9f6d4a4d6dded5136a6408e281a1db1e621244fcf830596435248399a2cbb8d5 (image=quay.io/ceph/ceph:v18, name=peaceful_feynman, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:16:26 np0005464891 podman[98646]: 2025-10-01 16:16:26.548668897 +0000 UTC m=+0.199155413 container attach 9f6d4a4d6dded5136a6408e281a1db1e621244fcf830596435248399a2cbb8d5 (image=quay.io/ceph/ceph:v18, name=peaceful_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:16:26 np0005464891 systemd[1]: var-lib-containers-storage-overlay-24c17172b87dca3c63c1a4d16300ae26889ba3e94033937a7f210974a1ded973-merged.mount: Deactivated successfully.
Oct  1 12:16:26 np0005464891 podman[98631]: 2025-10-01 16:16:26.578070691 +0000 UTC m=+0.323134191 container remove ae4feca86d3aa85999b8129b9107ea3e81783c4b5e98e3c5d1aa3cf87173d6f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_jones, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:26 np0005464891 systemd[1]: libpod-conmon-ae4feca86d3aa85999b8129b9107ea3e81783c4b5e98e3c5d1aa3cf87173d6f6.scope: Deactivated successfully.
Oct  1 12:16:26 np0005464891 podman[98691]: 2025-10-01 16:16:26.757668277 +0000 UTC m=+0.048092135 container create d7b29e31557b85cabe990955da6ca3bb171c75c6e9adf33e79592c6f70add5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_varahamihira, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Oct  1 12:16:26 np0005464891 systemd[1]: Started libpod-conmon-d7b29e31557b85cabe990955da6ca3bb171c75c6e9adf33e79592c6f70add5c2.scope.
Oct  1 12:16:26 np0005464891 podman[98691]: 2025-10-01 16:16:26.73520732 +0000 UTC m=+0.025631158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:26 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9faccaedd96a1e02d42490fed124b66d67f1adf5b40b55d5ffa2ac67e23044b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9faccaedd96a1e02d42490fed124b66d67f1adf5b40b55d5ffa2ac67e23044b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9faccaedd96a1e02d42490fed124b66d67f1adf5b40b55d5ffa2ac67e23044b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9faccaedd96a1e02d42490fed124b66d67f1adf5b40b55d5ffa2ac67e23044b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9faccaedd96a1e02d42490fed124b66d67f1adf5b40b55d5ffa2ac67e23044b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:26 np0005464891 podman[98691]: 2025-10-01 16:16:26.863016847 +0000 UTC m=+0.153440755 container init d7b29e31557b85cabe990955da6ca3bb171c75c6e9adf33e79592c6f70add5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:16:26 np0005464891 podman[98691]: 2025-10-01 16:16:26.878334302 +0000 UTC m=+0.168758150 container start d7b29e31557b85cabe990955da6ca3bb171c75c6e9adf33e79592c6f70add5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  1 12:16:26 np0005464891 podman[98691]: 2025-10-01 16:16:26.882536031 +0000 UTC m=+0.172959879 container attach d7b29e31557b85cabe990955da6ca3bb171c75c6e9adf33e79592c6f70add5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  1 12:16:27 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 3.b scrub starts
Oct  1 12:16:27 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 3.b scrub ok
Oct  1 12:16:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:16:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/9174447' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:16:27 np0005464891 peaceful_feynman[98664]: 
Oct  1 12:16:27 np0005464891 peaceful_feynman[98664]: {"epoch":1,"fsid":"6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5","modified":"2025-10-01T16:13:20.656311Z","created":"2025-10-01T16:13:20.656311Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Oct  1 12:16:27 np0005464891 peaceful_feynman[98664]: dumped monmap epoch 1
Oct  1 12:16:27 np0005464891 systemd[1]: libpod-9f6d4a4d6dded5136a6408e281a1db1e621244fcf830596435248399a2cbb8d5.scope: Deactivated successfully.
Oct  1 12:16:27 np0005464891 podman[98733]: 2025-10-01 16:16:27.239617424 +0000 UTC m=+0.028661994 container died 9f6d4a4d6dded5136a6408e281a1db1e621244fcf830596435248399a2cbb8d5 (image=quay.io/ceph/ceph:v18, name=peaceful_feynman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:27 np0005464891 systemd[1]: var-lib-containers-storage-overlay-a82736428f38af33411de868b38580c66183a4b3c42e10ca329fcf6a2034958c-merged.mount: Deactivated successfully.
Oct  1 12:16:27 np0005464891 podman[98733]: 2025-10-01 16:16:27.32086071 +0000 UTC m=+0.109905190 container remove 9f6d4a4d6dded5136a6408e281a1db1e621244fcf830596435248399a2cbb8d5 (image=quay.io/ceph/ceph:v18, name=peaceful_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  1 12:16:27 np0005464891 systemd[1]: libpod-conmon-9f6d4a4d6dded5136a6408e281a1db1e621244fcf830596435248399a2cbb8d5.scope: Deactivated successfully.
Oct  1 12:16:27 np0005464891 elegant_varahamihira[98707]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:16:27 np0005464891 elegant_varahamihira[98707]: --> relative data size: 1.0
Oct  1 12:16:27 np0005464891 elegant_varahamihira[98707]: --> All data devices are unavailable
Oct  1 12:16:27 np0005464891 python3[98790]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:16:27 np0005464891 systemd[1]: libpod-d7b29e31557b85cabe990955da6ca3bb171c75c6e9adf33e79592c6f70add5c2.scope: Deactivated successfully.
Oct  1 12:16:27 np0005464891 conmon[98707]: conmon d7b29e31557b85cabe99 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d7b29e31557b85cabe990955da6ca3bb171c75c6e9adf33e79592c6f70add5c2.scope/container/memory.events
Oct  1 12:16:27 np0005464891 podman[98797]: 2025-10-01 16:16:27.959532604 +0000 UTC m=+0.057677677 container create dad5bc232171f3c036cd4eec2e6169889d0ccf77a2a8ff4ac3ef0c68640ea0e7 (image=quay.io/ceph/ceph:v18, name=compassionate_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 12:16:27 np0005464891 podman[98803]: 2025-10-01 16:16:27.963730484 +0000 UTC m=+0.044284128 container died d7b29e31557b85cabe990955da6ca3bb171c75c6e9adf33e79592c6f70add5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_varahamihira, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:16:27 np0005464891 systemd[1]: var-lib-containers-storage-overlay-9faccaedd96a1e02d42490fed124b66d67f1adf5b40b55d5ffa2ac67e23044b6-merged.mount: Deactivated successfully.
Oct  1 12:16:28 np0005464891 systemd[1]: Started libpod-conmon-dad5bc232171f3c036cd4eec2e6169889d0ccf77a2a8ff4ac3ef0c68640ea0e7.scope.
Oct  1 12:16:28 np0005464891 podman[98797]: 2025-10-01 16:16:27.932759565 +0000 UTC m=+0.030904668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:16:28 np0005464891 podman[98803]: 2025-10-01 16:16:28.032673761 +0000 UTC m=+0.113227385 container remove d7b29e31557b85cabe990955da6ca3bb171c75c6e9adf33e79592c6f70add5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 12:16:28 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:28 np0005464891 systemd[1]: libpod-conmon-d7b29e31557b85cabe990955da6ca3bb171c75c6e9adf33e79592c6f70add5c2.scope: Deactivated successfully.
Oct  1 12:16:28 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33a5d66c3906e9bf343482821bd0d9a5451f2cd9018269afe7466fb7a15386e2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:28 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33a5d66c3906e9bf343482821bd0d9a5451f2cd9018269afe7466fb7a15386e2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:28 np0005464891 podman[98797]: 2025-10-01 16:16:28.074237589 +0000 UTC m=+0.172382702 container init dad5bc232171f3c036cd4eec2e6169889d0ccf77a2a8ff4ac3ef0c68640ea0e7 (image=quay.io/ceph/ceph:v18, name=compassionate_ritchie, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct  1 12:16:28 np0005464891 podman[98797]: 2025-10-01 16:16:28.084563853 +0000 UTC m=+0.182708946 container start dad5bc232171f3c036cd4eec2e6169889d0ccf77a2a8ff4ac3ef0c68640ea0e7 (image=quay.io/ceph/ceph:v18, name=compassionate_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:16:28 np0005464891 podman[98797]: 2025-10-01 16:16:28.08974516 +0000 UTC m=+0.187890323 container attach dad5bc232171f3c036cd4eec2e6169889d0ccf77a2a8ff4ac3ef0c68640ea0e7 (image=quay.io/ceph/ceph:v18, name=compassionate_ritchie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:16:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Oct  1 12:16:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2818283261' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct  1 12:16:28 np0005464891 compassionate_ritchie[98829]: [client.openstack]
Oct  1 12:16:28 np0005464891 compassionate_ritchie[98829]: 	key = AQAHU91oAAAAABAA2khdgPPi62xzROiSemYxFg==
Oct  1 12:16:28 np0005464891 compassionate_ritchie[98829]: 	caps mgr = "allow *"
Oct  1 12:16:28 np0005464891 compassionate_ritchie[98829]: 	caps mon = "profile rbd"
Oct  1 12:16:28 np0005464891 compassionate_ritchie[98829]: 	caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Oct  1 12:16:28 np0005464891 systemd[1]: libpod-dad5bc232171f3c036cd4eec2e6169889d0ccf77a2a8ff4ac3ef0c68640ea0e7.scope: Deactivated successfully.
Oct  1 12:16:28 np0005464891 podman[98797]: 2025-10-01 16:16:28.698360051 +0000 UTC m=+0.796505134 container died dad5bc232171f3c036cd4eec2e6169889d0ccf77a2a8ff4ac3ef0c68640ea0e7 (image=quay.io/ceph/ceph:v18, name=compassionate_ritchie, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 12:16:28 np0005464891 systemd[1]: var-lib-containers-storage-overlay-33a5d66c3906e9bf343482821bd0d9a5451f2cd9018269afe7466fb7a15386e2-merged.mount: Deactivated successfully.
Oct  1 12:16:28 np0005464891 podman[98797]: 2025-10-01 16:16:28.757942492 +0000 UTC m=+0.856087615 container remove dad5bc232171f3c036cd4eec2e6169889d0ccf77a2a8ff4ac3ef0c68640ea0e7 (image=quay.io/ceph/ceph:v18, name=compassionate_ritchie, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:16:28 np0005464891 systemd[1]: libpod-conmon-dad5bc232171f3c036cd4eec2e6169889d0ccf77a2a8ff4ac3ef0c68640ea0e7.scope: Deactivated successfully.
Oct  1 12:16:28 np0005464891 podman[99008]: 2025-10-01 16:16:28.837739847 +0000 UTC m=+0.050360280 container create 79c9d708588ff7bde72a1f665cc0f3e9c4524a7ec54c6321b8a57ebf613c5f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:28 np0005464891 systemd[1]: Started libpod-conmon-79c9d708588ff7bde72a1f665cc0f3e9c4524a7ec54c6321b8a57ebf613c5f13.scope.
Oct  1 12:16:28 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:28 np0005464891 podman[99008]: 2025-10-01 16:16:28.814627431 +0000 UTC m=+0.027247864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:28 np0005464891 podman[99008]: 2025-10-01 16:16:28.914897086 +0000 UTC m=+0.127517559 container init 79c9d708588ff7bde72a1f665cc0f3e9c4524a7ec54c6321b8a57ebf613c5f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:16:28 np0005464891 podman[99008]: 2025-10-01 16:16:28.925885838 +0000 UTC m=+0.138506271 container start 79c9d708588ff7bde72a1f665cc0f3e9c4524a7ec54c6321b8a57ebf613c5f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wiles, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:16:28 np0005464891 podman[99008]: 2025-10-01 16:16:28.930020166 +0000 UTC m=+0.142640659 container attach 79c9d708588ff7bde72a1f665cc0f3e9c4524a7ec54c6321b8a57ebf613c5f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wiles, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:16:28 np0005464891 wizardly_wiles[99024]: 167 167
Oct  1 12:16:28 np0005464891 systemd[1]: libpod-79c9d708588ff7bde72a1f665cc0f3e9c4524a7ec54c6321b8a57ebf613c5f13.scope: Deactivated successfully.
Oct  1 12:16:28 np0005464891 conmon[99024]: conmon 79c9d708588ff7bde72a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-79c9d708588ff7bde72a1f665cc0f3e9c4524a7ec54c6321b8a57ebf613c5f13.scope/container/memory.events
Oct  1 12:16:28 np0005464891 podman[99008]: 2025-10-01 16:16:28.934306987 +0000 UTC m=+0.146927420 container died 79c9d708588ff7bde72a1f665cc0f3e9c4524a7ec54c6321b8a57ebf613c5f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wiles, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 12:16:28 np0005464891 systemd[1]: var-lib-containers-storage-overlay-6cd21588f92f43b02489d9b27dd6803d942a230bcfdf37bb7b8da3312ad13d93-merged.mount: Deactivated successfully.
Oct  1 12:16:28 np0005464891 podman[99008]: 2025-10-01 16:16:28.977728199 +0000 UTC m=+0.190348592 container remove 79c9d708588ff7bde72a1f665cc0f3e9c4524a7ec54c6321b8a57ebf613c5f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wiles, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 12:16:29 np0005464891 systemd[1]: libpod-conmon-79c9d708588ff7bde72a1f665cc0f3e9c4524a7ec54c6321b8a57ebf613c5f13.scope: Deactivated successfully.
Oct  1 12:16:29 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/2818283261' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct  1 12:16:29 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 3.d deep-scrub starts
Oct  1 12:16:29 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 3.d deep-scrub ok
Oct  1 12:16:29 np0005464891 podman[99048]: 2025-10-01 16:16:29.20361499 +0000 UTC m=+0.057200545 container create 5f6fd20793c5cf57c7e53f3fd80fb748e0db2979a4f0eba987cbbce63bde38d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:16:29 np0005464891 systemd[1]: Started libpod-conmon-5f6fd20793c5cf57c7e53f3fd80fb748e0db2979a4f0eba987cbbce63bde38d3.scope.
Oct  1 12:16:29 np0005464891 podman[99048]: 2025-10-01 16:16:29.18248889 +0000 UTC m=+0.036074445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:29 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:29 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/053ad4c15afcf54526e026e9eb3f2cc22dc0dce1fd12129e12973ae9800a67e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:29 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/053ad4c15afcf54526e026e9eb3f2cc22dc0dce1fd12129e12973ae9800a67e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:29 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/053ad4c15afcf54526e026e9eb3f2cc22dc0dce1fd12129e12973ae9800a67e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:29 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/053ad4c15afcf54526e026e9eb3f2cc22dc0dce1fd12129e12973ae9800a67e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:29 np0005464891 podman[99048]: 2025-10-01 16:16:29.296707692 +0000 UTC m=+0.150293227 container init 5f6fd20793c5cf57c7e53f3fd80fb748e0db2979a4f0eba987cbbce63bde38d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shockley, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 12:16:29 np0005464891 podman[99048]: 2025-10-01 16:16:29.313779306 +0000 UTC m=+0.167364861 container start 5f6fd20793c5cf57c7e53f3fd80fb748e0db2979a4f0eba987cbbce63bde38d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shockley, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 12:16:29 np0005464891 podman[99048]: 2025-10-01 16:16:29.317341207 +0000 UTC m=+0.170926732 container attach 5f6fd20793c5cf57c7e53f3fd80fb748e0db2979a4f0eba987cbbce63bde38d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shockley, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:16:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:16:30 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Oct  1 12:16:30 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Oct  1 12:16:30 np0005464891 serene_shockley[99064]: {
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:    "0": [
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:        {
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "devices": [
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "/dev/loop3"
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            ],
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "lv_name": "ceph_lv0",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "lv_size": "21470642176",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "name": "ceph_lv0",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "tags": {
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.cluster_name": "ceph",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.crush_device_class": "",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.encrypted": "0",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.osd_id": "0",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.type": "block",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.vdo": "0"
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            },
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "type": "block",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "vg_name": "ceph_vg0"
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:        }
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:    ],
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:    "1": [
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:        {
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "devices": [
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "/dev/loop4"
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            ],
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "lv_name": "ceph_lv1",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "lv_size": "21470642176",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "name": "ceph_lv1",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "tags": {
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.cluster_name": "ceph",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.crush_device_class": "",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.encrypted": "0",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.osd_id": "1",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.type": "block",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.vdo": "0"
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            },
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "type": "block",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "vg_name": "ceph_vg1"
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:        }
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:    ],
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:    "2": [
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:        {
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "devices": [
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "/dev/loop5"
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            ],
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "lv_name": "ceph_lv2",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "lv_size": "21470642176",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "name": "ceph_lv2",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "tags": {
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.cluster_name": "ceph",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.crush_device_class": "",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.encrypted": "0",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.osd_id": "2",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.type": "block",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:                "ceph.vdo": "0"
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            },
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "type": "block",
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:            "vg_name": "ceph_vg2"
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:        }
Oct  1 12:16:30 np0005464891 serene_shockley[99064]:    ]
Oct  1 12:16:30 np0005464891 serene_shockley[99064]: }
Oct  1 12:16:30 np0005464891 systemd[1]: libpod-5f6fd20793c5cf57c7e53f3fd80fb748e0db2979a4f0eba987cbbce63bde38d3.scope: Deactivated successfully.
Oct  1 12:16:30 np0005464891 podman[99048]: 2025-10-01 16:16:30.142836873 +0000 UTC m=+0.996422408 container died 5f6fd20793c5cf57c7e53f3fd80fb748e0db2979a4f0eba987cbbce63bde38d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:16:30 np0005464891 systemd[1]: var-lib-containers-storage-overlay-053ad4c15afcf54526e026e9eb3f2cc22dc0dce1fd12129e12973ae9800a67e8-merged.mount: Deactivated successfully.
Oct  1 12:16:30 np0005464891 podman[99048]: 2025-10-01 16:16:30.205196493 +0000 UTC m=+1.058782008 container remove 5f6fd20793c5cf57c7e53f3fd80fb748e0db2979a4f0eba987cbbce63bde38d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shockley, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 12:16:30 np0005464891 systemd[1]: libpod-conmon-5f6fd20793c5cf57c7e53f3fd80fb748e0db2979a4f0eba987cbbce63bde38d3.scope: Deactivated successfully.
Oct  1 12:16:30 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 2.e scrub starts
Oct  1 12:16:30 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 2.e scrub ok
Oct  1 12:16:30 np0005464891 ansible-async_wrapper.py[99256]: Invoked with j288311866888 30 /home/zuul/.ansible/tmp/ansible-tmp-1759335389.8269322-33470-127704587020079/AnsiballZ_command.py _
Oct  1 12:16:30 np0005464891 ansible-async_wrapper.py[99289]: Starting module and watcher
Oct  1 12:16:30 np0005464891 ansible-async_wrapper.py[99289]: Start watching 99291 (30)
Oct  1 12:16:30 np0005464891 ansible-async_wrapper.py[99291]: Start module (99291)
Oct  1 12:16:30 np0005464891 ansible-async_wrapper.py[99256]: Return async_wrapper task started.
Oct  1 12:16:30 np0005464891 python3[99295]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:16:30 np0005464891 podman[99339]: 2025-10-01 16:16:30.650050337 +0000 UTC m=+0.065439679 container create cd184c608c666ed839b2b1b63a4e874a48ba521bddc180c7555d81f019a6ae70 (image=quay.io/ceph/ceph:v18, name=keen_montalcini, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 12:16:30 np0005464891 systemd[1]: Started libpod-conmon-cd184c608c666ed839b2b1b63a4e874a48ba521bddc180c7555d81f019a6ae70.scope.
Oct  1 12:16:30 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:30 np0005464891 podman[99339]: 2025-10-01 16:16:30.624356278 +0000 UTC m=+0.039745680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:16:30 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d3aa984e780a8920331d81594ca6ed450b49b156ff92477d6d1ffa45adf724/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:30 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d3aa984e780a8920331d81594ca6ed450b49b156ff92477d6d1ffa45adf724/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:30 np0005464891 podman[99339]: 2025-10-01 16:16:30.73472338 +0000 UTC m=+0.150112772 container init cd184c608c666ed839b2b1b63a4e874a48ba521bddc180c7555d81f019a6ae70 (image=quay.io/ceph/ceph:v18, name=keen_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  1 12:16:30 np0005464891 podman[99339]: 2025-10-01 16:16:30.741339987 +0000 UTC m=+0.156729329 container start cd184c608c666ed839b2b1b63a4e874a48ba521bddc180c7555d81f019a6ae70 (image=quay.io/ceph/ceph:v18, name=keen_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:16:30 np0005464891 podman[99339]: 2025-10-01 16:16:30.744977801 +0000 UTC m=+0.160367143 container attach cd184c608c666ed839b2b1b63a4e874a48ba521bddc180c7555d81f019a6ae70 (image=quay.io/ceph/ceph:v18, name=keen_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 12:16:30 np0005464891 podman[99399]: 2025-10-01 16:16:30.894496904 +0000 UTC m=+0.068272108 container create a25db4cc2abcf838976a4968868c0fed92e158297faa21d9b51a4e869dcad08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cartwright, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:16:30 np0005464891 systemd[1]: Started libpod-conmon-a25db4cc2abcf838976a4968868c0fed92e158297faa21d9b51a4e869dcad08d.scope.
Oct  1 12:16:30 np0005464891 podman[99399]: 2025-10-01 16:16:30.869326849 +0000 UTC m=+0.043102083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:30 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:30 np0005464891 podman[99399]: 2025-10-01 16:16:30.977622183 +0000 UTC m=+0.151397447 container init a25db4cc2abcf838976a4968868c0fed92e158297faa21d9b51a4e869dcad08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:16:30 np0005464891 podman[99399]: 2025-10-01 16:16:30.990029995 +0000 UTC m=+0.163805199 container start a25db4cc2abcf838976a4968868c0fed92e158297faa21d9b51a4e869dcad08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:16:30 np0005464891 optimistic_cartwright[99414]: 167 167
Oct  1 12:16:30 np0005464891 podman[99399]: 2025-10-01 16:16:30.994331737 +0000 UTC m=+0.168107011 container attach a25db4cc2abcf838976a4968868c0fed92e158297faa21d9b51a4e869dcad08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:16:30 np0005464891 systemd[1]: libpod-a25db4cc2abcf838976a4968868c0fed92e158297faa21d9b51a4e869dcad08d.scope: Deactivated successfully.
Oct  1 12:16:30 np0005464891 podman[99399]: 2025-10-01 16:16:30.996774257 +0000 UTC m=+0.170549501 container died a25db4cc2abcf838976a4968868c0fed92e158297faa21d9b51a4e869dcad08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 12:16:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:16:31 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ab6d95ddae4ec89160c4c3557d4a7b6f5f2b1eb5a6f6d5b5549ce537720f8235-merged.mount: Deactivated successfully.
Oct  1 12:16:31 np0005464891 podman[99399]: 2025-10-01 16:16:31.045722395 +0000 UTC m=+0.219497579 container remove a25db4cc2abcf838976a4968868c0fed92e158297faa21d9b51a4e869dcad08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:16:31 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Oct  1 12:16:31 np0005464891 systemd[1]: libpod-conmon-a25db4cc2abcf838976a4968868c0fed92e158297faa21d9b51a4e869dcad08d.scope: Deactivated successfully.
Oct  1 12:16:31 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Oct  1 12:16:31 np0005464891 podman[99457]: 2025-10-01 16:16:31.252602406 +0000 UTC m=+0.054821637 container create 05c0406d168d683128b365a1369efbaae8d36abc9e3bb2aab7b97cb1cd0201bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:31 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  1 12:16:31 np0005464891 keen_montalcini[99367]: 
Oct  1 12:16:31 np0005464891 keen_montalcini[99367]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  1 12:16:31 np0005464891 systemd[1]: Started libpod-conmon-05c0406d168d683128b365a1369efbaae8d36abc9e3bb2aab7b97cb1cd0201bd.scope.
Oct  1 12:16:31 np0005464891 podman[99457]: 2025-10-01 16:16:31.225415195 +0000 UTC m=+0.027634486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:31 np0005464891 systemd[1]: libpod-cd184c608c666ed839b2b1b63a4e874a48ba521bddc180c7555d81f019a6ae70.scope: Deactivated successfully.
Oct  1 12:16:31 np0005464891 podman[99339]: 2025-10-01 16:16:31.321612055 +0000 UTC m=+0.737001437 container died cd184c608c666ed839b2b1b63a4e874a48ba521bddc180c7555d81f019a6ae70 (image=quay.io/ceph/ceph:v18, name=keen_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:31 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:31 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c2b64f100a1a45db21f2000eb32937d68c0bbd7a12312f4cb42351a8b017d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:31 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c2b64f100a1a45db21f2000eb32937d68c0bbd7a12312f4cb42351a8b017d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:31 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c2b64f100a1a45db21f2000eb32937d68c0bbd7a12312f4cb42351a8b017d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:31 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c2b64f100a1a45db21f2000eb32937d68c0bbd7a12312f4cb42351a8b017d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:31 np0005464891 podman[99457]: 2025-10-01 16:16:31.366285423 +0000 UTC m=+0.168504724 container init 05c0406d168d683128b365a1369efbaae8d36abc9e3bb2aab7b97cb1cd0201bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_galois, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 12:16:31 np0005464891 systemd[1]: var-lib-containers-storage-overlay-b4d3aa984e780a8920331d81594ca6ed450b49b156ff92477d6d1ffa45adf724-merged.mount: Deactivated successfully.
Oct  1 12:16:31 np0005464891 podman[99457]: 2025-10-01 16:16:31.373866567 +0000 UTC m=+0.176085768 container start 05c0406d168d683128b365a1369efbaae8d36abc9e3bb2aab7b97cb1cd0201bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_galois, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:16:31 np0005464891 podman[99457]: 2025-10-01 16:16:31.386240629 +0000 UTC m=+0.188459870 container attach 05c0406d168d683128b365a1369efbaae8d36abc9e3bb2aab7b97cb1cd0201bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  1 12:16:31 np0005464891 podman[99339]: 2025-10-01 16:16:31.396393447 +0000 UTC m=+0.811782779 container remove cd184c608c666ed839b2b1b63a4e874a48ba521bddc180c7555d81f019a6ae70 (image=quay.io/ceph/ceph:v18, name=keen_montalcini, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 12:16:31 np0005464891 systemd[1]: libpod-conmon-cd184c608c666ed839b2b1b63a4e874a48ba521bddc180c7555d81f019a6ae70.scope: Deactivated successfully.
Oct  1 12:16:31 np0005464891 ansible-async_wrapper.py[99291]: Module complete (99291)
Oct  1 12:16:31 np0005464891 python3[99540]: ansible-ansible.legacy.async_status Invoked with jid=j288311866888.99256 mode=status _async_dir=/root/.ansible_async
Oct  1 12:16:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:16:32 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Oct  1 12:16:32 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Oct  1 12:16:32 np0005464891 python3[99589]: ansible-ansible.legacy.async_status Invoked with jid=j288311866888.99256 mode=cleanup _async_dir=/root/.ansible_async
Oct  1 12:16:32 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Oct  1 12:16:32 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Oct  1 12:16:32 np0005464891 admiring_galois[99475]: {
Oct  1 12:16:32 np0005464891 admiring_galois[99475]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:16:32 np0005464891 admiring_galois[99475]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:16:32 np0005464891 admiring_galois[99475]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:16:32 np0005464891 admiring_galois[99475]:        "osd_id": 2,
Oct  1 12:16:32 np0005464891 admiring_galois[99475]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:16:32 np0005464891 admiring_galois[99475]:        "type": "bluestore"
Oct  1 12:16:32 np0005464891 admiring_galois[99475]:    },
Oct  1 12:16:32 np0005464891 admiring_galois[99475]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:16:32 np0005464891 admiring_galois[99475]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:16:32 np0005464891 admiring_galois[99475]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:16:32 np0005464891 admiring_galois[99475]:        "osd_id": 0,
Oct  1 12:16:32 np0005464891 admiring_galois[99475]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:16:32 np0005464891 admiring_galois[99475]:        "type": "bluestore"
Oct  1 12:16:32 np0005464891 admiring_galois[99475]:    },
Oct  1 12:16:32 np0005464891 admiring_galois[99475]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:16:32 np0005464891 admiring_galois[99475]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:16:32 np0005464891 admiring_galois[99475]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:16:32 np0005464891 admiring_galois[99475]:        "osd_id": 1,
Oct  1 12:16:32 np0005464891 admiring_galois[99475]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:16:32 np0005464891 admiring_galois[99475]:        "type": "bluestore"
Oct  1 12:16:32 np0005464891 admiring_galois[99475]:    }
Oct  1 12:16:32 np0005464891 admiring_galois[99475]: }
Oct  1 12:16:32 np0005464891 systemd[1]: libpod-05c0406d168d683128b365a1369efbaae8d36abc9e3bb2aab7b97cb1cd0201bd.scope: Deactivated successfully.
Oct  1 12:16:32 np0005464891 podman[99457]: 2025-10-01 16:16:32.478614099 +0000 UTC m=+1.280833310 container died 05c0406d168d683128b365a1369efbaae8d36abc9e3bb2aab7b97cb1cd0201bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_galois, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 12:16:32 np0005464891 systemd[1]: libpod-05c0406d168d683128b365a1369efbaae8d36abc9e3bb2aab7b97cb1cd0201bd.scope: Consumed 1.109s CPU time.
Oct  1 12:16:32 np0005464891 systemd[1]: var-lib-containers-storage-overlay-e0c2b64f100a1a45db21f2000eb32937d68c0bbd7a12312f4cb42351a8b017d4-merged.mount: Deactivated successfully.
Oct  1 12:16:32 np0005464891 podman[99457]: 2025-10-01 16:16:32.542898993 +0000 UTC m=+1.345118234 container remove 05c0406d168d683128b365a1369efbaae8d36abc9e3bb2aab7b97cb1cd0201bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_galois, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:32 np0005464891 systemd[1]: libpod-conmon-05c0406d168d683128b365a1369efbaae8d36abc9e3bb2aab7b97cb1cd0201bd.scope: Deactivated successfully.
Oct  1 12:16:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:16:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:16:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:32 np0005464891 ceph-mgr[74592]: [progress INFO root] update: starting ev 662bb640-17dd-44df-8524-6e64900ae503 (Updating rgw.rgw deployment (+1 -> 1))
Oct  1 12:16:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zdecaf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Oct  1 12:16:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zdecaf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  1 12:16:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zdecaf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  1 12:16:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Oct  1 12:16:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:16:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:16:32 np0005464891 ceph-mgr[74592]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.zdecaf on compute-0
Oct  1 12:16:32 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.zdecaf on compute-0
Oct  1 12:16:32 np0005464891 python3[99679]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:16:32 np0005464891 podman[99737]: 2025-10-01 16:16:32.941655399 +0000 UTC m=+0.075255817 container create ba6f7e129e4c49d6f7bb8ef35cf8e06350f08f6dd31ecdfceb3f04f51e60c90c (image=quay.io/ceph/ceph:v18, name=adoring_pasteur, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:32 np0005464891 systemd[1]: Started libpod-conmon-ba6f7e129e4c49d6f7bb8ef35cf8e06350f08f6dd31ecdfceb3f04f51e60c90c.scope.
Oct  1 12:16:33 np0005464891 podman[99737]: 2025-10-01 16:16:32.912585423 +0000 UTC m=+0.046185891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:16:33 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:33 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c45f9c9b96a20580b34eaccb6c57e5441241fbd928f94641323d28a52572403/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:33 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c45f9c9b96a20580b34eaccb6c57e5441241fbd928f94641323d28a52572403/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:33 np0005464891 podman[99737]: 2025-10-01 16:16:33.037053616 +0000 UTC m=+0.170654014 container init ba6f7e129e4c49d6f7bb8ef35cf8e06350f08f6dd31ecdfceb3f04f51e60c90c (image=quay.io/ceph/ceph:v18, name=adoring_pasteur, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:16:33 np0005464891 podman[99737]: 2025-10-01 16:16:33.043414406 +0000 UTC m=+0.177014824 container start ba6f7e129e4c49d6f7bb8ef35cf8e06350f08f6dd31ecdfceb3f04f51e60c90c (image=quay.io/ceph/ceph:v18, name=adoring_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  1 12:16:33 np0005464891 podman[99737]: 2025-10-01 16:16:33.047135443 +0000 UTC m=+0.180735821 container attach ba6f7e129e4c49d6f7bb8ef35cf8e06350f08f6dd31ecdfceb3f04f51e60c90c (image=quay.io/ceph/ceph:v18, name=adoring_pasteur, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 12:16:33 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:33 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:33 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zdecaf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  1 12:16:33 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zdecaf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  1 12:16:33 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:33 np0005464891 ceph-mon[74303]: Deploying daemon rgw.rgw.compute-0.zdecaf on compute-0
Oct  1 12:16:33 np0005464891 podman[99817]: 2025-10-01 16:16:33.323040512 +0000 UTC m=+0.055358152 container create 68a06d23a74521c7ffb2bdd87b4211f12231c43de53994816a43b4ba8ae1c34d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:16:33 np0005464891 systemd[1]: Started libpod-conmon-68a06d23a74521c7ffb2bdd87b4211f12231c43de53994816a43b4ba8ae1c34d.scope.
Oct  1 12:16:33 np0005464891 podman[99817]: 2025-10-01 16:16:33.293945717 +0000 UTC m=+0.026263447 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:33 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:33 np0005464891 podman[99817]: 2025-10-01 16:16:33.431070628 +0000 UTC m=+0.163388288 container init 68a06d23a74521c7ffb2bdd87b4211f12231c43de53994816a43b4ba8ae1c34d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_volhard, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 12:16:33 np0005464891 podman[99817]: 2025-10-01 16:16:33.442588125 +0000 UTC m=+0.174905795 container start 68a06d23a74521c7ffb2bdd87b4211f12231c43de53994816a43b4ba8ae1c34d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:16:33 np0005464891 great_volhard[99852]: 167 167
Oct  1 12:16:33 np0005464891 podman[99817]: 2025-10-01 16:16:33.448994706 +0000 UTC m=+0.181312386 container attach 68a06d23a74521c7ffb2bdd87b4211f12231c43de53994816a43b4ba8ae1c34d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_volhard, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:33 np0005464891 systemd[1]: libpod-68a06d23a74521c7ffb2bdd87b4211f12231c43de53994816a43b4ba8ae1c34d.scope: Deactivated successfully.
Oct  1 12:16:33 np0005464891 conmon[99852]: conmon 68a06d23a74521c7ffb2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-68a06d23a74521c7ffb2bdd87b4211f12231c43de53994816a43b4ba8ae1c34d.scope/container/memory.events
Oct  1 12:16:33 np0005464891 podman[99817]: 2025-10-01 16:16:33.451645191 +0000 UTC m=+0.183962851 container died 68a06d23a74521c7ffb2bdd87b4211f12231c43de53994816a43b4ba8ae1c34d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_volhard, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Oct  1 12:16:33 np0005464891 systemd[1]: var-lib-containers-storage-overlay-9eb24ff3e0379bf7055721efbeb0aba5ef83289a81be345498d3f83bd9e15b75-merged.mount: Deactivated successfully.
Oct  1 12:16:33 np0005464891 podman[99817]: 2025-10-01 16:16:33.503271806 +0000 UTC m=+0.235589456 container remove 68a06d23a74521c7ffb2bdd87b4211f12231c43de53994816a43b4ba8ae1c34d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_volhard, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:16:33 np0005464891 systemd[1]: libpod-conmon-68a06d23a74521c7ffb2bdd87b4211f12231c43de53994816a43b4ba8ae1c34d.scope: Deactivated successfully.
Oct  1 12:16:33 np0005464891 systemd[1]: Reloading.
Oct  1 12:16:33 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  1 12:16:33 np0005464891 adoring_pasteur[99772]: 
Oct  1 12:16:33 np0005464891 adoring_pasteur[99772]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  1 12:16:33 np0005464891 podman[99737]: 2025-10-01 16:16:33.597978714 +0000 UTC m=+0.731579122 container died ba6f7e129e4c49d6f7bb8ef35cf8e06350f08f6dd31ecdfceb3f04f51e60c90c (image=quay.io/ceph/ceph:v18, name=adoring_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 12:16:33 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:16:33 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:16:33 np0005464891 systemd[1]: libpod-ba6f7e129e4c49d6f7bb8ef35cf8e06350f08f6dd31ecdfceb3f04f51e60c90c.scope: Deactivated successfully.
Oct  1 12:16:33 np0005464891 systemd[1]: var-lib-containers-storage-overlay-9c45f9c9b96a20580b34eaccb6c57e5441241fbd928f94641323d28a52572403-merged.mount: Deactivated successfully.
Oct  1 12:16:33 np0005464891 podman[99737]: 2025-10-01 16:16:33.867155793 +0000 UTC m=+1.000756171 container remove ba6f7e129e4c49d6f7bb8ef35cf8e06350f08f6dd31ecdfceb3f04f51e60c90c (image=quay.io/ceph/ceph:v18, name=adoring_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 12:16:33 np0005464891 systemd[1]: libpod-conmon-ba6f7e129e4c49d6f7bb8ef35cf8e06350f08f6dd31ecdfceb3f04f51e60c90c.scope: Deactivated successfully.
Oct  1 12:16:33 np0005464891 systemd[1]: Reloading.
Oct  1 12:16:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:16:33 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:16:33 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:16:34 np0005464891 systemd[1]: Starting Ceph rgw.rgw.compute-0.zdecaf for 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5...
Oct  1 12:16:34 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Oct  1 12:16:34 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Oct  1 12:16:34 np0005464891 podman[100013]: 2025-10-01 16:16:34.4410959 +0000 UTC m=+0.066858958 container create c23156508ca1df929573005f1f9600676af9f613fd0d435cb24cabbfabfa44c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-rgw-rgw-compute-0-zdecaf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 12:16:34 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54c69658283111cc1bac5482d3c6bdea6d771c993e0a31176dfb3a5ac7fa8a93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:34 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54c69658283111cc1bac5482d3c6bdea6d771c993e0a31176dfb3a5ac7fa8a93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:34 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54c69658283111cc1bac5482d3c6bdea6d771c993e0a31176dfb3a5ac7fa8a93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:34 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54c69658283111cc1bac5482d3c6bdea6d771c993e0a31176dfb3a5ac7fa8a93/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.zdecaf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:34 np0005464891 podman[100013]: 2025-10-01 16:16:34.415039271 +0000 UTC m=+0.040802389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:34 np0005464891 podman[100013]: 2025-10-01 16:16:34.52567566 +0000 UTC m=+0.151438748 container init c23156508ca1df929573005f1f9600676af9f613fd0d435cb24cabbfabfa44c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-rgw-rgw-compute-0-zdecaf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct  1 12:16:34 np0005464891 podman[100013]: 2025-10-01 16:16:34.534202442 +0000 UTC m=+0.159965490 container start c23156508ca1df929573005f1f9600676af9f613fd0d435cb24cabbfabfa44c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-rgw-rgw-compute-0-zdecaf, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:34 np0005464891 bash[100013]: c23156508ca1df929573005f1f9600676af9f613fd0d435cb24cabbfabfa44c6
Oct  1 12:16:34 np0005464891 systemd[1]: Started Ceph rgw.rgw.compute-0.zdecaf for 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5.
Oct  1 12:16:34 np0005464891 radosgw[100033]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct  1 12:16:34 np0005464891 radosgw[100033]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Oct  1 12:16:34 np0005464891 radosgw[100033]: framework: beast
Oct  1 12:16:34 np0005464891 radosgw[100033]: framework conf key: endpoint, val: 192.168.122.100:8082
Oct  1 12:16:34 np0005464891 radosgw[100033]: init_numa not setting numa affinity
Oct  1 12:16:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:16:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:16:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct  1 12:16:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:34 np0005464891 ceph-mgr[74592]: [progress INFO root] complete: finished ev 662bb640-17dd-44df-8524-6e64900ae503 (Updating rgw.rgw deployment (+1 -> 1))
Oct  1 12:16:34 np0005464891 ceph-mgr[74592]: [progress INFO root] Completed event 662bb640-17dd-44df-8524-6e64900ae503 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Oct  1 12:16:34 np0005464891 ceph-mgr[74592]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Oct  1 12:16:34 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct  1 12:16:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct  1 12:16:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct  1 12:16:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:34 np0005464891 ceph-mgr[74592]: [progress INFO root] update: starting ev 151a78b7-e75c-48dd-943e-5202b3993ecf (Updating mds.cephfs deployment (+1 -> 1))
Oct  1 12:16:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dnoypt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Oct  1 12:16:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dnoypt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  1 12:16:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dnoypt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  1 12:16:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:16:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:16:34 np0005464891 ceph-mgr[74592]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.dnoypt on compute-0
Oct  1 12:16:34 np0005464891 ceph-mgr[74592]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.dnoypt on compute-0
Oct  1 12:16:34 np0005464891 python3[100120]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:16:35 np0005464891 podman[100144]: 2025-10-01 16:16:35.003349356 +0000 UTC m=+0.070149091 container create 0dd20fc3ee75427b0102480fb8f55ff6abbd254288c6260fadb815daf2a425e6 (image=quay.io/ceph/ceph:v18, name=laughing_greider, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:35 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Oct  1 12:16:35 np0005464891 systemd[1]: Started libpod-conmon-0dd20fc3ee75427b0102480fb8f55ff6abbd254288c6260fadb815daf2a425e6.scope.
Oct  1 12:16:35 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Oct  1 12:16:35 np0005464891 podman[100144]: 2025-10-01 16:16:34.977026329 +0000 UTC m=+0.043826144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:16:35 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:35 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17feb5f5d773b6ba84429312c73d36fd00641b3b6e28ba6d31c95e4fd8aa5312/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:35 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17feb5f5d773b6ba84429312c73d36fd00641b3b6e28ba6d31c95e4fd8aa5312/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:35 np0005464891 podman[100144]: 2025-10-01 16:16:35.10779181 +0000 UTC m=+0.174591575 container init 0dd20fc3ee75427b0102480fb8f55ff6abbd254288c6260fadb815daf2a425e6 (image=quay.io/ceph/ceph:v18, name=laughing_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:16:35 np0005464891 podman[100144]: 2025-10-01 16:16:35.117443704 +0000 UTC m=+0.184243439 container start 0dd20fc3ee75427b0102480fb8f55ff6abbd254288c6260fadb815daf2a425e6 (image=quay.io/ceph/ceph:v18, name=laughing_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:35 np0005464891 podman[100144]: 2025-10-01 16:16:35.120288135 +0000 UTC m=+0.187087870 container attach 0dd20fc3ee75427b0102480fb8f55ff6abbd254288c6260fadb815daf2a425e6 (image=quay.io/ceph/ceph:v18, name=laughing_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:16:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Oct  1 12:16:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Oct  1 12:16:35 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Oct  1 12:16:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Oct  1 12:16:35 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3965361240' entity='client.rgw.rgw.compute-0.zdecaf' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  1 12:16:35 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 46 pg[8.0( empty local-lis/les=0/0 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [1] r=0 lpr=46 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:35 np0005464891 ansible-async_wrapper.py[99289]: Done in kid B.
Oct  1 12:16:35 np0005464891 podman[100283]: 2025-10-01 16:16:35.489004768 +0000 UTC m=+0.042860717 container create 010c2653e761ce084c2dde05f152d6fa5f4501d8e3025a177d3274d3af03e09f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 12:16:35 np0005464891 systemd[1]: Started libpod-conmon-010c2653e761ce084c2dde05f152d6fa5f4501d8e3025a177d3274d3af03e09f.scope.
Oct  1 12:16:35 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:35 np0005464891 podman[100283]: 2025-10-01 16:16:35.469258098 +0000 UTC m=+0.023114097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:35 np0005464891 podman[100283]: 2025-10-01 16:16:35.568500045 +0000 UTC m=+0.122356004 container init 010c2653e761ce084c2dde05f152d6fa5f4501d8e3025a177d3274d3af03e09f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_clarke, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 12:16:35 np0005464891 podman[100283]: 2025-10-01 16:16:35.574585307 +0000 UTC m=+0.128441256 container start 010c2653e761ce084c2dde05f152d6fa5f4501d8e3025a177d3274d3af03e09f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_clarke, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:35 np0005464891 podman[100283]: 2025-10-01 16:16:35.57716151 +0000 UTC m=+0.131017459 container attach 010c2653e761ce084c2dde05f152d6fa5f4501d8e3025a177d3274d3af03e09f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_clarke, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:35 np0005464891 sharp_clarke[100318]: 167 167
Oct  1 12:16:35 np0005464891 systemd[1]: libpod-010c2653e761ce084c2dde05f152d6fa5f4501d8e3025a177d3274d3af03e09f.scope: Deactivated successfully.
Oct  1 12:16:35 np0005464891 conmon[100318]: conmon 010c2653e761ce084c2d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-010c2653e761ce084c2dde05f152d6fa5f4501d8e3025a177d3274d3af03e09f.scope/container/memory.events
Oct  1 12:16:35 np0005464891 podman[100283]: 2025-10-01 16:16:35.581786351 +0000 UTC m=+0.135642310 container died 010c2653e761ce084c2dde05f152d6fa5f4501d8e3025a177d3274d3af03e09f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_clarke, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:35 np0005464891 systemd[1]: var-lib-containers-storage-overlay-5dd9c7236773a6b741f6fc37e409d52452fc9e71dfc6dbd9d890576b68700144-merged.mount: Deactivated successfully.
Oct  1 12:16:35 np0005464891 podman[100283]: 2025-10-01 16:16:35.626654244 +0000 UTC m=+0.180510223 container remove 010c2653e761ce084c2dde05f152d6fa5f4501d8e3025a177d3274d3af03e09f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_clarke, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:35 np0005464891 systemd[1]: libpod-conmon-010c2653e761ce084c2dde05f152d6fa5f4501d8e3025a177d3274d3af03e09f.scope: Deactivated successfully.
Oct  1 12:16:35 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  1 12:16:35 np0005464891 laughing_greider[100209]: 
Oct  1 12:16:35 np0005464891 laughing_greider[100209]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Oct  1 12:16:35 np0005464891 systemd[1]: Reloading.
Oct  1 12:16:35 np0005464891 podman[100144]: 2025-10-01 16:16:35.694142219 +0000 UTC m=+0.760941964 container died 0dd20fc3ee75427b0102480fb8f55ff6abbd254288c6260fadb815daf2a425e6 (image=quay.io/ceph/ceph:v18, name=laughing_greider, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 12:16:35 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:35 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:35 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:35 np0005464891 ceph-mon[74303]: Saving service rgw.rgw spec with placement compute-0
Oct  1 12:16:35 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:35 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:35 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dnoypt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  1 12:16:35 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dnoypt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  1 12:16:35 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/3965361240' entity='client.rgw.rgw.compute-0.zdecaf' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  1 12:16:35 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:16:35 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:16:35 np0005464891 systemd[1]: libpod-0dd20fc3ee75427b0102480fb8f55ff6abbd254288c6260fadb815daf2a425e6.scope: Deactivated successfully.
Oct  1 12:16:35 np0005464891 systemd[1]: var-lib-containers-storage-overlay-17feb5f5d773b6ba84429312c73d36fd00641b3b6e28ba6d31c95e4fd8aa5312-merged.mount: Deactivated successfully.
Oct  1 12:16:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v109: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:16:35 np0005464891 podman[100144]: 2025-10-01 16:16:35.983701927 +0000 UTC m=+1.050501682 container remove 0dd20fc3ee75427b0102480fb8f55ff6abbd254288c6260fadb815daf2a425e6 (image=quay.io/ceph/ceph:v18, name=laughing_greider, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 12:16:35 np0005464891 systemd[1]: libpod-conmon-0dd20fc3ee75427b0102480fb8f55ff6abbd254288c6260fadb815daf2a425e6.scope: Deactivated successfully.
Oct  1 12:16:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:16:36 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Oct  1 12:16:36 np0005464891 systemd[1]: Reloading.
Oct  1 12:16:36 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Oct  1 12:16:36 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:16:36 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:16:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Oct  1 12:16:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3965361240' entity='client.rgw.rgw.compute-0.zdecaf' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct  1 12:16:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Oct  1 12:16:36 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Oct  1 12:16:36 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 47 pg[8.0( empty local-lis/les=46/47 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [1] r=0 lpr=46 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:36 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.b deep-scrub starts
Oct  1 12:16:36 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.b deep-scrub ok
Oct  1 12:16:36 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Oct  1 12:16:36 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Oct  1 12:16:36 np0005464891 systemd[1]: Starting Ceph mds.cephfs.compute-0.dnoypt for 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5...
Oct  1 12:16:36 np0005464891 podman[100480]: 2025-10-01 16:16:36.642158823 +0000 UTC m=+0.071276594 container create 0e86b66f24a25c334884f04051d670f8e8220663bf3420d9b6b2a95a110b4849 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mds-cephfs-compute-0-dnoypt, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 12:16:36 np0005464891 podman[100480]: 2025-10-01 16:16:36.602169638 +0000 UTC m=+0.031287469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:36 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bc06feaa2fe98248b0a478421fa5bf499bcac2b3e7c140dc2544c985ba92ad9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:36 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bc06feaa2fe98248b0a478421fa5bf499bcac2b3e7c140dc2544c985ba92ad9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:36 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bc06feaa2fe98248b0a478421fa5bf499bcac2b3e7c140dc2544c985ba92ad9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:36 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bc06feaa2fe98248b0a478421fa5bf499bcac2b3e7c140dc2544c985ba92ad9/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.dnoypt supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:36 np0005464891 ceph-mon[74303]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  1 12:16:36 np0005464891 podman[100480]: 2025-10-01 16:16:36.725731985 +0000 UTC m=+0.154849796 container init 0e86b66f24a25c334884f04051d670f8e8220663bf3420d9b6b2a95a110b4849 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mds-cephfs-compute-0-dnoypt, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 12:16:36 np0005464891 ceph-mon[74303]: Deploying daemon mds.cephfs.compute-0.dnoypt on compute-0
Oct  1 12:16:36 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/3965361240' entity='client.rgw.rgw.compute-0.zdecaf' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct  1 12:16:36 np0005464891 podman[100480]: 2025-10-01 16:16:36.738925858 +0000 UTC m=+0.168043619 container start 0e86b66f24a25c334884f04051d670f8e8220663bf3420d9b6b2a95a110b4849 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mds-cephfs-compute-0-dnoypt, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:36 np0005464891 bash[100480]: 0e86b66f24a25c334884f04051d670f8e8220663bf3420d9b6b2a95a110b4849
Oct  1 12:16:36 np0005464891 systemd[1]: Started Ceph mds.cephfs.compute-0.dnoypt for 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5.
Oct  1 12:16:36 np0005464891 ceph-mds[100500]: set uid:gid to 167:167 (ceph:ceph)
Oct  1 12:16:36 np0005464891 ceph-mds[100500]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Oct  1 12:16:36 np0005464891 ceph-mds[100500]: main not setting numa affinity
Oct  1 12:16:36 np0005464891 ceph-mds[100500]: pidfile_write: ignore empty --pid-file
Oct  1 12:16:36 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mds-cephfs-compute-0-dnoypt[100496]: starting mds.cephfs.compute-0.dnoypt at 
Oct  1 12:16:36 np0005464891 ceph-mds[100500]: mds.cephfs.compute-0.dnoypt Updating MDS map to version 2 from mon.0
Oct  1 12:16:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:16:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:16:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct  1 12:16:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:36 np0005464891 ceph-mgr[74592]: [progress INFO root] complete: finished ev 151a78b7-e75c-48dd-943e-5202b3993ecf (Updating mds.cephfs deployment (+1 -> 1))
Oct  1 12:16:36 np0005464891 ceph-mgr[74592]: [progress INFO root] Completed event 151a78b7-e75c-48dd-943e-5202b3993ecf (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Oct  1 12:16:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Oct  1 12:16:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct  1 12:16:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:37 np0005464891 python3[100569]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:16:37 np0005464891 podman[100632]: 2025-10-01 16:16:37.143027297 +0000 UTC m=+0.049867637 container create 1137d1781553e3c15e6fc358d5725a7942998aa41cea11304edf181d6486ffcc (image=quay.io/ceph/ceph:v18, name=kind_agnesi, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Oct  1 12:16:37 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 48 pg[9.0( empty local-lis/les=0/0 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [1] r=0 lpr=48 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3965361240' entity='client.rgw.rgw.compute-0.zdecaf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  1 12:16:37 np0005464891 systemd[1]: Started libpod-conmon-1137d1781553e3c15e6fc358d5725a7942998aa41cea11304edf181d6486ffcc.scope.
Oct  1 12:16:37 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:37 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04cf7ad50a0ad61c70f375e93852f2e1e3e9a50bd106169e71deb3d8302354b3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:37 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04cf7ad50a0ad61c70f375e93852f2e1e3e9a50bd106169e71deb3d8302354b3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:37 np0005464891 podman[100632]: 2025-10-01 16:16:37.124447889 +0000 UTC m=+0.031288239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:16:37 np0005464891 podman[100632]: 2025-10-01 16:16:37.230982942 +0000 UTC m=+0.137823292 container init 1137d1781553e3c15e6fc358d5725a7942998aa41cea11304edf181d6486ffcc (image=quay.io/ceph/ceph:v18, name=kind_agnesi, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:16:37 np0005464891 podman[100632]: 2025-10-01 16:16:37.243760786 +0000 UTC m=+0.150601136 container start 1137d1781553e3c15e6fc358d5725a7942998aa41cea11304edf181d6486ffcc (image=quay.io/ceph/ceph:v18, name=kind_agnesi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 12:16:37 np0005464891 podman[100632]: 2025-10-01 16:16:37.246514574 +0000 UTC m=+0.153354954 container attach 1137d1781553e3c15e6fc358d5725a7942998aa41cea11304edf181d6486ffcc (image=quay.io/ceph/ceph:v18, name=kind_agnesi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 12:16:37 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 2.14 deep-scrub starts
Oct  1 12:16:37 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 2.14 deep-scrub ok
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/3965361240' entity='client.rgw.rgw.compute-0.zdecaf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).mds e3 new map
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-01T16:16:20.350399+0000#012modified#0112025-10-01T16:16:20.350481+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.dnoypt{-1:14269} state up:standby seq 1 addr [v2:192.168.122.100:6814/2168313721,v1:192.168.122.100:6815/2168313721] compat {c=[1],r=[1],i=[7ff]}]
Oct  1 12:16:37 np0005464891 ceph-mds[100500]: mds.cephfs.compute-0.dnoypt Updating MDS map to version 3 from mon.0
Oct  1 12:16:37 np0005464891 ceph-mds[100500]: mds.cephfs.compute-0.dnoypt Monitors have assigned me to become a standby.
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2168313721,v1:192.168.122.100:6815/2168313721] up:boot
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/2168313721,v1:192.168.122.100:6815/2168313721] as mds.0
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.dnoypt assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.dnoypt"} v 0) v1
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.dnoypt"}]: dispatch
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).mds e3 all = 0
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).mds e4 new map
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-01T16:16:20.350399+0000#012modified#0112025-10-01T16:16:37.751790+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14269}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.dnoypt{0:14269} state up:creating seq 1 addr [v2:192.168.122.100:6814/2168313721,v1:192.168.122.100:6815/2168313721] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.dnoypt=up:creating}
Oct  1 12:16:37 np0005464891 ceph-mds[100500]: mds.cephfs.compute-0.dnoypt Updating MDS map to version 4 from mon.0
Oct  1 12:16:37 np0005464891 ceph-mds[100500]: mds.0.4 handle_mds_map i am now mds.0.4
Oct  1 12:16:37 np0005464891 ceph-mds[100500]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Oct  1 12:16:37 np0005464891 ceph-mds[100500]: mds.0.cache creating system inode with ino:0x1
Oct  1 12:16:37 np0005464891 ceph-mds[100500]: mds.0.cache creating system inode with ino:0x100
Oct  1 12:16:37 np0005464891 ceph-mds[100500]: mds.0.cache creating system inode with ino:0x600
Oct  1 12:16:37 np0005464891 ceph-mds[100500]: mds.0.cache creating system inode with ino:0x601
Oct  1 12:16:37 np0005464891 ceph-mds[100500]: mds.0.cache creating system inode with ino:0x602
Oct  1 12:16:37 np0005464891 ceph-mds[100500]: mds.0.cache creating system inode with ino:0x603
Oct  1 12:16:37 np0005464891 ceph-mds[100500]: mds.0.cache creating system inode with ino:0x604
Oct  1 12:16:37 np0005464891 ceph-mds[100500]: mds.0.cache creating system inode with ino:0x605
Oct  1 12:16:37 np0005464891 ceph-mds[100500]: mds.0.cache creating system inode with ino:0x606
Oct  1 12:16:37 np0005464891 ceph-mds[100500]: mds.0.cache creating system inode with ino:0x607
Oct  1 12:16:37 np0005464891 ceph-mds[100500]: mds.0.cache creating system inode with ino:0x608
Oct  1 12:16:37 np0005464891 ceph-mds[100500]: mds.0.cache creating system inode with ino:0x609
Oct  1 12:16:37 np0005464891 ceph-mds[100500]: mds.0.4 creating_done
Oct  1 12:16:37 np0005464891 ceph-mon[74303]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.dnoypt is now active in filesystem cephfs as rank 0
Oct  1 12:16:37 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14271 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  1 12:16:37 np0005464891 kind_agnesi[100683]: 
Oct  1 12:16:37 np0005464891 kind_agnesi[100683]: [{"container_id": "61e502216e83", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.40%", "created": "2025-10-01T16:14:40.031158Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-10-01T16:14:40.097061Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-01T16:16:25.507900Z", "memory_usage": 11628707, "ports": [], "service_name": "crash", "started": "2025-10-01T16:14:39.886592Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5@crash.compute-0", "version": "18.2.7"}, {"daemon_id": "cephfs.compute-0.dnoypt", "daemon_name": "mds.cephfs.compute-0.dnoypt", "daemon_type": "mds", "events": ["2025-10-01T16:16:36.809785Z daemon:mds.cephfs.compute-0.dnoypt [INFO] \"Deployed mds.cephfs.compute-0.dnoypt on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "fe2a13ced320", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "27.25%", "created": "2025-10-01T16:13:27.951331Z", "daemon_id": "compute-0.ieawdb", "daemon_name": "mgr.compute-0.ieawdb", "daemon_type": "mgr", "events": ["2025-10-01T16:15:43.763116Z daemon:mgr.compute-0.ieawdb [INFO] \"Reconfigured mgr.compute-0.ieawdb on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-01T16:16:25.507802Z", "memory_usage": 548719820, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-10-01T16:13:27.859170Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5@mgr.compute-0.ieawdb", "version": "18.2.7"}, {"container_id": "154be41beae4", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.07%", "created": "2025-10-01T16:13:22.724217Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-10-01T16:15:42.854492Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-01T16:16:25.507659Z", "memory_request": 2147483648, "memory_usage": 39636172, "ports": [], "service_name": "mon", "started": "2025-10-01T16:13:25.532274Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5@mon.compute-0", "version": "18.2.7"}, {"container_id": "599972a68d39", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.64%", "created": "2025-10-01T16:15:14.557060Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-10-01T16:15:14.631400Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-01T16:16:25.507980Z", "memory_request": 4294967296, "memory_usage": 58741227, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-01T16:15:14.427522Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5@osd.0", "version": "18.2.7"}, {"container_id": "0985aa5e05fa", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.88%", "created": "2025-10-01T16:15:20.165195Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-10-01T16:15:20.240160Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-01T16:16:25.508057Z", "memory_request": 4294967296, "memory_usage": 62390272, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-01T16:15:19.964901Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5@osd.1", "version": "18.2.7"}, {"container_id": "11b91835b856", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.00%", "created": "2025-10-01T16:15:25.669328Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-10-01T16:15:25.710123Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-01T16:16:25.508134Z", "memory_request": 4294967296, "memory_usage": 61268295, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-01T16:15:25.561040Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5@osd.2", "version": "18.2.7"}, {"daemon_id": "rgw.compute-0.zdecaf", "daemon_name": "rgw.rgw.compute-0.zdecaf", "daemon_type": "rgw", "events": ["2025-10-01T16:16:34.793725Z daemon:rgw.rgw.compute-0.zdecaf [INFO] \"Deployed rgw.rgw.compute-0.zdecaf on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}]
Oct  1 12:16:37 np0005464891 systemd[1]: libpod-1137d1781553e3c15e6fc358d5725a7942998aa41cea11304edf181d6486ffcc.scope: Deactivated successfully.
Oct  1 12:16:37 np0005464891 podman[100632]: 2025-10-01 16:16:37.852683605 +0000 UTC m=+0.759523945 container died 1137d1781553e3c15e6fc358d5725a7942998aa41cea11304edf181d6486ffcc (image=quay.io/ceph/ceph:v18, name=kind_agnesi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:37 np0005464891 podman[100815]: 2025-10-01 16:16:37.879213378 +0000 UTC m=+0.077229782 container exec 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 12:16:37 np0005464891 systemd[1]: var-lib-containers-storage-overlay-04cf7ad50a0ad61c70f375e93852f2e1e3e9a50bd106169e71deb3d8302354b3-merged.mount: Deactivated successfully.
Oct  1 12:16:37 np0005464891 podman[100632]: 2025-10-01 16:16:37.922911618 +0000 UTC m=+0.829751948 container remove 1137d1781553e3c15e6fc358d5725a7942998aa41cea11304edf181d6486ffcc (image=quay.io/ceph/ceph:v18, name=kind_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  1 12:16:37 np0005464891 systemd[1]: libpod-conmon-1137d1781553e3c15e6fc358d5725a7942998aa41cea11304edf181d6486ffcc.scope: Deactivated successfully.
Oct  1 12:16:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v112: 195 pgs: 1 creating+peering, 194 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 682 B/s wr, 1 op/s
Oct  1 12:16:37 np0005464891 podman[100815]: 2025-10-01 16:16:37.990740833 +0000 UTC m=+0.188757257 container exec_died 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Oct  1 12:16:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3965361240' entity='client.rgw.rgw.compute-0.zdecaf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  1 12:16:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Oct  1 12:16:38 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Oct  1 12:16:38 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 49 pg[9.0( empty local-lis/les=48/49 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [1] r=0 lpr=48 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:16:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:16:38 np0005464891 ceph-mon[74303]: daemon mds.cephfs.compute-0.dnoypt assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct  1 12:16:38 np0005464891 ceph-mon[74303]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct  1 12:16:38 np0005464891 ceph-mon[74303]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct  1 12:16:38 np0005464891 ceph-mon[74303]: daemon mds.cephfs.compute-0.dnoypt is now active in filesystem cephfs as rank 0
Oct  1 12:16:38 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/3965361240' entity='client.rgw.rgw.compute-0.zdecaf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  1 12:16:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).mds e5 new map
Oct  1 12:16:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-01T16:16:20.350399+0000#012modified#0112025-10-01T16:16:38.757056+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14269}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.dnoypt{0:14269} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/2168313721,v1:192.168.122.100:6815/2168313721] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Oct  1 12:16:38 np0005464891 ceph-mds[100500]: mds.cephfs.compute-0.dnoypt Updating MDS map to version 5 from mon.0
Oct  1 12:16:38 np0005464891 ceph-mds[100500]: mds.0.4 handle_mds_map i am now mds.0.4
Oct  1 12:16:38 np0005464891 ceph-mds[100500]: mds.0.4 handle_mds_map state change up:creating --> up:active
Oct  1 12:16:38 np0005464891 ceph-mds[100500]: mds.0.4 recovery_done -- successful recovery!
Oct  1 12:16:38 np0005464891 ceph-mds[100500]: mds.0.4 active_start
Oct  1 12:16:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:38 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2168313721,v1:192.168.122.100:6815/2168313721] up:active
Oct  1 12:16:38 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.dnoypt=up:active}
Oct  1 12:16:38 np0005464891 python3[101037]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:16:39 np0005464891 podman[101094]: 2025-10-01 16:16:39.045628789 +0000 UTC m=+0.043050653 container create d34d9ed22653359574d1adf14dc4eda0b0e85df4ded918d2d42a9c420ecabdd8 (image=quay.io/ceph/ceph:v18, name=busy_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:16:39 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Oct  1 12:16:39 np0005464891 systemd[1]: Started libpod-conmon-d34d9ed22653359574d1adf14dc4eda0b0e85df4ded918d2d42a9c420ecabdd8.scope.
Oct  1 12:16:39 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Oct  1 12:16:39 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac2df3c61884340933fefbb96de416ea8289a620b02abdcfaef9bca3fe24652a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac2df3c61884340933fefbb96de416ea8289a620b02abdcfaef9bca3fe24652a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:39 np0005464891 podman[101094]: 2025-10-01 16:16:39.026414444 +0000 UTC m=+0.023836338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:16:39 np0005464891 podman[101094]: 2025-10-01 16:16:39.149756744 +0000 UTC m=+0.147178618 container init d34d9ed22653359574d1adf14dc4eda0b0e85df4ded918d2d42a9c420ecabdd8 (image=quay.io/ceph/ceph:v18, name=busy_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  1 12:16:39 np0005464891 podman[101094]: 2025-10-01 16:16:39.161838307 +0000 UTC m=+0.159260151 container start d34d9ed22653359574d1adf14dc4eda0b0e85df4ded918d2d42a9c420ecabdd8 (image=quay.io/ceph/ceph:v18, name=busy_cannon, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Oct  1 12:16:39 np0005464891 podman[101094]: 2025-10-01 16:16:39.167078045 +0000 UTC m=+0.164499889 container attach d34d9ed22653359574d1adf14dc4eda0b0e85df4ded918d2d42a9c420ecabdd8 (image=quay.io/ceph/ceph:v18, name=busy_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3965361240' entity='client.rgw.rgw.compute-0.zdecaf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  1 12:16:39 np0005464891 ceph-mgr[74592]: [progress INFO root] Writing back 11 completed events
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:39 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 2.1a deep-scrub starts
Oct  1 12:16:39 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 50 pg[10.0( empty local-lis/les=0/0 n=0 ec=50/50 lis/c=0/0 les/c/f=0/0/0 sis=50) [2] r=0 lpr=50 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:39 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 2.1a deep-scrub ok
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:39 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 6c63de3b-cada-43c7-9315-1e4463787496 does not exist
Oct  1 12:16:39 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 74f2a0e9-d323-47b7-9559-5fd13bd1e7c9 does not exist
Oct  1 12:16:39 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev e734e740-9b24-49b8-aea6-e370f7a08a84 does not exist
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4256402671' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  1 12:16:39 np0005464891 busy_cannon[101134]: 
Oct  1 12:16:39 np0005464891 busy_cannon[101134]: {"fsid":"6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5","health":{"status":"HEALTH_WARN","checks":{"POOL_APP_NOT_ENABLED":{"severity":"HEALTH_WARN","summary":{"message":"1 pool(s) do not have an application enabled","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":194,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":50,"num_osds":3,"num_up_osds":3,"osd_up_since":1759335331,"num_in_osds":3,"osd_in_since":1759335303,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":194},{"state_name":"creating+peering","count":1}],"num_pgs":195,"num_pools":9,"num_objects":6,"data_bytes":460666,"bytes_used":84365312,"bytes_avail":64327561216,"bytes_total":64411926528,"inactive_pgs_ratio":0.0051282052882015705,"read_bytes_sec":682,"write_bytes_sec":682,"read_op_per_sec":0,"write_op_per_sec":0},"fsmap":{"epoch":5,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.dnoypt","status":"up:active","gid":14269}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":5,"modified":"2025-10-01T16:16:33.973380+0000","services":{}},"progress_events":{}}
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/3965361240' entity='client.rgw.rgw.compute-0.zdecaf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:16:39 np0005464891 systemd[1]: libpod-d34d9ed22653359574d1adf14dc4eda0b0e85df4ded918d2d42a9c420ecabdd8.scope: Deactivated successfully.
Oct  1 12:16:39 np0005464891 podman[101094]: 2025-10-01 16:16:39.925065426 +0000 UTC m=+0.922487310 container died d34d9ed22653359574d1adf14dc4eda0b0e85df4ded918d2d42a9c420ecabdd8 (image=quay.io/ceph/ceph:v18, name=busy_cannon, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct  1 12:16:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v115: 196 pgs: 1 unknown, 1 creating+peering, 194 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s
Oct  1 12:16:40 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ac2df3c61884340933fefbb96de416ea8289a620b02abdcfaef9bca3fe24652a-merged.mount: Deactivated successfully.
Oct  1 12:16:40 np0005464891 podman[101094]: 2025-10-01 16:16:40.065755638 +0000 UTC m=+1.063177502 container remove d34d9ed22653359574d1adf14dc4eda0b0e85df4ded918d2d42a9c420ecabdd8 (image=quay.io/ceph/ceph:v18, name=busy_cannon, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:40 np0005464891 systemd[1]: libpod-conmon-d34d9ed22653359574d1adf14dc4eda0b0e85df4ded918d2d42a9c420ecabdd8.scope: Deactivated successfully.
Oct  1 12:16:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Oct  1 12:16:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3965361240' entity='client.rgw.rgw.compute-0.zdecaf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  1 12:16:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Oct  1 12:16:40 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Oct  1 12:16:40 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 51 pg[10.0( empty local-lis/les=50/51 n=0 ec=50/50 lis/c=0/0 les/c/f=0/0/0 sis=50) [2] r=0 lpr=50 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:40 np0005464891 podman[101344]: 2025-10-01 16:16:40.384841084 +0000 UTC m=+0.073231159 container create fe66fb4337a36de6c529c62992f1d8bbf465337d789f6a16d8046d457bc7a235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hodgkin, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:16:40 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 2.1e deep-scrub starts
Oct  1 12:16:40 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 2.1e deep-scrub ok
Oct  1 12:16:40 np0005464891 podman[101344]: 2025-10-01 16:16:40.340817924 +0000 UTC m=+0.029208059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:40 np0005464891 systemd[1]: Started libpod-conmon-fe66fb4337a36de6c529c62992f1d8bbf465337d789f6a16d8046d457bc7a235.scope.
Oct  1 12:16:40 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:40 np0005464891 podman[101344]: 2025-10-01 16:16:40.50167843 +0000 UTC m=+0.190068475 container init fe66fb4337a36de6c529c62992f1d8bbf465337d789f6a16d8046d457bc7a235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  1 12:16:40 np0005464891 podman[101344]: 2025-10-01 16:16:40.509411728 +0000 UTC m=+0.197801773 container start fe66fb4337a36de6c529c62992f1d8bbf465337d789f6a16d8046d457bc7a235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hodgkin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct  1 12:16:40 np0005464891 sweet_hodgkin[101360]: 167 167
Oct  1 12:16:40 np0005464891 systemd[1]: libpod-fe66fb4337a36de6c529c62992f1d8bbf465337d789f6a16d8046d457bc7a235.scope: Deactivated successfully.
Oct  1 12:16:40 np0005464891 podman[101344]: 2025-10-01 16:16:40.585207949 +0000 UTC m=+0.273597994 container attach fe66fb4337a36de6c529c62992f1d8bbf465337d789f6a16d8046d457bc7a235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  1 12:16:40 np0005464891 podman[101344]: 2025-10-01 16:16:40.586204298 +0000 UTC m=+0.274594363 container died fe66fb4337a36de6c529c62992f1d8bbf465337d789f6a16d8046d457bc7a235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Oct  1 12:16:40 np0005464891 systemd[1]: var-lib-containers-storage-overlay-4bee197f6420aa6426b38d79e0a79f8e3a9ac370bb6bd06b796c0ec56ab427d7-merged.mount: Deactivated successfully.
Oct  1 12:16:40 np0005464891 podman[101344]: 2025-10-01 16:16:40.896977888 +0000 UTC m=+0.585367953 container remove fe66fb4337a36de6c529c62992f1d8bbf465337d789f6a16d8046d457bc7a235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hodgkin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct  1 12:16:40 np0005464891 systemd[1]: libpod-conmon-fe66fb4337a36de6c529c62992f1d8bbf465337d789f6a16d8046d457bc7a235.scope: Deactivated successfully.
Oct  1 12:16:40 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/3965361240' entity='client.rgw.rgw.compute-0.zdecaf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  1 12:16:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:16:41 np0005464891 podman[101425]: 2025-10-01 16:16:41.116498897 +0000 UTC m=+0.083454379 container create 54371b221da61553344310060ea4b3d9dae19c6f7c877f9ab33cdaa59e31c8bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_antonelli, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:41 np0005464891 python3[101419]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:16:41 np0005464891 podman[101425]: 2025-10-01 16:16:41.07712356 +0000 UTC m=+0.044079122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Oct  1 12:16:41 np0005464891 systemd[1]: Started libpod-conmon-54371b221da61553344310060ea4b3d9dae19c6f7c877f9ab33cdaa59e31c8bc.scope.
Oct  1 12:16:41 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:41 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f970bd3e4292606b2941d8629e896141a952fe5f537aad83cb56d4d3006ff6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:41 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f970bd3e4292606b2941d8629e896141a952fe5f537aad83cb56d4d3006ff6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:41 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f970bd3e4292606b2941d8629e896141a952fe5f537aad83cb56d4d3006ff6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:41 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f970bd3e4292606b2941d8629e896141a952fe5f537aad83cb56d4d3006ff6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:41 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f970bd3e4292606b2941d8629e896141a952fe5f537aad83cb56d4d3006ff6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:41 np0005464891 podman[101439]: 2025-10-01 16:16:41.201218651 +0000 UTC m=+0.056951167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:16:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Oct  1 12:16:41 np0005464891 podman[101439]: 2025-10-01 16:16:41.458931054 +0000 UTC m=+0.314663560 container create 7d689a5b29f76cb6a3be4f7413b8885d42d4a12a3ba6c0317420c7e666ce4e1d (image=quay.io/ceph/ceph:v18, name=reverent_shamir, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:41 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Oct  1 12:16:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Oct  1 12:16:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/207909724' entity='client.rgw.rgw.compute-0.zdecaf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  1 12:16:41 np0005464891 podman[101425]: 2025-10-01 16:16:41.669931953 +0000 UTC m=+0.636887445 container init 54371b221da61553344310060ea4b3d9dae19c6f7c877f9ab33cdaa59e31c8bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_antonelli, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 12:16:41 np0005464891 podman[101425]: 2025-10-01 16:16:41.676587571 +0000 UTC m=+0.643543043 container start 54371b221da61553344310060ea4b3d9dae19c6f7c877f9ab33cdaa59e31c8bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:16:41 np0005464891 podman[101425]: 2025-10-01 16:16:41.733409094 +0000 UTC m=+0.700364586 container attach 54371b221da61553344310060ea4b3d9dae19c6f7c877f9ab33cdaa59e31c8bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_antonelli, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:16:41 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=0/0 n=0 ec=52/52 lis/c=0/0 les/c/f=0/0/0 sis=52) [1] r=0 lpr=52 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:16:41 np0005464891 systemd[1]: Started libpod-conmon-7d689a5b29f76cb6a3be4f7413b8885d42d4a12a3ba6c0317420c7e666ce4e1d.scope.
Oct  1 12:16:41 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:41 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f56cd66c1b25dad1d61e9da2b73111536accaee07e8b28151f1672a66110c44/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:41 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f56cd66c1b25dad1d61e9da2b73111536accaee07e8b28151f1672a66110c44/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v118: 197 pgs: 2 unknown, 1 creating+peering, 194 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:16:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:16:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:16:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:16:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:16:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:16:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:16:42 np0005464891 podman[101439]: 2025-10-01 16:16:42.068325068 +0000 UTC m=+0.924057634 container init 7d689a5b29f76cb6a3be4f7413b8885d42d4a12a3ba6c0317420c7e666ce4e1d (image=quay.io/ceph/ceph:v18, name=reverent_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:42 np0005464891 podman[101439]: 2025-10-01 16:16:42.07439037 +0000 UTC m=+0.930122916 container start 7d689a5b29f76cb6a3be4f7413b8885d42d4a12a3ba6c0317420c7e666ce4e1d (image=quay.io/ceph/ceph:v18, name=reverent_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 12:16:42 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/207909724' entity='client.rgw.rgw.compute-0.zdecaf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  1 12:16:42 np0005464891 podman[101439]: 2025-10-01 16:16:42.085937998 +0000 UTC m=+0.941670534 container attach 7d689a5b29f76cb6a3be4f7413b8885d42d4a12a3ba6c0317420c7e666ce4e1d (image=quay.io/ceph/ceph:v18, name=reverent_shamir, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 12:16:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Oct  1 12:16:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/207909724' entity='client.rgw.rgw.compute-0.zdecaf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  1 12:16:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Oct  1 12:16:42 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Oct  1 12:16:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Oct  1 12:16:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/207909724' entity='client.rgw.rgw.compute-0.zdecaf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  1 12:16:42 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 53 pg[11.0( empty local-lis/les=52/53 n=0 ec=52/52 lis/c=0/0 les/c/f=0/0/0 sis=52) [1] r=0 lpr=52 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:16:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  1 12:16:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3678568004' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  1 12:16:42 np0005464891 reverent_shamir[101460]: 
Oct  1 12:16:42 np0005464891 reverent_shamir[101460]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","
can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.zdecaf","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Oct  1 12:16:42 np0005464891 systemd[1]: libpod-7d689a5b29f76cb6a3be4f7413b8885d42d4a12a3ba6c0317420c7e666ce4e1d.scope: Deactivated successfully.
Oct  1 12:16:42 np0005464891 podman[101439]: 2025-10-01 16:16:42.681706335 +0000 UTC m=+1.537438851 container died 7d689a5b29f76cb6a3be4f7413b8885d42d4a12a3ba6c0317420c7e666ce4e1d (image=quay.io/ceph/ceph:v18, name=reverent_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:16:42 np0005464891 systemd[1]: var-lib-containers-storage-overlay-9f56cd66c1b25dad1d61e9da2b73111536accaee07e8b28151f1672a66110c44-merged.mount: Deactivated successfully.
Oct  1 12:16:42 np0005464891 podman[101439]: 2025-10-01 16:16:42.749567271 +0000 UTC m=+1.605299797 container remove 7d689a5b29f76cb6a3be4f7413b8885d42d4a12a3ba6c0317420c7e666ce4e1d (image=quay.io/ceph/ceph:v18, name=reverent_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 12:16:42 np0005464891 systemd[1]: libpod-conmon-7d689a5b29f76cb6a3be4f7413b8885d42d4a12a3ba6c0317420c7e666ce4e1d.scope: Deactivated successfully.
Oct  1 12:16:42 np0005464891 hardcore_antonelli[101453]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:16:42 np0005464891 hardcore_antonelli[101453]: --> relative data size: 1.0
Oct  1 12:16:42 np0005464891 hardcore_antonelli[101453]: --> All data devices are unavailable
Oct  1 12:16:42 np0005464891 systemd[1]: libpod-54371b221da61553344310060ea4b3d9dae19c6f7c877f9ab33cdaa59e31c8bc.scope: Deactivated successfully.
Oct  1 12:16:42 np0005464891 systemd[1]: libpod-54371b221da61553344310060ea4b3d9dae19c6f7c877f9ab33cdaa59e31c8bc.scope: Consumed 1.175s CPU time.
Oct  1 12:16:42 np0005464891 conmon[101453]: conmon 54371b221da615533443 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-54371b221da61553344310060ea4b3d9dae19c6f7c877f9ab33cdaa59e31c8bc.scope/container/memory.events
Oct  1 12:16:42 np0005464891 podman[101425]: 2025-10-01 16:16:42.916489628 +0000 UTC m=+1.883445100 container died 54371b221da61553344310060ea4b3d9dae19c6f7c877f9ab33cdaa59e31c8bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 12:16:42 np0005464891 systemd[1]: var-lib-containers-storage-overlay-19f970bd3e4292606b2941d8629e896141a952fe5f537aad83cb56d4d3006ff6-merged.mount: Deactivated successfully.
Oct  1 12:16:42 np0005464891 podman[101425]: 2025-10-01 16:16:42.96344395 +0000 UTC m=+1.930399422 container remove 54371b221da61553344310060ea4b3d9dae19c6f7c877f9ab33cdaa59e31c8bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_antonelli, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:16:42 np0005464891 systemd[1]: libpod-conmon-54371b221da61553344310060ea4b3d9dae19c6f7c877f9ab33cdaa59e31c8bc.scope: Deactivated successfully.
Oct  1 12:16:43 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Oct  1 12:16:43 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Oct  1 12:16:43 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Oct  1 12:16:43 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Oct  1 12:16:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Oct  1 12:16:43 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/207909724' entity='client.rgw.rgw.compute-0.zdecaf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  1 12:16:43 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/207909724' entity='client.rgw.rgw.compute-0.zdecaf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  1 12:16:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/207909724' entity='client.rgw.rgw.compute-0.zdecaf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  1 12:16:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Oct  1 12:16:43 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Oct  1 12:16:43 np0005464891 radosgw[100033]: LDAP not started since no server URIs were provided in the configuration.
Oct  1 12:16:43 np0005464891 radosgw[100033]: framework: beast
Oct  1 12:16:43 np0005464891 radosgw[100033]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct  1 12:16:43 np0005464891 radosgw[100033]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct  1 12:16:43 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-rgw-rgw-compute-0-zdecaf[100029]: 2025-10-01T16:16:43.587+0000 7f4b6600c940 -1 LDAP not started since no server URIs were provided in the configuration.
Oct  1 12:16:43 np0005464891 radosgw[100033]: starting handler: beast
Oct  1 12:16:43 np0005464891 radosgw[100033]: set uid:gid to 167:167 (ceph:ceph)
Oct  1 12:16:43 np0005464891 radosgw[100033]: mgrc service_daemon_register rgw.14275 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.zdecaf,kernel_description=#1 SMP PREEMPT_DYNAMIC Mon Sep 15 21:46:13 UTC 2025,kernel_version=5.14.0-617.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864116,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=6b8cee3f-5fda-44bf-9071-d68c3bf52fc1,zone_name=default,zonegroup_id=3a1e669c-ab47-4b3a-989b-0d352fc496ce,zonegroup_name=default}
Oct  1 12:16:43 np0005464891 podman[102046]: 2025-10-01 16:16:43.713718011 +0000 UTC m=+0.038846683 container create 6091594b6631470bd936d9a23553f4376ebf2bee379af092f0b0e8a8a9518b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 12:16:43 np0005464891 python3[101717]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:16:43 np0005464891 systemd[1]: Started libpod-conmon-6091594b6631470bd936d9a23553f4376ebf2bee379af092f0b0e8a8a9518b58.scope.
Oct  1 12:16:43 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:43 np0005464891 podman[102046]: 2025-10-01 16:16:43.697444149 +0000 UTC m=+0.022572841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:43 np0005464891 podman[102046]: 2025-10-01 16:16:43.800869925 +0000 UTC m=+0.125998617 container init 6091594b6631470bd936d9a23553f4376ebf2bee379af092f0b0e8a8a9518b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_brown, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 12:16:43 np0005464891 podman[102046]: 2025-10-01 16:16:43.809351775 +0000 UTC m=+0.134480457 container start 6091594b6631470bd936d9a23553f4376ebf2bee379af092f0b0e8a8a9518b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_brown, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:16:43 np0005464891 thirsty_brown[102267]: 167 167
Oct  1 12:16:43 np0005464891 podman[102046]: 2025-10-01 16:16:43.817083755 +0000 UTC m=+0.142212447 container attach 6091594b6631470bd936d9a23553f4376ebf2bee379af092f0b0e8a8a9518b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_brown, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 12:16:43 np0005464891 systemd[1]: libpod-6091594b6631470bd936d9a23553f4376ebf2bee379af092f0b0e8a8a9518b58.scope: Deactivated successfully.
Oct  1 12:16:43 np0005464891 podman[102046]: 2025-10-01 16:16:43.818357331 +0000 UTC m=+0.143486003 container died 6091594b6631470bd936d9a23553f4376ebf2bee379af092f0b0e8a8a9518b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_brown, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 12:16:43 np0005464891 podman[102262]: 2025-10-01 16:16:43.841139688 +0000 UTC m=+0.084335274 container create fa0efbc8ce02a85d456dc48d7dfe420cb5a2b499f37755e179d22041ab1f04aa (image=quay.io/ceph/ceph:v18, name=nervous_kowalevski, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:16:43 np0005464891 systemd[1]: var-lib-containers-storage-overlay-8d62822439f48331db42a67fb38ac1f5e647f6f4cbb1b720fb4779d93b6dbf41-merged.mount: Deactivated successfully.
Oct  1 12:16:43 np0005464891 podman[102046]: 2025-10-01 16:16:43.883406737 +0000 UTC m=+0.208535419 container remove 6091594b6631470bd936d9a23553f4376ebf2bee379af092f0b0e8a8a9518b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:16:43 np0005464891 podman[102262]: 2025-10-01 16:16:43.79015839 +0000 UTC m=+0.033354016 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:16:43 np0005464891 systemd[1]: Started libpod-conmon-fa0efbc8ce02a85d456dc48d7dfe420cb5a2b499f37755e179d22041ab1f04aa.scope.
Oct  1 12:16:43 np0005464891 systemd[1]: libpod-conmon-6091594b6631470bd936d9a23553f4376ebf2bee379af092f0b0e8a8a9518b58.scope: Deactivated successfully.
Oct  1 12:16:43 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:43 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1142d9c8db3dd1414c14a12939c7b5f1040fca3e733f718e5118e4c4a4dc53e4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:43 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1142d9c8db3dd1414c14a12939c7b5f1040fca3e733f718e5118e4c4a4dc53e4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:43 np0005464891 podman[102262]: 2025-10-01 16:16:43.943030149 +0000 UTC m=+0.186225725 container init fa0efbc8ce02a85d456dc48d7dfe420cb5a2b499f37755e179d22041ab1f04aa (image=quay.io/ceph/ceph:v18, name=nervous_kowalevski, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:16:43 np0005464891 podman[102262]: 2025-10-01 16:16:43.95186178 +0000 UTC m=+0.195057336 container start fa0efbc8ce02a85d456dc48d7dfe420cb5a2b499f37755e179d22041ab1f04aa (image=quay.io/ceph/ceph:v18, name=nervous_kowalevski, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Oct  1 12:16:43 np0005464891 podman[102262]: 2025-10-01 16:16:43.956969945 +0000 UTC m=+0.200165501 container attach fa0efbc8ce02a85d456dc48d7dfe420cb5a2b499f37755e179d22041ab1f04aa (image=quay.io/ceph/ceph:v18, name=nervous_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 12:16:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v121: 197 pgs: 197 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.0 KiB/s wr, 12 op/s
Oct  1 12:16:44 np0005464891 podman[102308]: 2025-10-01 16:16:44.079640966 +0000 UTC m=+0.043661900 container create 558d97c30c3628c70756c11fcfafb23124c75be6e746468dd11f4f53ce6d68a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sanderson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct  1 12:16:44 np0005464891 systemd[1]: Started libpod-conmon-558d97c30c3628c70756c11fcfafb23124c75be6e746468dd11f4f53ce6d68a1.scope.
Oct  1 12:16:44 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:44 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/348fa8a1277d1990b0fdf84792570780d632706ce965fcea7131457178134e33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:44 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/348fa8a1277d1990b0fdf84792570780d632706ce965fcea7131457178134e33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:44 np0005464891 podman[102308]: 2025-10-01 16:16:44.061534962 +0000 UTC m=+0.025555916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:44 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/348fa8a1277d1990b0fdf84792570780d632706ce965fcea7131457178134e33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:44 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/348fa8a1277d1990b0fdf84792570780d632706ce965fcea7131457178134e33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:44 np0005464891 podman[102308]: 2025-10-01 16:16:44.177891924 +0000 UTC m=+0.141912888 container init 558d97c30c3628c70756c11fcfafb23124c75be6e746468dd11f4f53ce6d68a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sanderson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 12:16:44 np0005464891 podman[102308]: 2025-10-01 16:16:44.193163137 +0000 UTC m=+0.157184081 container start 558d97c30c3628c70756c11fcfafb23124c75be6e746468dd11f4f53ce6d68a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sanderson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:16:44 np0005464891 podman[102308]: 2025-10-01 16:16:44.196920844 +0000 UTC m=+0.160941808 container attach 558d97c30c3628c70756c11fcfafb23124c75be6e746468dd11f4f53ce6d68a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sanderson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 12:16:44 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Oct  1 12:16:44 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Oct  1 12:16:44 np0005464891 ceph-mon[74303]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  1 12:16:44 np0005464891 ceph-mon[74303]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  1 12:16:44 np0005464891 ceph-mon[74303]: from='client.? 192.168.122.100:0/207909724' entity='client.rgw.rgw.compute-0.zdecaf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  1 12:16:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Oct  1 12:16:44 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/367779321' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct  1 12:16:44 np0005464891 nervous_kowalevski[102296]: mimic
Oct  1 12:16:44 np0005464891 systemd[1]: libpod-fa0efbc8ce02a85d456dc48d7dfe420cb5a2b499f37755e179d22041ab1f04aa.scope: Deactivated successfully.
Oct  1 12:16:44 np0005464891 podman[102262]: 2025-10-01 16:16:44.559864254 +0000 UTC m=+0.803059840 container died fa0efbc8ce02a85d456dc48d7dfe420cb5a2b499f37755e179d22041ab1f04aa (image=quay.io/ceph/ceph:v18, name=nervous_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  1 12:16:44 np0005464891 systemd[1]: var-lib-containers-storage-overlay-1142d9c8db3dd1414c14a12939c7b5f1040fca3e733f718e5118e4c4a4dc53e4-merged.mount: Deactivated successfully.
Oct  1 12:16:44 np0005464891 podman[102262]: 2025-10-01 16:16:44.624010774 +0000 UTC m=+0.867206370 container remove fa0efbc8ce02a85d456dc48d7dfe420cb5a2b499f37755e179d22041ab1f04aa (image=quay.io/ceph/ceph:v18, name=nervous_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:16:44 np0005464891 systemd[1]: libpod-conmon-fa0efbc8ce02a85d456dc48d7dfe420cb5a2b499f37755e179d22041ab1f04aa.scope: Deactivated successfully.
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]: {
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:    "0": [
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:        {
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "devices": [
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "/dev/loop3"
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            ],
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "lv_name": "ceph_lv0",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "lv_size": "21470642176",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "name": "ceph_lv0",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "tags": {
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.cluster_name": "ceph",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.crush_device_class": "",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.encrypted": "0",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.osd_id": "0",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.type": "block",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.vdo": "0"
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            },
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "type": "block",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "vg_name": "ceph_vg0"
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:        }
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:    ],
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:    "1": [
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:        {
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "devices": [
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "/dev/loop4"
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            ],
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "lv_name": "ceph_lv1",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "lv_size": "21470642176",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "name": "ceph_lv1",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "tags": {
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.cluster_name": "ceph",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.crush_device_class": "",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.encrypted": "0",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.osd_id": "1",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.type": "block",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.vdo": "0"
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            },
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "type": "block",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "vg_name": "ceph_vg1"
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:        }
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:    ],
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:    "2": [
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:        {
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "devices": [
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "/dev/loop5"
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            ],
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "lv_name": "ceph_lv2",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "lv_size": "21470642176",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "name": "ceph_lv2",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "tags": {
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.cluster_name": "ceph",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.crush_device_class": "",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.encrypted": "0",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.osd_id": "2",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.type": "block",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:                "ceph.vdo": "0"
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            },
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "type": "block",
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:            "vg_name": "ceph_vg2"
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:        }
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]:    ]
Oct  1 12:16:44 np0005464891 busy_sanderson[102325]: }
Oct  1 12:16:45 np0005464891 systemd[1]: libpod-558d97c30c3628c70756c11fcfafb23124c75be6e746468dd11f4f53ce6d68a1.scope: Deactivated successfully.
Oct  1 12:16:45 np0005464891 podman[102308]: 2025-10-01 16:16:45.008536266 +0000 UTC m=+0.972557270 container died 558d97c30c3628c70756c11fcfafb23124c75be6e746468dd11f4f53ce6d68a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:16:45 np0005464891 systemd[1]: var-lib-containers-storage-overlay-348fa8a1277d1990b0fdf84792570780d632706ce965fcea7131457178134e33-merged.mount: Deactivated successfully.
Oct  1 12:16:45 np0005464891 podman[102308]: 2025-10-01 16:16:45.061833358 +0000 UTC m=+1.025854292 container remove 558d97c30c3628c70756c11fcfafb23124c75be6e746468dd11f4f53ce6d68a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sanderson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:45 np0005464891 systemd[1]: libpod-conmon-558d97c30c3628c70756c11fcfafb23124c75be6e746468dd11f4f53ce6d68a1.scope: Deactivated successfully.
Oct  1 12:16:45 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 5.a scrub starts
Oct  1 12:16:45 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 5.a scrub ok
Oct  1 12:16:45 np0005464891 ceph-mon[74303]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  1 12:16:45 np0005464891 ceph-mon[74303]: Cluster is now healthy
Oct  1 12:16:45 np0005464891 python3[102505]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:16:45 np0005464891 podman[102513]: 2025-10-01 16:16:45.674111924 +0000 UTC m=+0.065950993 container create d90b9b4d70f69241e0b5fdff6fcbb438ea56644e3466b3ffa6433dc511f75124 (image=quay.io/ceph/ceph:v18, name=intelligent_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 12:16:45 np0005464891 podman[102513]: 2025-10-01 16:16:45.639713238 +0000 UTC m=+0.031552387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:16:45 np0005464891 systemd[1]: Started libpod-conmon-d90b9b4d70f69241e0b5fdff6fcbb438ea56644e3466b3ffa6433dc511f75124.scope.
Oct  1 12:16:45 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:45 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf19b83542b3229b58dc3fa566b0c495baad97e4740b5531db69150517cdbea/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:45 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf19b83542b3229b58dc3fa566b0c495baad97e4740b5531db69150517cdbea/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:45 np0005464891 podman[102513]: 2025-10-01 16:16:45.792070942 +0000 UTC m=+0.183910011 container init d90b9b4d70f69241e0b5fdff6fcbb438ea56644e3466b3ffa6433dc511f75124 (image=quay.io/ceph/ceph:v18, name=intelligent_ritchie, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:16:45 np0005464891 podman[102513]: 2025-10-01 16:16:45.801826029 +0000 UTC m=+0.193665088 container start d90b9b4d70f69241e0b5fdff6fcbb438ea56644e3466b3ffa6433dc511f75124 (image=quay.io/ceph/ceph:v18, name=intelligent_ritchie, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 12:16:45 np0005464891 podman[102513]: 2025-10-01 16:16:45.805623546 +0000 UTC m=+0.197462585 container attach d90b9b4d70f69241e0b5fdff6fcbb438ea56644e3466b3ffa6433dc511f75124 (image=quay.io/ceph/ceph:v18, name=intelligent_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:16:45 np0005464891 podman[102566]: 2025-10-01 16:16:45.970612979 +0000 UTC m=+0.052958534 container create e56af3e218e073c4931d9c4377f11098b906b3eb3c96110b758c88dc8015f694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  1 12:16:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v122: 197 pgs: 197 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 177 B/s rd, 2.8 KiB/s wr, 8 op/s
Oct  1 12:16:46 np0005464891 systemd[1]: Started libpod-conmon-e56af3e218e073c4931d9c4377f11098b906b3eb3c96110b758c88dc8015f694.scope.
Oct  1 12:16:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:16:46 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:46 np0005464891 podman[102566]: 2025-10-01 16:16:45.953905754 +0000 UTC m=+0.036251279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:46 np0005464891 podman[102566]: 2025-10-01 16:16:46.051122493 +0000 UTC m=+0.133468068 container init e56af3e218e073c4931d9c4377f11098b906b3eb3c96110b758c88dc8015f694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 12:16:46 np0005464891 podman[102566]: 2025-10-01 16:16:46.062850436 +0000 UTC m=+0.145195961 container start e56af3e218e073c4931d9c4377f11098b906b3eb3c96110b758c88dc8015f694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:46 np0005464891 hopeful_austin[102583]: 167 167
Oct  1 12:16:46 np0005464891 systemd[1]: libpod-e56af3e218e073c4931d9c4377f11098b906b3eb3c96110b758c88dc8015f694.scope: Deactivated successfully.
Oct  1 12:16:46 np0005464891 podman[102566]: 2025-10-01 16:16:46.068560018 +0000 UTC m=+0.150905583 container attach e56af3e218e073c4931d9c4377f11098b906b3eb3c96110b758c88dc8015f694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct  1 12:16:46 np0005464891 podman[102566]: 2025-10-01 16:16:46.069156175 +0000 UTC m=+0.151501710 container died e56af3e218e073c4931d9c4377f11098b906b3eb3c96110b758c88dc8015f694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 12:16:46 np0005464891 systemd[1]: var-lib-containers-storage-overlay-c8e0bb7146ee66e27120a5ae67d3d4e13b3d20f174c6764314fa89451499f791-merged.mount: Deactivated successfully.
Oct  1 12:16:46 np0005464891 podman[102566]: 2025-10-01 16:16:46.1056376 +0000 UTC m=+0.187983135 container remove e56af3e218e073c4931d9c4377f11098b906b3eb3c96110b758c88dc8015f694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:46 np0005464891 systemd[1]: libpod-conmon-e56af3e218e073c4931d9c4377f11098b906b3eb3c96110b758c88dc8015f694.scope: Deactivated successfully.
Oct  1 12:16:46 np0005464891 podman[102625]: 2025-10-01 16:16:46.265378673 +0000 UTC m=+0.039977865 container create 4b41850774703d6fade73641c1b57100c60257ebe8cc2e9fb7a70096d712d272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_yonath, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:16:46 np0005464891 systemd[1]: Started libpod-conmon-4b41850774703d6fade73641c1b57100c60257ebe8cc2e9fb7a70096d712d272.scope.
Oct  1 12:16:46 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/885c3884d6139ea55e6a092fe112c562b92c339eb301d398ce572a9aa7efc424/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/885c3884d6139ea55e6a092fe112c562b92c339eb301d398ce572a9aa7efc424/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/885c3884d6139ea55e6a092fe112c562b92c339eb301d398ce572a9aa7efc424/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/885c3884d6139ea55e6a092fe112c562b92c339eb301d398ce572a9aa7efc424/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:46 np0005464891 podman[102625]: 2025-10-01 16:16:46.24625552 +0000 UTC m=+0.020854752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:46 np0005464891 podman[102625]: 2025-10-01 16:16:46.350323514 +0000 UTC m=+0.124922726 container init 4b41850774703d6fade73641c1b57100c60257ebe8cc2e9fb7a70096d712d272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 12:16:46 np0005464891 podman[102625]: 2025-10-01 16:16:46.35686671 +0000 UTC m=+0.131465902 container start 4b41850774703d6fade73641c1b57100c60257ebe8cc2e9fb7a70096d712d272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_yonath, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:16:46 np0005464891 podman[102625]: 2025-10-01 16:16:46.3614805 +0000 UTC m=+0.136079722 container attach 4b41850774703d6fade73641c1b57100c60257ebe8cc2e9fb7a70096d712d272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_yonath, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:16:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Oct  1 12:16:46 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3882476398' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct  1 12:16:46 np0005464891 intelligent_ritchie[102548]: 
Oct  1 12:16:46 np0005464891 intelligent_ritchie[102548]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"rgw":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":7}}
Oct  1 12:16:46 np0005464891 systemd[1]: libpod-d90b9b4d70f69241e0b5fdff6fcbb438ea56644e3466b3ffa6433dc511f75124.scope: Deactivated successfully.
Oct  1 12:16:46 np0005464891 podman[102513]: 2025-10-01 16:16:46.423287914 +0000 UTC m=+0.815126983 container died d90b9b4d70f69241e0b5fdff6fcbb438ea56644e3466b3ffa6433dc511f75124 (image=quay.io/ceph/ceph:v18, name=intelligent_ritchie, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 12:16:46 np0005464891 systemd[1]: var-lib-containers-storage-overlay-7bf19b83542b3229b58dc3fa566b0c495baad97e4740b5531db69150517cdbea-merged.mount: Deactivated successfully.
Oct  1 12:16:46 np0005464891 podman[102513]: 2025-10-01 16:16:46.477629367 +0000 UTC m=+0.869468406 container remove d90b9b4d70f69241e0b5fdff6fcbb438ea56644e3466b3ffa6433dc511f75124 (image=quay.io/ceph/ceph:v18, name=intelligent_ritchie, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 12:16:46 np0005464891 systemd[1]: libpod-conmon-d90b9b4d70f69241e0b5fdff6fcbb438ea56644e3466b3ffa6433dc511f75124.scope: Deactivated successfully.
Oct  1 12:16:47 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.c scrub starts
Oct  1 12:16:47 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.c scrub ok
Oct  1 12:16:47 np0005464891 clever_yonath[102641]: {
Oct  1 12:16:47 np0005464891 clever_yonath[102641]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:16:47 np0005464891 clever_yonath[102641]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:16:47 np0005464891 clever_yonath[102641]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:16:47 np0005464891 clever_yonath[102641]:        "osd_id": 2,
Oct  1 12:16:47 np0005464891 clever_yonath[102641]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:16:47 np0005464891 clever_yonath[102641]:        "type": "bluestore"
Oct  1 12:16:47 np0005464891 clever_yonath[102641]:    },
Oct  1 12:16:47 np0005464891 clever_yonath[102641]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:16:47 np0005464891 clever_yonath[102641]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:16:47 np0005464891 clever_yonath[102641]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:16:47 np0005464891 clever_yonath[102641]:        "osd_id": 0,
Oct  1 12:16:47 np0005464891 clever_yonath[102641]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:16:47 np0005464891 clever_yonath[102641]:        "type": "bluestore"
Oct  1 12:16:47 np0005464891 clever_yonath[102641]:    },
Oct  1 12:16:47 np0005464891 clever_yonath[102641]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:16:47 np0005464891 clever_yonath[102641]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:16:47 np0005464891 clever_yonath[102641]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:16:47 np0005464891 clever_yonath[102641]:        "osd_id": 1,
Oct  1 12:16:47 np0005464891 clever_yonath[102641]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:16:47 np0005464891 clever_yonath[102641]:        "type": "bluestore"
Oct  1 12:16:47 np0005464891 clever_yonath[102641]:    }
Oct  1 12:16:47 np0005464891 clever_yonath[102641]: }
Oct  1 12:16:47 np0005464891 systemd[1]: libpod-4b41850774703d6fade73641c1b57100c60257ebe8cc2e9fb7a70096d712d272.scope: Deactivated successfully.
Oct  1 12:16:47 np0005464891 podman[102625]: 2025-10-01 16:16:47.528411166 +0000 UTC m=+1.303010398 container died 4b41850774703d6fade73641c1b57100c60257ebe8cc2e9fb7a70096d712d272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_yonath, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:16:47 np0005464891 systemd[1]: libpod-4b41850774703d6fade73641c1b57100c60257ebe8cc2e9fb7a70096d712d272.scope: Consumed 1.167s CPU time.
Oct  1 12:16:47 np0005464891 systemd[1]: var-lib-containers-storage-overlay-885c3884d6139ea55e6a092fe112c562b92c339eb301d398ce572a9aa7efc424-merged.mount: Deactivated successfully.
Oct  1 12:16:47 np0005464891 podman[102625]: 2025-10-01 16:16:47.604572968 +0000 UTC m=+1.379172160 container remove 4b41850774703d6fade73641c1b57100c60257ebe8cc2e9fb7a70096d712d272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_yonath, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct  1 12:16:47 np0005464891 systemd[1]: libpod-conmon-4b41850774703d6fade73641c1b57100c60257ebe8cc2e9fb7a70096d712d272.scope: Deactivated successfully.
Oct  1 12:16:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:16:47 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:16:47 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:47 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 7528ea22-3bf2-4dc4-a496-ae42a7147174 does not exist
Oct  1 12:16:47 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 35b5ca14-2223-4dae-bdef-e2323bdbe242 does not exist
Oct  1 12:16:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v123: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 7.5 KiB/s wr, 179 op/s
Oct  1 12:16:48 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 7.b scrub starts
Oct  1 12:16:48 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 7.b scrub ok
Oct  1 12:16:48 np0005464891 podman[102925]: 2025-10-01 16:16:48.687735636 +0000 UTC m=+0.070161022 container exec 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 12:16:48 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:48 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:48 np0005464891 podman[102925]: 2025-10-01 16:16:48.798813957 +0000 UTC m=+0.181239243 container exec_died 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  1 12:16:49 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 7.d scrub starts
Oct  1 12:16:49 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 7.d scrub ok
Oct  1 12:16:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:16:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:16:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:16:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:16:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:16:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:16:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:16:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:49 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 953d33a8-2d12-4f01-a93c-134727991ac4 does not exist
Oct  1 12:16:49 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 28f2b3a2-7c93-4e93-9a8d-e22930b1643b does not exist
Oct  1 12:16:49 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 87696a1c-e6d2-4de5-83e6-2eedd8f22892 does not exist
Oct  1 12:16:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:16:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:16:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:16:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:16:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:16:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:16:49 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:49 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:49 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:16:49 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:49 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:16:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v124: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 6.0 KiB/s wr, 143 op/s
Oct  1 12:16:50 np0005464891 podman[103226]: 2025-10-01 16:16:50.482417785 +0000 UTC m=+0.061615809 container create c36b06f9536d0247101bd7e0c192bd81a53e622bb819550c381f749701d0c79b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_johnson, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:16:50 np0005464891 systemd[1]: Started libpod-conmon-c36b06f9536d0247101bd7e0c192bd81a53e622bb819550c381f749701d0c79b.scope.
Oct  1 12:16:50 np0005464891 podman[103226]: 2025-10-01 16:16:50.452125397 +0000 UTC m=+0.031323511 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:50 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:50 np0005464891 podman[103226]: 2025-10-01 16:16:50.567792489 +0000 UTC m=+0.146990603 container init c36b06f9536d0247101bd7e0c192bd81a53e622bb819550c381f749701d0c79b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:50 np0005464891 podman[103226]: 2025-10-01 16:16:50.577038591 +0000 UTC m=+0.156236655 container start c36b06f9536d0247101bd7e0c192bd81a53e622bb819550c381f749701d0c79b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_johnson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 12:16:50 np0005464891 podman[103226]: 2025-10-01 16:16:50.58229732 +0000 UTC m=+0.161495384 container attach c36b06f9536d0247101bd7e0c192bd81a53e622bb819550c381f749701d0c79b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_johnson, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  1 12:16:50 np0005464891 flamboyant_johnson[103242]: 167 167
Oct  1 12:16:50 np0005464891 systemd[1]: libpod-c36b06f9536d0247101bd7e0c192bd81a53e622bb819550c381f749701d0c79b.scope: Deactivated successfully.
Oct  1 12:16:50 np0005464891 podman[103226]: 2025-10-01 16:16:50.587966211 +0000 UTC m=+0.167164315 container died c36b06f9536d0247101bd7e0c192bd81a53e622bb819550c381f749701d0c79b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_johnson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 12:16:50 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ccda782bba1e255817b3be2b0f135e866fc0cf43b6acd97f4ef3c173c2e84357-merged.mount: Deactivated successfully.
Oct  1 12:16:50 np0005464891 podman[103226]: 2025-10-01 16:16:50.634550763 +0000 UTC m=+0.213748797 container remove c36b06f9536d0247101bd7e0c192bd81a53e622bb819550c381f749701d0c79b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:16:50 np0005464891 systemd[1]: libpod-conmon-c36b06f9536d0247101bd7e0c192bd81a53e622bb819550c381f749701d0c79b.scope: Deactivated successfully.
Oct  1 12:16:50 np0005464891 podman[103267]: 2025-10-01 16:16:50.832628854 +0000 UTC m=+0.060345493 container create dc1910da617d5dffbe586bacaaba462866ae57300d7f243f2ef817c9a5b9d4a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_tharp, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 12:16:50 np0005464891 systemd[1]: Started libpod-conmon-dc1910da617d5dffbe586bacaaba462866ae57300d7f243f2ef817c9a5b9d4a1.scope.
Oct  1 12:16:50 np0005464891 podman[103267]: 2025-10-01 16:16:50.803212689 +0000 UTC m=+0.030929418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:50 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:50 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9fda22948408921224d672368347f91d0f1cb5cee10a9b49c0ab0b7a553398/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:50 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9fda22948408921224d672368347f91d0f1cb5cee10a9b49c0ab0b7a553398/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:50 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9fda22948408921224d672368347f91d0f1cb5cee10a9b49c0ab0b7a553398/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:50 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9fda22948408921224d672368347f91d0f1cb5cee10a9b49c0ab0b7a553398/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:50 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9fda22948408921224d672368347f91d0f1cb5cee10a9b49c0ab0b7a553398/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:50 np0005464891 podman[103267]: 2025-10-01 16:16:50.92727377 +0000 UTC m=+0.154990489 container init dc1910da617d5dffbe586bacaaba462866ae57300d7f243f2ef817c9a5b9d4a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:16:50 np0005464891 podman[103267]: 2025-10-01 16:16:50.941857093 +0000 UTC m=+0.169573762 container start dc1910da617d5dffbe586bacaaba462866ae57300d7f243f2ef817c9a5b9d4a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_tharp, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:16:50 np0005464891 podman[103267]: 2025-10-01 16:16:50.94667856 +0000 UTC m=+0.174395239 container attach dc1910da617d5dffbe586bacaaba462866ae57300d7f243f2ef817c9a5b9d4a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_tharp, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 12:16:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:16:51 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Oct  1 12:16:51 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Oct  1 12:16:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v125: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 5.0 KiB/s wr, 120 op/s
Oct  1 12:16:52 np0005464891 nifty_tharp[103283]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:16:52 np0005464891 nifty_tharp[103283]: --> relative data size: 1.0
Oct  1 12:16:52 np0005464891 nifty_tharp[103283]: --> All data devices are unavailable
Oct  1 12:16:52 np0005464891 systemd[1]: libpod-dc1910da617d5dffbe586bacaaba462866ae57300d7f243f2ef817c9a5b9d4a1.scope: Deactivated successfully.
Oct  1 12:16:52 np0005464891 podman[103267]: 2025-10-01 16:16:52.075923046 +0000 UTC m=+1.303639685 container died dc1910da617d5dffbe586bacaaba462866ae57300d7f243f2ef817c9a5b9d4a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_tharp, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:52 np0005464891 systemd[1]: libpod-dc1910da617d5dffbe586bacaaba462866ae57300d7f243f2ef817c9a5b9d4a1.scope: Consumed 1.088s CPU time.
Oct  1 12:16:52 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ba9fda22948408921224d672368347f91d0f1cb5cee10a9b49c0ab0b7a553398-merged.mount: Deactivated successfully.
Oct  1 12:16:52 np0005464891 podman[103267]: 2025-10-01 16:16:52.132574204 +0000 UTC m=+1.360290853 container remove dc1910da617d5dffbe586bacaaba462866ae57300d7f243f2ef817c9a5b9d4a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_tharp, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:16:52 np0005464891 systemd[1]: libpod-conmon-dc1910da617d5dffbe586bacaaba462866ae57300d7f243f2ef817c9a5b9d4a1.scope: Deactivated successfully.
Oct  1 12:16:52 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Oct  1 12:16:52 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Oct  1 12:16:52 np0005464891 podman[103465]: 2025-10-01 16:16:52.9545459 +0000 UTC m=+0.064013647 container create e8d40de7fdb2ad822b1bececffdc0d24db10f91dfb0bbeb1d5c7ccc7dd38d716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  1 12:16:53 np0005464891 systemd[1]: Started libpod-conmon-e8d40de7fdb2ad822b1bececffdc0d24db10f91dfb0bbeb1d5c7ccc7dd38d716.scope.
Oct  1 12:16:53 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:53 np0005464891 podman[103465]: 2025-10-01 16:16:52.93377928 +0000 UTC m=+0.043247047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:53 np0005464891 podman[103465]: 2025-10-01 16:16:53.055359081 +0000 UTC m=+0.164826808 container init e8d40de7fdb2ad822b1bececffdc0d24db10f91dfb0bbeb1d5c7ccc7dd38d716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:16:53 np0005464891 podman[103465]: 2025-10-01 16:16:53.062439282 +0000 UTC m=+0.171907009 container start e8d40de7fdb2ad822b1bececffdc0d24db10f91dfb0bbeb1d5c7ccc7dd38d716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_rhodes, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:53 np0005464891 podman[103465]: 2025-10-01 16:16:53.065436157 +0000 UTC m=+0.174903894 container attach e8d40de7fdb2ad822b1bececffdc0d24db10f91dfb0bbeb1d5c7ccc7dd38d716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_rhodes, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 12:16:53 np0005464891 pedantic_rhodes[103481]: 167 167
Oct  1 12:16:53 np0005464891 systemd[1]: libpod-e8d40de7fdb2ad822b1bececffdc0d24db10f91dfb0bbeb1d5c7ccc7dd38d716.scope: Deactivated successfully.
Oct  1 12:16:53 np0005464891 podman[103465]: 2025-10-01 16:16:53.067717022 +0000 UTC m=+0.177184769 container died e8d40de7fdb2ad822b1bececffdc0d24db10f91dfb0bbeb1d5c7ccc7dd38d716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_rhodes, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:53 np0005464891 systemd[1]: var-lib-containers-storage-overlay-f45b1039ffcb7957984d296f17608bcfe3247c1b035002547b53b0f401276e8f-merged.mount: Deactivated successfully.
Oct  1 12:16:53 np0005464891 podman[103465]: 2025-10-01 16:16:53.111823293 +0000 UTC m=+0.221291010 container remove e8d40de7fdb2ad822b1bececffdc0d24db10f91dfb0bbeb1d5c7ccc7dd38d716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 12:16:53 np0005464891 systemd[1]: libpod-conmon-e8d40de7fdb2ad822b1bececffdc0d24db10f91dfb0bbeb1d5c7ccc7dd38d716.scope: Deactivated successfully.
Oct  1 12:16:53 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Oct  1 12:16:53 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Oct  1 12:16:53 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Oct  1 12:16:53 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Oct  1 12:16:53 np0005464891 podman[103505]: 2025-10-01 16:16:53.3890365 +0000 UTC m=+0.120372017 container create 8211855990ac150523b9dd4dbdcc892deb2933b355b0ddbd935fd18514f9d3ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_gould, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:16:53 np0005464891 podman[103505]: 2025-10-01 16:16:53.311588652 +0000 UTC m=+0.042924169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:53 np0005464891 systemd[1]: Started libpod-conmon-8211855990ac150523b9dd4dbdcc892deb2933b355b0ddbd935fd18514f9d3ae.scope.
Oct  1 12:16:53 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:53 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4da24b4c8222a08ba5a82b72da21fb09bc91dffb350bda08805af492f3bf0ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:53 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4da24b4c8222a08ba5a82b72da21fb09bc91dffb350bda08805af492f3bf0ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:53 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4da24b4c8222a08ba5a82b72da21fb09bc91dffb350bda08805af492f3bf0ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:53 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4da24b4c8222a08ba5a82b72da21fb09bc91dffb350bda08805af492f3bf0ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:53 np0005464891 podman[103505]: 2025-10-01 16:16:53.493039562 +0000 UTC m=+0.224375079 container init 8211855990ac150523b9dd4dbdcc892deb2933b355b0ddbd935fd18514f9d3ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_gould, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 12:16:53 np0005464891 podman[103505]: 2025-10-01 16:16:53.510437235 +0000 UTC m=+0.241772732 container start 8211855990ac150523b9dd4dbdcc892deb2933b355b0ddbd935fd18514f9d3ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_gould, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 12:16:53 np0005464891 podman[103505]: 2025-10-01 16:16:53.532166582 +0000 UTC m=+0.263502089 container attach 8211855990ac150523b9dd4dbdcc892deb2933b355b0ddbd935fd18514f9d3ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_gould, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v126: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.0 KiB/s wr, 104 op/s
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]: {
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:    "0": [
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:        {
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "devices": [
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "/dev/loop3"
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            ],
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "lv_name": "ceph_lv0",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "lv_size": "21470642176",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "name": "ceph_lv0",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "tags": {
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.cluster_name": "ceph",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.crush_device_class": "",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.encrypted": "0",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.osd_id": "0",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.type": "block",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.vdo": "0"
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            },
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "type": "block",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "vg_name": "ceph_vg0"
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:        }
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:    ],
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:    "1": [
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:        {
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "devices": [
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "/dev/loop4"
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            ],
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "lv_name": "ceph_lv1",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "lv_size": "21470642176",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "name": "ceph_lv1",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "tags": {
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.cluster_name": "ceph",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.crush_device_class": "",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.encrypted": "0",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.osd_id": "1",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.type": "block",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.vdo": "0"
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            },
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "type": "block",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "vg_name": "ceph_vg1"
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:        }
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:    ],
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:    "2": [
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:        {
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "devices": [
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "/dev/loop5"
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            ],
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "lv_name": "ceph_lv2",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "lv_size": "21470642176",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "name": "ceph_lv2",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "tags": {
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.cluster_name": "ceph",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.crush_device_class": "",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.encrypted": "0",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.osd_id": "2",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.type": "block",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:                "ceph.vdo": "0"
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            },
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "type": "block",
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:            "vg_name": "ceph_vg2"
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:        }
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]:    ]
Oct  1 12:16:54 np0005464891 quizzical_gould[103522]: }
Oct  1 12:16:54 np0005464891 systemd[1]: libpod-8211855990ac150523b9dd4dbdcc892deb2933b355b0ddbd935fd18514f9d3ae.scope: Deactivated successfully.
Oct  1 12:16:54 np0005464891 podman[103505]: 2025-10-01 16:16:54.304122899 +0000 UTC m=+1.035458416 container died 8211855990ac150523b9dd4dbdcc892deb2933b355b0ddbd935fd18514f9d3ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 12:16:54 np0005464891 systemd[1]: var-lib-containers-storage-overlay-c4da24b4c8222a08ba5a82b72da21fb09bc91dffb350bda08805af492f3bf0ab-merged.mount: Deactivated successfully.
Oct  1 12:16:54 np0005464891 podman[103505]: 2025-10-01 16:16:54.370973646 +0000 UTC m=+1.102309173 container remove 8211855990ac150523b9dd4dbdcc892deb2933b355b0ddbd935fd18514f9d3ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_gould, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:16:54 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 5.b scrub starts
Oct  1 12:16:54 np0005464891 systemd[1]: libpod-conmon-8211855990ac150523b9dd4dbdcc892deb2933b355b0ddbd935fd18514f9d3ae.scope: Deactivated successfully.
Oct  1 12:16:54 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 5.b scrub ok
Oct  1 12:16:55 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Oct  1 12:16:55 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Oct  1 12:16:55 np0005464891 podman[103685]: 2025-10-01 16:16:55.223413607 +0000 UTC m=+0.067626340 container create 1f2fbbf90d9413a213df3b2f6a60998fbf495467e7352ed74cfbaa7507846996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 12:16:55 np0005464891 systemd[1]: Started libpod-conmon-1f2fbbf90d9413a213df3b2f6a60998fbf495467e7352ed74cfbaa7507846996.scope.
Oct  1 12:16:55 np0005464891 podman[103685]: 2025-10-01 16:16:55.196119942 +0000 UTC m=+0.040332715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:55 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:55 np0005464891 podman[103685]: 2025-10-01 16:16:55.31019513 +0000 UTC m=+0.154407903 container init 1f2fbbf90d9413a213df3b2f6a60998fbf495467e7352ed74cfbaa7507846996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lovelace, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:16:55 np0005464891 podman[103685]: 2025-10-01 16:16:55.315669665 +0000 UTC m=+0.159882358 container start 1f2fbbf90d9413a213df3b2f6a60998fbf495467e7352ed74cfbaa7507846996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 12:16:55 np0005464891 podman[103685]: 2025-10-01 16:16:55.318682321 +0000 UTC m=+0.162895034 container attach 1f2fbbf90d9413a213df3b2f6a60998fbf495467e7352ed74cfbaa7507846996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lovelace, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:16:55 np0005464891 compassionate_lovelace[103701]: 167 167
Oct  1 12:16:55 np0005464891 systemd[1]: libpod-1f2fbbf90d9413a213df3b2f6a60998fbf495467e7352ed74cfbaa7507846996.scope: Deactivated successfully.
Oct  1 12:16:55 np0005464891 podman[103685]: 2025-10-01 16:16:55.322644403 +0000 UTC m=+0.166857096 container died 1f2fbbf90d9413a213df3b2f6a60998fbf495467e7352ed74cfbaa7507846996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lovelace, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 12:16:55 np0005464891 systemd[1]: var-lib-containers-storage-overlay-8170ab42fa95d27df49afd661be58fd921a2ea6dfeeb75d9a30af535742c8c08-merged.mount: Deactivated successfully.
Oct  1 12:16:55 np0005464891 podman[103685]: 2025-10-01 16:16:55.351588654 +0000 UTC m=+0.195801357 container remove 1f2fbbf90d9413a213df3b2f6a60998fbf495467e7352ed74cfbaa7507846996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:16:55 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 5.d scrub starts
Oct  1 12:16:55 np0005464891 systemd[1]: libpod-conmon-1f2fbbf90d9413a213df3b2f6a60998fbf495467e7352ed74cfbaa7507846996.scope: Deactivated successfully.
Oct  1 12:16:55 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 5.d scrub ok
Oct  1 12:16:55 np0005464891 podman[103725]: 2025-10-01 16:16:55.55828386 +0000 UTC m=+0.061902018 container create 19edd733d5004951b12ae5b002d258661ee96afdfb24c47b4967889f0696c8c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_burnell, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:16:55 np0005464891 systemd[1]: Started libpod-conmon-19edd733d5004951b12ae5b002d258661ee96afdfb24c47b4967889f0696c8c3.scope.
Oct  1 12:16:55 np0005464891 podman[103725]: 2025-10-01 16:16:55.526754255 +0000 UTC m=+0.030372513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:16:55 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:16:55 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d28728b5ff2b71a44389f248db2d676985ad1b847c7d811ceb55c325818028d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:55 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d28728b5ff2b71a44389f248db2d676985ad1b847c7d811ceb55c325818028d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:55 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d28728b5ff2b71a44389f248db2d676985ad1b847c7d811ceb55c325818028d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:55 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d28728b5ff2b71a44389f248db2d676985ad1b847c7d811ceb55c325818028d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:16:55 np0005464891 podman[103725]: 2025-10-01 16:16:55.656674962 +0000 UTC m=+0.160293130 container init 19edd733d5004951b12ae5b002d258661ee96afdfb24c47b4967889f0696c8c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 12:16:55 np0005464891 podman[103725]: 2025-10-01 16:16:55.664440422 +0000 UTC m=+0.168058610 container start 19edd733d5004951b12ae5b002d258661ee96afdfb24c47b4967889f0696c8c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_burnell, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 12:16:55 np0005464891 podman[103725]: 2025-10-01 16:16:55.668366034 +0000 UTC m=+0.171984262 container attach 19edd733d5004951b12ae5b002d258661ee96afdfb24c47b4967889f0696c8c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 12:16:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v127: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 91 op/s
Oct  1 12:16:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:16:56 np0005464891 gallant_burnell[103741]: {
Oct  1 12:16:56 np0005464891 gallant_burnell[103741]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:16:56 np0005464891 gallant_burnell[103741]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:16:56 np0005464891 gallant_burnell[103741]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:16:56 np0005464891 gallant_burnell[103741]:        "osd_id": 2,
Oct  1 12:16:56 np0005464891 gallant_burnell[103741]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:16:56 np0005464891 gallant_burnell[103741]:        "type": "bluestore"
Oct  1 12:16:56 np0005464891 gallant_burnell[103741]:    },
Oct  1 12:16:56 np0005464891 gallant_burnell[103741]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:16:56 np0005464891 gallant_burnell[103741]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:16:56 np0005464891 gallant_burnell[103741]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:16:56 np0005464891 gallant_burnell[103741]:        "osd_id": 0,
Oct  1 12:16:56 np0005464891 gallant_burnell[103741]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:16:56 np0005464891 gallant_burnell[103741]:        "type": "bluestore"
Oct  1 12:16:56 np0005464891 gallant_burnell[103741]:    },
Oct  1 12:16:56 np0005464891 gallant_burnell[103741]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:16:56 np0005464891 gallant_burnell[103741]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:16:56 np0005464891 gallant_burnell[103741]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:16:56 np0005464891 gallant_burnell[103741]:        "osd_id": 1,
Oct  1 12:16:56 np0005464891 gallant_burnell[103741]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:16:56 np0005464891 gallant_burnell[103741]:        "type": "bluestore"
Oct  1 12:16:56 np0005464891 gallant_burnell[103741]:    }
Oct  1 12:16:56 np0005464891 gallant_burnell[103741]: }
Oct  1 12:16:56 np0005464891 systemd[1]: libpod-19edd733d5004951b12ae5b002d258661ee96afdfb24c47b4967889f0696c8c3.scope: Deactivated successfully.
Oct  1 12:16:56 np0005464891 podman[103725]: 2025-10-01 16:16:56.813235503 +0000 UTC m=+1.316853651 container died 19edd733d5004951b12ae5b002d258661ee96afdfb24c47b4967889f0696c8c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_burnell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 12:16:56 np0005464891 systemd[1]: libpod-19edd733d5004951b12ae5b002d258661ee96afdfb24c47b4967889f0696c8c3.scope: Consumed 1.153s CPU time.
Oct  1 12:16:56 np0005464891 systemd[1]: var-lib-containers-storage-overlay-6d28728b5ff2b71a44389f248db2d676985ad1b847c7d811ceb55c325818028d-merged.mount: Deactivated successfully.
Oct  1 12:16:56 np0005464891 podman[103725]: 2025-10-01 16:16:56.881930333 +0000 UTC m=+1.385548491 container remove 19edd733d5004951b12ae5b002d258661ee96afdfb24c47b4967889f0696c8c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  1 12:16:56 np0005464891 systemd[1]: libpod-conmon-19edd733d5004951b12ae5b002d258661ee96afdfb24c47b4967889f0696c8c3.scope: Deactivated successfully.
Oct  1 12:16:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:16:56 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:16:56 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:56 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev cf39421c-3b94-42fc-a62c-e80869a1a24d does not exist
Oct  1 12:16:56 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 4e1174a1-f0d3-40d8-b600-6fe8db9ac73c does not exist
Oct  1 12:16:56 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:56 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:16:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v128: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 91 op/s
Oct  1 12:16:59 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Oct  1 12:16:59 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Oct  1 12:16:59 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 5.e scrub starts
Oct  1 12:16:59 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 5.e scrub ok
Oct  1 12:16:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v129: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:17:00 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Oct  1 12:17:00 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Oct  1 12:17:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:17:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v130: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:17:02 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Oct  1 12:17:02 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Oct  1 12:17:02 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Oct  1 12:17:02 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Oct  1 12:17:03 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Oct  1 12:17:03 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.1d deep-scrub starts
Oct  1 12:17:03 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Oct  1 12:17:03 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.1d deep-scrub ok
Oct  1 12:17:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v131: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:17:04 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Oct  1 12:17:04 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Oct  1 12:17:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v132: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:17:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:17:06 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Oct  1 12:17:06 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Oct  1 12:17:06 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Oct  1 12:17:06 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Oct  1 12:17:07 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Oct  1 12:17:07 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Oct  1 12:17:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v133: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:17:08 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Oct  1 12:17:08 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Oct  1 12:17:08 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Oct  1 12:17:08 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Oct  1 12:17:09 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Oct  1 12:17:09 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Oct  1 12:17:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v134: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:17:10 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Oct  1 12:17:10 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Oct  1 12:17:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:17:11 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 5.1f deep-scrub starts
Oct  1 12:17:11 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 5.1f deep-scrub ok
Oct  1 12:17:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:17:11
Oct  1 12:17:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:17:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:17:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'backups', 'images', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control']
Oct  1 12:17:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:17:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v135: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:17:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:17:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:17:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:17:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:17:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:17:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:17:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:17:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:17:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:17:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:17:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:17:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:17:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:17:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:17:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:17:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:17:13 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct  1 12:17:13 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct  1 12:17:13 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Oct  1 12:17:13 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Oct  1 12:17:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v136: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:17:14 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Oct  1 12:17:14 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Oct  1 12:17:15 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 6.1e deep-scrub starts
Oct  1 12:17:15 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 6.1e deep-scrub ok
Oct  1 12:17:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v137: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:17:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:17:16 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 4.d deep-scrub starts
Oct  1 12:17:16 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 4.d deep-scrub ok
Oct  1 12:17:16 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Oct  1 12:17:16 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Oct  1 12:17:17 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:17:17 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:17:17 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:17:17 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:17:17 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:17:17 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:17:17 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:17:17 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:17:17 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:17:17 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:17:17 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:17:17 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:17:17 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:17:17 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:17:17 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:17:17 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:17:17 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Oct  1 12:17:17 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:17:17 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Oct  1 12:17:17 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:17:17 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  1 12:17:17 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:17:17 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Oct  1 12:17:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Oct  1 12:17:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 12:17:17 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 6.c scrub starts
Oct  1 12:17:17 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 6.c scrub ok
Oct  1 12:17:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v138: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:17:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Oct  1 12:17:18 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 12:17:18 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 6.d scrub starts
Oct  1 12:17:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct  1 12:17:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Oct  1 12:17:18 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Oct  1 12:17:18 np0005464891 ceph-mgr[74592]: [progress INFO root] update: starting ev 80e297c8-d59b-46b2-aacb-9d6fedaf5321 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct  1 12:17:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Oct  1 12:17:18 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 6.d scrub ok
Oct  1 12:17:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 12:17:18 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 4.e scrub starts
Oct  1 12:17:18 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 4.e scrub ok
Oct  1 12:17:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Oct  1 12:17:19 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct  1 12:17:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Oct  1 12:17:19 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Oct  1 12:17:19 np0005464891 ceph-mgr[74592]: [progress INFO root] update: starting ev d4087662-35a9-4aad-b8be-630a5344dc9e (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct  1 12:17:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Oct  1 12:17:19 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 12:17:19 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct  1 12:17:19 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 12:17:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v141: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:17:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  1 12:17:19 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 12:17:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  1 12:17:19 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 12:17:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Oct  1 12:17:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct  1 12:17:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 12:17:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 12:17:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Oct  1 12:17:20 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Oct  1 12:17:20 np0005464891 ceph-mgr[74592]: [progress INFO root] update: starting ev d7c53d81-c6a9-4eec-91c7-2fcf7a475645 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct  1 12:17:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Oct  1 12:17:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 12:17:20 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct  1 12:17:20 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 12:17:20 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 12:17:20 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 12:17:20 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct  1 12:17:20 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 12:17:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:17:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Oct  1 12:17:21 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Oct  1 12:17:21 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Oct  1 12:17:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct  1 12:17:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Oct  1 12:17:21 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Oct  1 12:17:21 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 57 pg[9.0( v 54'385 (0'0,54'385] local-lis/les=48/49 n=177 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=57 pruub=12.692100525s) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 54'384 mlcod 54'384 active pruub 132.813034058s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:21 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 57 pg[8.0( v 47'4 (0'0,47'4] local-lis/les=46/47 n=4 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=57 pruub=10.671419144s) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 47'3 mlcod 47'3 active pruub 130.792846680s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:21 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 12:17:21 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 12:17:21 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 57 pg[8.0( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=57 pruub=10.671419144s) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 47'3 mlcod 0'0 unknown pruub 130.792846680s@ mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:21 np0005464891 ceph-mgr[74592]: [progress INFO root] update: starting ev 222b524b-b7b0-41b4-99c3-d7938a9b3ab8 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct  1 12:17:21 np0005464891 ceph-mgr[74592]: [progress INFO root] complete: finished ev 80e297c8-d59b-46b2-aacb-9d6fedaf5321 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct  1 12:17:21 np0005464891 ceph-mgr[74592]: [progress INFO root] Completed event 80e297c8-d59b-46b2-aacb-9d6fedaf5321 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Oct  1 12:17:21 np0005464891 ceph-mgr[74592]: [progress INFO root] complete: finished ev d4087662-35a9-4aad-b8be-630a5344dc9e (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct  1 12:17:21 np0005464891 ceph-mgr[74592]: [progress INFO root] Completed event d4087662-35a9-4aad-b8be-630a5344dc9e (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Oct  1 12:17:21 np0005464891 ceph-mgr[74592]: [progress INFO root] complete: finished ev d7c53d81-c6a9-4eec-91c7-2fcf7a475645 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct  1 12:17:21 np0005464891 ceph-mgr[74592]: [progress INFO root] Completed event d7c53d81-c6a9-4eec-91c7-2fcf7a475645 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Oct  1 12:17:21 np0005464891 ceph-mgr[74592]: [progress INFO root] complete: finished ev 222b524b-b7b0-41b4-99c3-d7938a9b3ab8 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct  1 12:17:21 np0005464891 ceph-mgr[74592]: [progress INFO root] Completed event 222b524b-b7b0-41b4-99c3-d7938a9b3ab8 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Oct  1 12:17:21 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 57 pg[9.0( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=57 pruub=12.692100525s) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 54'384 mlcod 0'0 unknown pruub 132.813034058s@ mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v144: 259 pgs: 62 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:17:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  1 12:17:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 12:17:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  1 12:17:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 12:17:22 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Oct  1 12:17:22 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Oct  1 12:17:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Oct  1 12:17:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 12:17:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 12:17:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Oct  1 12:17:22 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct  1 12:17:22 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 12:17:22 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 12:17:22 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Oct  1 12:17:22 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 59 pg[10.0( v 51'16 (0'0,51'16] local-lis/les=50/51 n=8 ec=50/50 lis/c=50/50 les/c/f=51/51/0 sis=59 pruub=13.710514069s) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 51'15 mlcod 51'15 active pruub 129.074310303s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:22 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 59 pg[10.0( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=50/50 lis/c=50/50 les/c/f=51/51/0 sis=59 pruub=13.710514069s) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 51'15 mlcod 0'0 unknown pruub 129.074310303s@ mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.14( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.15( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.15( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.14( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.17( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.17( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.16( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.11( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.10( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.16( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.1( v 47'4 (0'0,47'4] local-lis/les=46/47 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.2( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.3( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.3( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[11.0( empty local-lis/les=52/53 n=0 ec=52/52 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=15.937201500s) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active pruub 137.093124390s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.c( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.2( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.d( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.d( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.c( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.f( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.e( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.8( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.9( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.a( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.b( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.e( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.b( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.f( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.a( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.9( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.8( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.1( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.6( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.7( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.6( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.7( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.4( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.5( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.4( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.1a( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.18( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.1b( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.5( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.19( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.19( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.18( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.1e( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.1f( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.1f( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.1e( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.1c( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.1d( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.1d( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.1c( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.13( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.12( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.12( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.13( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.11( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.10( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.1b( v 54'385 lc 0'0 (0'0,54'385] local-lis/les=48/49 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.14( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.1a( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[11.0( empty local-lis/les=52/53 n=0 ec=52/52 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=15.937201500s) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown pruub 137.093124390s@ mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.14( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.17( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.0( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 54'384 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.1( v 47'4 (0'0,47'4] local-lis/les=57/59 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.10( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.2( v 47'4 (0'0,47'4] local-lis/les=57/59 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.3( v 47'4 (0'0,47'4] local-lis/les=57/59 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.15( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.c( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.d( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.2( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.8( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.16( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.e( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.a( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.b( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.9( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.0( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 47'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.f( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.a( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.7( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.6( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.4( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.1a( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.5( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.4( v 47'4 (0'0,47'4] local-lis/les=57/59 n=1 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.19( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.1b( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.18( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.1f( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.1e( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.1c( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.1d( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.13( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.12( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.12( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[9.10( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=48/48 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[48,57)/1 crt=54'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.11( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 59 pg[8.1a( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 4.f scrub starts
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 4.f scrub ok
Oct  1 12:17:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Oct  1 12:17:23 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 12:17:23 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 12:17:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Oct  1 12:17:23 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.17( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.16( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.15( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.14( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.13( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.2( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.1( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.f( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.d( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.b( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.9( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.c( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.8( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.a( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.d( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.3( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.1e( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.4( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.5( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.b( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.e( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.a( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.6( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.7( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.13( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.18( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.11( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.1a( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.12( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.1b( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.1b( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.10( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.1c( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.1f( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.1d( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.1e( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.1d( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.1f( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.1a( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.10( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.11( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.1c( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.19( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.12( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.18( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.19( empty local-lis/les=52/53 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.7( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.5( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.6( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.4( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.8( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.f( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.9( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.c( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.e( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.1( v 51'16 (0'0,51'16] local-lis/les=50/51 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.2( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.14( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.3( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.15( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.16( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.17( v 51'16 lc 0'0 (0'0,51'16] local-lis/les=50/51 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.17( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.d( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.13( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.14( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.15( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.2( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.1( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.0( empty local-lis/les=59/60 n=0 ec=52/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.16( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.f( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.d( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.9( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.b( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.8( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.c( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.a( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.3( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.4( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.6( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.5( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.7( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.18( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.1a( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.1b( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.e( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.1d( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.a( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.1e( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.1e( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.11( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.13( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.1f( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.b( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.12( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.11( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.19( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.12( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.1c( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.10( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.1b( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 60 pg[11.10( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.1f( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.1a( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.19( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.7( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.1c( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.1d( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.18( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.5( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.6( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.8( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.4( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.e( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.0( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=50/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 51'15 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.9( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.f( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.c( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.14( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.1( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.3( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.15( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.16( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.17( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 60 pg[10.2( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=50/50 les/c/f=51/51/0 sis=59) [2] r=0 lpr=59 pi=[50,59)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:23 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Oct  1 12:17:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v147: 321 pgs: 2 peering, 62 unknown, 257 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:17:24 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Oct  1 12:17:24 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Oct  1 12:17:24 np0005464891 ceph-mgr[74592]: [progress INFO root] Writing back 15 completed events
Oct  1 12:17:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  1 12:17:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:17:24 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Oct  1 12:17:24 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Oct  1 12:17:24 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:17:25 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Oct  1 12:17:25 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Oct  1 12:17:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v148: 321 pgs: 2 peering, 62 unknown, 257 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:17:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:17:26 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Oct  1 12:17:26 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Oct  1 12:17:26 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 4.a scrub starts
Oct  1 12:17:26 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 4.a scrub ok
Oct  1 12:17:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v149: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:17:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  1 12:17:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 12:17:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  1 12:17:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 12:17:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Oct  1 12:17:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  1 12:17:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  1 12:17:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Oct  1 12:17:28 np0005464891 python3[103863]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Oct  1 12:17:28 np0005464891 podman[103864]: 2025-10-01 16:17:28.546322133 +0000 UTC m=+0.054280965 container create c0709745fc7cec3e68acc784481449751084803d5be7843de5bf826097816838 (image=quay.io/ceph/ceph:v18, name=vigorous_swirles, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Oct  1 12:17:28 np0005464891 systemd[1]: Started libpod-conmon-c0709745fc7cec3e68acc784481449751084803d5be7843de5bf826097816838.scope.
Oct  1 12:17:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Oct  1 12:17:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 12:17:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 12:17:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  1 12:17:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 12:17:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Oct  1 12:17:28 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Oct  1 12:17:28 np0005464891 podman[103864]: 2025-10-01 16:17:28.519915266 +0000 UTC m=+0.027874148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.17( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.939772606s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 138.183197021s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.17( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.939542770s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.183197021s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.924491882s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 137.168167114s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.14( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.918928146s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 137.162658691s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.924308777s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 137.168151855s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.15( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.924540520s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 137.168457031s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.15( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.947396278s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 138.191421509s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.923236847s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.168167114s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.14( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.917706490s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.162658691s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.15( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.923456192s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.168457031s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 12:17:28 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.14( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.946147919s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 138.191375732s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.15( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.946212769s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.191421509s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.14( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.946118355s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.191375732s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.922756195s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.168151855s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.10( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.922872543s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 137.168380737s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.10( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.922839165s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.168380737s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.2( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.945765495s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 138.191452026s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.2( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.945742607s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.191452026s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.922448158s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 137.168334961s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.922423363s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.168334961s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.922411919s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 137.168426514s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.922357559s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.168426514s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.1( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.945354462s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 138.191543579s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.1( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.945331573s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.191543579s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.f( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.945152283s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 138.191650391s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.2( v 47'4 (0'0,47'4] local-lis/les=57/59 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.921885490s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 137.168411255s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.f( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.945057869s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.191650391s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.c( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.921656609s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 137.168426514s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.c( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.921633720s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.168426514s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.2( v 47'4 (0'0,47'4] local-lis/les=57/59 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.921779633s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.168411255s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.921533585s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 137.168472290s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.921483994s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.168472290s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.e( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.945019722s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 138.192123413s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.e( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.945004463s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.192123413s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.d( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.944370270s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 138.191650391s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.d( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.921222687s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 137.168518066s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.d( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.944349289s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.191650391s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.d( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.921195030s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.168518066s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.e( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.921171188s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 137.168624878s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.e( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.921141624s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.168624878s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.b( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.944065094s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 138.191696167s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.b( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.944034576s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.191696167s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.920863152s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 137.168624878s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.920842171s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.168624878s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.9( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.943904877s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 138.191696167s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.9( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.943863869s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.191696167s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.920711517s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 137.168594360s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.920615196s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 137.168670654s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.f( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.920723915s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 137.168746948s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.920602798s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.168670654s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.920603752s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.168594360s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.f( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.920657158s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.168746948s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.8( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.943602562s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 138.191741943s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.b( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.920415878s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 137.168685913s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.b( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.920395851s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.168685913s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.9( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.920198441s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 137.168701172s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.9( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.920153618s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.168701172s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.8( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.943461418s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.191741943s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.6( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.920055389s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 137.168792725s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.3( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.942989349s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 138.191818237s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.6( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.920026779s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.168792725s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.3( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.942964554s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.191818237s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.6( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.942880630s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 138.191879272s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.6( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.942858696s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.191879272s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.4( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.942877769s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 138.191818237s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.4( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.942682266s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.191818237s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[8.10( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[9.11( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[8.b( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.919395447s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 137.168807983s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.919345856s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.168807983s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.18( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.942294121s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 138.192001343s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.18( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.942265511s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.192001343s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.919120789s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 137.168884277s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.919093132s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.168884277s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.1b( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.923987389s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 137.173965454s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.1b( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.923958778s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.173965454s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.1a( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.941866875s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 138.192001343s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.1a( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.941835403s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.192001343s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.1b( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.941783905s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 138.192062378s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.923690796s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 137.173980713s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.923667908s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.173980713s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.1b( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.941754341s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.192062378s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.1c( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.941970825s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 138.192459106s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.1c( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.941952705s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.192459106s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.18( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.923437119s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 137.173965454s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.18( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.923408508s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.173965454s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.1f( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.923420906s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 137.174057007s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.1f( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.923404694s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.174057007s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.923316956s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 137.174041748s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.923565865s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 137.173904419s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.923290253s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.174041748s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.1e( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.941482544s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 138.192276001s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.1e( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.941466331s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.192276001s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.923106194s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.173904419s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.1d( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.923379898s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 137.174301147s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.1f( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.941398621s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 138.192337036s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.1d( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.923362732s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.174301147s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.1f( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.941370010s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.192337036s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.923191071s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 137.174224854s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.923163414s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.174224854s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.4( v 47'4 (0'0,47'4] local-lis/les=57/59 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.922667503s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 137.173873901s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.4( v 47'4 (0'0,47'4] local-lis/les=57/59 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.922641754s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.173873901s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.1c( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.923057556s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 137.174255371s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.1c( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.922969818s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.174255371s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.11( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.941020966s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 138.192337036s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.11( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.940907478s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.192337036s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.923211098s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 137.174667358s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.923183441s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.174667358s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[9.b( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[11.14( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.12( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.940696716s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 138.192382812s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.11( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.923059464s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 137.174819946s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.11( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.923020363s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.174819946s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.12( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.940573692s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.192382812s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[8.9( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[9.9( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.12( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.922561646s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 137.174667358s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.922677994s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 137.174743652s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.19( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.940378189s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 138.192413330s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.12( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.922542572s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.174667358s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.922605515s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.174743652s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.19( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.940257072s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.192413330s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.10( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.940168381s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 138.192352295s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[11.10( empty local-lis/les=59/60 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.940153122s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 138.192352295s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.1a( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.922541618s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active pruub 137.174880981s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[8.1a( v 47'4 (0'0,47'4] local-lis/les=57/59 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61 pruub=9.922514915s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.174880981s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[8.f( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[11.e( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[8.e( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[11.f( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[8.c( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[9.d( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[9.3( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[11.1( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[11.17( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[8.14( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[8.15( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[11.15( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[11.2( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[11.3( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[8.2( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e0815e5e982633ec3ece10103b835e316e8b0e9eac643c5beb2b89332b0941/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:17:28 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e0815e5e982633ec3ece10103b835e316e8b0e9eac643c5beb2b89332b0941/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[11.d( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[11.8( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[8.d( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[11.9( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[8.4( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[11.18( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[8.1b( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[8.6( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[11.6( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[11.4( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[9.1( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[9.5( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[8.18( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[8.1f( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[11.1b( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[8.1d( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[9.1d( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[9.1b( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[11.10( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[11.1c( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[10.1e( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[8.1a( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[11.1e( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[11.11( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[8.12( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[8.11( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[11.12( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[11.b( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[11.1a( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[11.19( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[11.1f( empty local-lis/les=0/0 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[8.1c( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.1e( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.931172371s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 132.405593872s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.1e( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.931138992s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.405593872s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.d( v 60'17 (0'0,60'17] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.922579765s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 51'16 mlcod 51'16 active pruub 132.397232056s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.d( v 60'17 (0'0,60'17] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.922539711s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 51'16 mlcod 0'0 unknown NOTIFY pruub 132.397232056s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.b( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.930814743s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 132.405685425s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.b( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.930786133s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.405685425s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.13( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.930560112s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 132.405624390s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.13( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.930533409s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.405624390s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.12( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.930160522s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 132.405746460s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.12( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.930126190s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.405746460s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.11( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929961205s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 132.405700684s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.10( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929968834s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 132.405761719s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.11( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929898262s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.405700684s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.10( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929903030s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.405761719s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.1a( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929775238s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 132.405838013s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.19( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929780006s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 132.405868530s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.1a( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929742813s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.405838013s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[10.d( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.19( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929733276s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.405868530s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.7( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929557800s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 132.405914307s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.6( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.930036545s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 132.406402588s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.6( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929998398s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.406402588s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[10.7( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.7( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929508209s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.405914307s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.4( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929920197s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 132.406417847s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.4( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929883957s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.406417847s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.8( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929779053s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 132.406433105s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[10.4( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[10.8( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.8( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929747581s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.406433105s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.f( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929925919s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 132.406661987s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[10.9( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.f( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929880142s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.406661987s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.9( v 60'17 (0'0,60'17] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929780960s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 51'16 mlcod 51'16 active pruub 132.406585693s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.9( v 60'17 (0'0,60'17] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929715157s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 51'16 mlcod 0'0 unknown NOTIFY pruub 132.406585693s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.e( v 60'17 (0'0,60'17] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929579735s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 51'16 mlcod 51'16 active pruub 132.406494141s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[10.e( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.e( v 60'17 (0'0,60'17] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929542542s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 51'16 mlcod 0'0 unknown NOTIFY pruub 132.406494141s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[10.1( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.1( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929653168s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 132.406753540s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.2( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.930622101s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 132.407775879s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.1( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929609299s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.406753540s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.2( v 51'16 (0'0,51'16] local-lis/les=59/60 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.930594444s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.407775879s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.14( v 60'17 (0'0,60'17] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929151535s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 51'16 mlcod 51'16 active pruub 132.406677246s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[10.15( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.14( v 60'17 (0'0,60'17] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929089546s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 51'16 mlcod 0'0 unknown NOTIFY pruub 132.406677246s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.15( v 60'17 (0'0,60'17] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929191589s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 51'16 mlcod 51'16 active pruub 132.406829834s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[10.16( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.15( v 60'17 (0'0,60'17] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.929139137s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 51'16 mlcod 0'0 unknown NOTIFY pruub 132.406829834s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 61 pg[10.17( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.16( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.928977013s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 132.406875610s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.17( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.928959846s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active pruub 132.406890869s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.16( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.928937912s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.406875610s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 61 pg[10.17( v 51'16 (0'0,51'16] local-lis/les=59/60 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=10.928915024s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.406890869s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[10.b( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[10.13( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[10.12( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[10.11( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[10.10( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[10.1a( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[10.19( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[10.6( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[10.f( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[10.2( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 61 pg[10.14( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:28 np0005464891 podman[103864]: 2025-10-01 16:17:28.660378873 +0000 UTC m=+0.168337715 container init c0709745fc7cec3e68acc784481449751084803d5be7843de5bf826097816838 (image=quay.io/ceph/ceph:v18, name=vigorous_swirles, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:17:28 np0005464891 podman[103864]: 2025-10-01 16:17:28.667201963 +0000 UTC m=+0.175160765 container start c0709745fc7cec3e68acc784481449751084803d5be7843de5bf826097816838 (image=quay.io/ceph/ceph:v18, name=vigorous_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:17:28 np0005464891 podman[103864]: 2025-10-01 16:17:28.670200946 +0000 UTC m=+0.178159758 container attach c0709745fc7cec3e68acc784481449751084803d5be7843de5bf826097816838 (image=quay.io/ceph/ceph:v18, name=vigorous_swirles, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Oct  1 12:17:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Oct  1 12:17:29 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 12:17:29 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 12:17:29 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  1 12:17:29 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 12:17:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Oct  1 12:17:29 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.1b( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.1b( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.1d( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.3( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.1d( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.1( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.1( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.3( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.d( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.d( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.9( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.9( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.b( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.b( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.11( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.11( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.5( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[9.5( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[10.7( v 51'16 (0'0,51'16] local-lis/les=61/62 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[10.4( v 51'16 (0'0,51'16] local-lis/les=61/62 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 62 pg[8.1c( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 62 pg[11.1f( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 62 pg[8.11( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=47'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 62 pg[11.1a( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 62 pg[11.b( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 62 pg[11.12( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[10.13( v 51'16 (0'0,51'16] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[10.10( v 51'16 (0'0,51'16] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[10.b( v 51'16 (0'0,51'16] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[10.12( v 51'16 (0'0,51'16] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[10.e( v 60'17 lc 51'7 (0'0,60'17] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=60'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[10.d( v 60'17 lc 51'9 (0'0,60'17] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=60'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[10.1e( v 51'16 (0'0,51'16] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[10.9( v 60'17 lc 51'15 (0'0,60'17] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=60'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[10.16( v 51'16 (0'0,51'16] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[10.8( v 51'16 (0'0,51'16] local-lis/les=61/62 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[10.1( v 51'16 (0'0,51'16] local-lis/les=61/62 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[8.1d( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[8.1f( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[10.17( v 51'16 (0'0,51'16] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[11.17( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[8.14( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[8.1a( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[10.15( v 60'17 lc 51'5 (0'0,60'17] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=60'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[11.19( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[8.18( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[11.1( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[11.f( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[8.c( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[8.e( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[11.e( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[11.6( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[8.f( v 47'4 lc 0'0 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=47'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[8.9( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[11.14( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[8.6( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[11.4( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[8.b( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[8.10( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 62 pg[11.10( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[10.11( v 51'16 (0'0,51'16] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[10.19( v 51'16 (0'0,51'16] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[10.f( v 51'16 (0'0,51'16] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[10.6( v 51'16 (0'0,51'16] local-lis/les=61/62 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[10.1a( v 51'16 (0'0,51'16] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[10.2( v 51'16 (0'0,51'16] local-lis/les=61/62 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=51'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 62 pg[10.14( v 60'17 lc 51'13 (0'0,60'17] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=60'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 62 pg[11.11( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 62 pg[11.1b( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 62 pg[8.1b( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 62 pg[8.12( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 62 pg[11.1e( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 62 pg[11.18( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 62 pg[8.4( v 47'4 (0'0,47'4] local-lis/les=61/62 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 62 pg[11.9( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 62 pg[11.1c( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 62 pg[11.8( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 62 pg[11.d( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 62 pg[8.d( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 62 pg[11.3( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 62 pg[11.15( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 62 pg[11.2( empty local-lis/les=61/62 n=0 ec=59/52 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 62 pg[8.2( v 47'4 (0'0,47'4] local-lis/les=61/62 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 62 pg[8.15( v 47'4 (0'0,47'4] local-lis/les=61/62 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=47'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v152: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:17:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Oct  1 12:17:29 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  1 12:17:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Oct  1 12:17:30 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  1 12:17:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Oct  1 12:17:30 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Oct  1 12:17:30 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  1 12:17:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 63 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 63 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 63 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 63 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 63 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 63 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 63 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 63 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 63 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 63 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 63 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 63 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 63 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 63 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 63 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 63 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:31 np0005464891 vigorous_swirles[103879]: could not fetch user info: no user info saved
Oct  1 12:17:31 np0005464891 systemd[1]: libpod-c0709745fc7cec3e68acc784481449751084803d5be7843de5bf826097816838.scope: Deactivated successfully.
Oct  1 12:17:31 np0005464891 podman[103864]: 2025-10-01 16:17:31.401020459 +0000 UTC m=+2.908979281 container died c0709745fc7cec3e68acc784481449751084803d5be7843de5bf826097816838 (image=quay.io/ceph/ceph:v18, name=vigorous_swirles, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:17:31 np0005464891 systemd[1]: var-lib-containers-storage-overlay-d8e0815e5e982633ec3ece10103b835e316e8b0e9eac643c5beb2b89332b0941-merged.mount: Deactivated successfully.
Oct  1 12:17:31 np0005464891 podman[103864]: 2025-10-01 16:17:31.450875929 +0000 UTC m=+2.958834731 container remove c0709745fc7cec3e68acc784481449751084803d5be7843de5bf826097816838 (image=quay.io/ceph/ceph:v18, name=vigorous_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:17:31 np0005464891 systemd[1]: libpod-conmon-c0709745fc7cec3e68acc784481449751084803d5be7843de5bf826097816838.scope: Deactivated successfully.
Oct  1 12:17:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Oct  1 12:17:31 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  1 12:17:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Oct  1 12:17:31 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 64 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64 pruub=15.565937042s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 145.874328613s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 64 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64 pruub=15.565836906s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.874328613s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 64 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64 pruub=15.565982819s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 145.874771118s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 64 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64 pruub=15.565775871s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 145.874679565s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 64 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64 pruub=15.565789223s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.874771118s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 64 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64 pruub=15.565621376s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.874679565s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 64 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64 pruub=15.564691544s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 145.874328613s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 64 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64 pruub=15.564585686s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.874328613s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 64 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64 pruub=15.564628601s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 145.874420166s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 64 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64 pruub=15.564539909s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.874420166s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 64 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64 pruub=15.564249039s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 145.874740601s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:31 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 64 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:31 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 64 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:31 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 64 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:31 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 64 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 64 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64 pruub=15.563967705s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.874740601s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 64 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64 pruub=15.563213348s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 145.874252319s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:31 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 64 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:31 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 64 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:31 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 64 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64 pruub=15.563097954s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.874252319s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:31 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 64 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:31 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 64 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:31 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 64 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:31 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 64 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:31 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 64 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:31 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 64 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:31 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 64 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:31 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 64 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:31 np0005464891 python3[104001]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:17:31 np0005464891 podman[104002]: 2025-10-01 16:17:31.941954767 +0000 UTC m=+0.077130951 container create 62770c55682ebe481d2c6a4075158ebd5953a646c6d89b1913821539f8e2c610 (image=quay.io/ceph/ceph:v18, name=youthful_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  1 12:17:31 np0005464891 systemd[1]: Started libpod-conmon-62770c55682ebe481d2c6a4075158ebd5953a646c6d89b1913821539f8e2c610.scope.
Oct  1 12:17:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v155: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:17:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Oct  1 12:17:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  1 12:17:32 np0005464891 podman[104002]: 2025-10-01 16:17:31.910711066 +0000 UTC m=+0.045887290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 12:17:32 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:17:32 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/635bb8e42f8a4342707abe4373795bf93d9c122bdd225cfc949938412d7868ca/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:17:32 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/635bb8e42f8a4342707abe4373795bf93d9c122bdd225cfc949938412d7868ca/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:17:32 np0005464891 podman[104002]: 2025-10-01 16:17:32.038433166 +0000 UTC m=+0.173609380 container init 62770c55682ebe481d2c6a4075158ebd5953a646c6d89b1913821539f8e2c610 (image=quay.io/ceph/ceph:v18, name=youthful_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 12:17:32 np0005464891 podman[104002]: 2025-10-01 16:17:32.051661225 +0000 UTC m=+0.186837409 container start 62770c55682ebe481d2c6a4075158ebd5953a646c6d89b1913821539f8e2c610 (image=quay.io/ceph/ceph:v18, name=youthful_sammet, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 12:17:32 np0005464891 podman[104002]: 2025-10-01 16:17:32.055313347 +0000 UTC m=+0.190489591 container attach 62770c55682ebe481d2c6a4075158ebd5953a646c6d89b1913821539f8e2c610 (image=quay.io/ceph/ceph:v18, name=youthful_sammet, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 6.a scrub starts
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 6.a scrub ok
Oct  1 12:17:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Oct  1 12:17:32 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  1 12:17:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  1 12:17:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Oct  1 12:17:32 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Oct  1 12:17:32 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 65 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65 pruub=14.550920486s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 145.880493164s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:32 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 65 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65 pruub=14.545258522s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 145.874923706s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:32 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 65 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65 pruub=14.550796509s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.880493164s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:32 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 65 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65 pruub=14.544161797s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 145.874938965s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:32 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 65 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65 pruub=14.544031143s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.874938965s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:32 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 65 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65 pruub=14.549504280s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 145.880691528s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:32 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 65 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65 pruub=14.543221474s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 145.874832153s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:32 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 65 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65 pruub=14.548849106s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 145.880462646s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:32 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 65 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65 pruub=14.549017906s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 145.880508423s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:32 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 65 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65 pruub=14.549324036s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.880691528s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:32 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 65 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65 pruub=14.548747063s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.880462646s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:32 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 65 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65 pruub=14.548706055s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.880508423s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:32 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 65 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65 pruub=14.543126106s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.874832153s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:32 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 65 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65 pruub=14.542675972s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 145.874908447s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:32 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 65 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=62/63 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65 pruub=14.542688370s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.874923706s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:32 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 65 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65 pruub=14.547967911s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 145.880599976s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:32 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 65 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65 pruub=14.547889709s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.880599976s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.11( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:32 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 65 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=62/63 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65 pruub=14.542593956s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.874908447s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.9( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=64/65 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.b( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:32 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 65 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=64/65 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]: {
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:    "user_id": "openstack",
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:    "display_name": "openstack",
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:    "email": "",
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:    "suspended": 0,
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:    "max_buckets": 1000,
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:    "subusers": [],
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:    "keys": [
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:        {
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:            "user": "openstack",
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:            "access_key": "BE2VF1KA02J0X3F0ED6V",
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:            "secret_key": "XtefhU6p5zMN4O8rwsERRNazMGDSVt4DluMZCTQr"
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:        }
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:    ],
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:    "swift_keys": [],
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:    "caps": [],
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:    "op_mask": "read, write, delete",
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:    "default_placement": "",
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:    "default_storage_class": "",
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:    "placement_tags": [],
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:    "bucket_quota": {
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:        "enabled": false,
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:        "check_on_raw": false,
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:        "max_size": -1,
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:        "max_size_kb": 0,
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:        "max_objects": -1
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:    },
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:    "user_quota": {
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:        "enabled": false,
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:        "check_on_raw": false,
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:        "max_size": -1,
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:        "max_size_kb": 0,
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:        "max_objects": -1
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:    },
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:    "temp_url_keys": [],
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:    "type": "rgw",
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]:    "mfa_ids": []
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]: }
Oct  1 12:17:32 np0005464891 youthful_sammet[104018]: 
Oct  1 12:17:32 np0005464891 systemd[1]: libpod-62770c55682ebe481d2c6a4075158ebd5953a646c6d89b1913821539f8e2c610.scope: Deactivated successfully.
Oct  1 12:17:32 np0005464891 podman[104103]: 2025-10-01 16:17:32.925601936 +0000 UTC m=+0.036017675 container died 62770c55682ebe481d2c6a4075158ebd5953a646c6d89b1913821539f8e2c610 (image=quay.io/ceph/ceph:v18, name=youthful_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:17:32 np0005464891 systemd[1]: var-lib-containers-storage-overlay-635bb8e42f8a4342707abe4373795bf93d9c122bdd225cfc949938412d7868ca-merged.mount: Deactivated successfully.
Oct  1 12:17:32 np0005464891 podman[104103]: 2025-10-01 16:17:32.982642646 +0000 UTC m=+0.093058365 container remove 62770c55682ebe481d2c6a4075158ebd5953a646c6d89b1913821539f8e2c610 (image=quay.io/ceph/ceph:v18, name=youthful_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Oct  1 12:17:32 np0005464891 systemd[1]: libpod-conmon-62770c55682ebe481d2c6a4075158ebd5953a646c6d89b1913821539f8e2c610.scope: Deactivated successfully.
Oct  1 12:17:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Oct  1 12:17:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Oct  1 12:17:33 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Oct  1 12:17:33 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  1 12:17:33 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 66 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:33 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 66 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:33 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 66 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:33 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 66 pg[9.3( v 54'385 (0'0,54'385] local-lis/les=65/66 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:33 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 66 pg[9.1d( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:33 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 66 pg[9.d( v 54'385 (0'0,54'385] local-lis/les=65/66 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:33 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 66 pg[9.1( v 54'385 (0'0,54'385] local-lis/les=65/66 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:33 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 66 pg[9.5( v 54'385 (0'0,54'385] local-lis/les=65/66 n=6 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:33 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 66 pg[9.1b( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=62/57 les/c/f=63/59/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v158: 321 pgs: 7 peering, 314 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 2 op/s; 488 B/s, 10 objects/s recovering
Oct  1 12:17:35 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Oct  1 12:17:35 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Oct  1 12:17:35 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Oct  1 12:17:35 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Oct  1 12:17:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v159: 321 pgs: 7 peering, 314 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 2 op/s; 365 B/s, 7 objects/s recovering
Oct  1 12:17:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:17:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v160: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 2 op/s; 507 B/s, 15 objects/s recovering
Oct  1 12:17:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Oct  1 12:17:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  1 12:17:38 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Oct  1 12:17:38 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Oct  1 12:17:38 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 6.e scrub starts
Oct  1 12:17:38 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 6.e scrub ok
Oct  1 12:17:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Oct  1 12:17:38 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  1 12:17:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  1 12:17:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Oct  1 12:17:38 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Oct  1 12:17:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  1 12:17:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v162: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 1 op/s; 439 B/s, 12 objects/s recovering
Oct  1 12:17:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Oct  1 12:17:39 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  1 12:17:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Oct  1 12:17:40 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  1 12:17:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  1 12:17:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Oct  1 12:17:40 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Oct  1 12:17:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:17:41 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Oct  1 12:17:41 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Oct  1 12:17:41 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  1 12:17:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v164: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s; 156 B/s, 6 objects/s recovering
Oct  1 12:17:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Oct  1 12:17:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  1 12:17:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:17:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:17:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:17:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:17:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:17:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:17:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Oct  1 12:17:42 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  1 12:17:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  1 12:17:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Oct  1 12:17:42 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Oct  1 12:17:43 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 69 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=69 pruub=10.751994133s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 153.168853760s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:43 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 69 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=69 pruub=10.751932144s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.168853760s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:43 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 69 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:43 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 69 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=69 pruub=10.751710892s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 153.169692993s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:43 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 69 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=69 pruub=10.751595497s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.169692993s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:43 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 69 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=69 pruub=10.755966187s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 153.174362183s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:43 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 69 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=69 pruub=10.755885124s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.174362183s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:43 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 69 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=69 pruub=10.751458168s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 153.170028687s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:43 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 69 pg[9.e( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:43 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 69 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:43 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 69 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=69 pruub=10.750889778s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.170028687s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:43 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 69 pg[9.6( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Oct  1 12:17:43 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  1 12:17:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Oct  1 12:17:43 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Oct  1 12:17:43 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 70 pg[9.e( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[57,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:43 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 70 pg[9.e( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[57,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:43 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 70 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[57,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:43 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 70 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[57,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:43 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 70 pg[9.6( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[57,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:43 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 70 pg[9.6( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[57,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:43 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 70 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[57,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:43 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 70 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[57,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:43 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 70 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[1] r=0 lpr=70 pi=[57,70)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:43 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 70 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[1] r=0 lpr=70 pi=[57,70)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:43 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 70 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[1] r=0 lpr=70 pi=[57,70)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:43 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 70 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[1] r=0 lpr=70 pi=[57,70)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:43 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 70 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[1] r=0 lpr=70 pi=[57,70)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:43 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 70 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[1] r=0 lpr=70 pi=[57,70)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:43 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 70 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[1] r=0 lpr=70 pi=[57,70)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:43 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 70 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[1] r=0 lpr=70 pi=[57,70)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v167: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:17:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Oct  1 12:17:44 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  1 12:17:44 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Oct  1 12:17:44 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Oct  1 12:17:44 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Oct  1 12:17:44 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Oct  1 12:17:44 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Oct  1 12:17:44 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Oct  1 12:17:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Oct  1 12:17:44 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  1 12:17:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Oct  1 12:17:44 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Oct  1 12:17:44 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  1 12:17:44 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 71 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=70/71 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[57,70)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:44 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 71 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=70/71 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[57,70)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:44 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 71 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=70/71 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[57,70)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:44 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 71 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=70/71 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[57,70)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:45 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Oct  1 12:17:45 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Oct  1 12:17:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Oct  1 12:17:45 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  1 12:17:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Oct  1 12:17:45 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Oct  1 12:17:45 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 71 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=71 pruub=11.861139297s) [2] r=-1 lpr=71 pi=[65,71)/1 crt=54'385 mlcod 0'0 active pruub 161.666397095s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:45 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 71 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=71 pruub=10.854405403s) [2] r=-1 lpr=71 pi=[64,71)/1 crt=54'385 mlcod 0'0 active pruub 160.659851074s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:45 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 72 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=71 pruub=10.854302406s) [2] r=-1 lpr=71 pi=[64,71)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 160.659851074s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:45 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 72 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=71 pruub=11.860643387s) [2] r=-1 lpr=71 pi=[65,71)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 161.666397095s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:45 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 71 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=71 pruub=10.853683472s) [2] r=-1 lpr=71 pi=[64,71)/1 crt=54'385 mlcod 0'0 active pruub 160.659866333s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:45 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 72 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=71 pruub=10.853609085s) [2] r=-1 lpr=71 pi=[64,71)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 160.659866333s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:45 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 71 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=64/65 n=5 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=71 pruub=10.853421211s) [2] r=-1 lpr=71 pi=[64,71)/1 crt=54'385 mlcod 0'0 active pruub 160.659820557s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:45 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 72 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=64/65 n=5 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=71 pruub=10.853187561s) [2] r=-1 lpr=71 pi=[64,71)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 160.659820557s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:45 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 72 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=70/71 n=5 ec=57/48 lis/c=70/57 les/c/f=71/59/0 sis=72 pruub=14.981643677s) [2] async=[2] r=-1 lpr=72 pi=[57,72)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 159.468307495s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:45 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 72 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=70/71 n=5 ec=57/48 lis/c=70/57 les/c/f=71/59/0 sis=72 pruub=14.981568336s) [2] r=-1 lpr=72 pi=[57,72)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.468307495s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:45 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 72 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=70/71 n=6 ec=57/48 lis/c=70/57 les/c/f=71/59/0 sis=72 pruub=14.980490685s) [2] async=[2] r=-1 lpr=72 pi=[57,72)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 159.468292236s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:45 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 72 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=70/71 n=6 ec=57/48 lis/c=70/57 les/c/f=71/59/0 sis=72 pruub=14.977278709s) [2] async=[2] r=-1 lpr=72 pi=[57,72)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 159.465209961s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:45 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 72 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=70/71 n=6 ec=57/48 lis/c=70/57 les/c/f=71/59/0 sis=72 pruub=14.980268478s) [2] r=-1 lpr=72 pi=[57,72)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.468292236s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:45 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 72 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=70/71 n=5 ec=57/48 lis/c=70/57 les/c/f=71/59/0 sis=72 pruub=14.980105400s) [2] async=[2] r=-1 lpr=72 pi=[57,72)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 159.468322754s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:45 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 72 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=70/71 n=5 ec=57/48 lis/c=70/57 les/c/f=71/59/0 sis=72 pruub=14.980038643s) [2] r=-1 lpr=72 pi=[57,72)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.468322754s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:45 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 72 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=70/71 n=6 ec=57/48 lis/c=70/57 les/c/f=71/59/0 sis=72 pruub=14.976801872s) [2] r=-1 lpr=72 pi=[57,72)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.465209961s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:45 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 72 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=71) [2] r=0 lpr=72 pi=[64,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:45 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 72 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=71) [2] r=0 lpr=72 pi=[64,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:45 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 72 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=71) [2] r=0 lpr=72 pi=[64,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:45 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 72 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=70/57 les/c/f=71/59/0 sis=72) [2] r=0 lpr=72 pi=[57,72)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:45 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 72 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=70/57 les/c/f=71/59/0 sis=72) [2] r=0 lpr=72 pi=[57,72)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:45 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 72 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=70/57 les/c/f=71/59/0 sis=72) [2] r=0 lpr=72 pi=[57,72)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:45 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 72 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=70/57 les/c/f=71/59/0 sis=72) [2] r=0 lpr=72 pi=[57,72)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:45 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 72 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=71) [2] r=0 lpr=72 pi=[65,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:45 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 72 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=70/57 les/c/f=71/59/0 sis=72) [2] r=0 lpr=72 pi=[57,72)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:45 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 72 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=70/57 les/c/f=71/59/0 sis=72) [2] r=0 lpr=72 pi=[57,72)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:45 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 72 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=70/57 les/c/f=71/59/0 sis=72) [2] r=0 lpr=72 pi=[57,72)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:45 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 72 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=70/57 les/c/f=71/59/0 sis=72) [2] r=0 lpr=72 pi=[57,72)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v170: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:17:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Oct  1 12:17:46 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  1 12:17:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:17:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Oct  1 12:17:46 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  1 12:17:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Oct  1 12:17:46 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Oct  1 12:17:46 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 73 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[65,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:46 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 73 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[65,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:46 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 73 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[64,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:46 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 73 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[64,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:46 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 73 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[64,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:46 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 73 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[64,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:46 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 73 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[64,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:46 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 73 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[64,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:46 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 73 pg[9.e( v 54'385 (0'0,54'385] local-lis/les=72/73 n=6 ec=57/48 lis/c=70/57 les/c/f=71/59/0 sis=72) [2] r=0 lpr=72 pi=[57,72)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:46 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 73 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[64,73)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:46 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 73 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[64,73)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:46 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 73 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=0 lpr=73 pi=[65,73)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:46 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 73 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=0 lpr=73 pi=[65,73)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:46 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 73 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=64/65 n=5 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[64,73)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:46 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 73 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=64/65 n=5 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[64,73)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:46 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 73 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=72/73 n=5 ec=57/48 lis/c=70/57 les/c/f=71/59/0 sis=72) [2] r=0 lpr=72 pi=[57,72)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:46 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 73 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=72/73 n=5 ec=57/48 lis/c=70/57 les/c/f=71/59/0 sis=72) [2] r=0 lpr=72 pi=[57,72)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:46 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 73 pg[9.6( v 54'385 (0'0,54'385] local-lis/les=72/73 n=6 ec=57/48 lis/c=70/57 les/c/f=71/59/0 sis=72) [2] r=0 lpr=72 pi=[57,72)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:46 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 73 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[64,73)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:46 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 73 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=64/65 n=6 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[64,73)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:46 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 73 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=73 pruub=8.136344910s) [2] r=-1 lpr=73 pi=[57,73)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 153.170135498s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:46 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 73 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=73 pruub=8.136269569s) [2] r=-1 lpr=73 pi=[57,73)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.170135498s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:46 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 73 pg[9.8( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:46 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 73 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=73 pruub=8.138762474s) [2] r=-1 lpr=73 pi=[57,73)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 153.174163818s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:46 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 73 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=73 pruub=8.138701439s) [2] r=-1 lpr=73 pi=[57,73)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.174163818s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:46 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 73 pg[9.18( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:46 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 6.f scrub starts
Oct  1 12:17:46 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 6.f scrub ok
Oct  1 12:17:46 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  1 12:17:46 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  1 12:17:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Oct  1 12:17:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Oct  1 12:17:47 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Oct  1 12:17:47 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 74 pg[9.8( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[57,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:47 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 74 pg[9.8( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[57,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:47 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 74 pg[9.18( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[57,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:47 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 74 pg[9.18( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[57,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:47 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 74 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=74) [2]/[1] r=0 lpr=74 pi=[57,74)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:47 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 74 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=74) [2]/[1] r=0 lpr=74 pi=[57,74)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:47 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 74 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=74) [2]/[1] r=0 lpr=74 pi=[57,74)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:47 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 74 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=74) [2]/[1] r=0 lpr=74 pi=[57,74)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:47 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 74 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[64,73)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:47 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 74 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=73/74 n=6 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[64,73)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:47 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 74 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[65,73)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:47 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 74 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=73/74 n=6 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[64,73)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:47 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 6.b deep-scrub starts
Oct  1 12:17:47 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 6.b deep-scrub ok
Oct  1 12:17:47 np0005464891 systemd-logind[801]: New session 35 of user zuul.
Oct  1 12:17:47 np0005464891 systemd[1]: Started Session 35 of User zuul.
Oct  1 12:17:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v173: 321 pgs: 2 remapped+peering, 4 active+remapped, 315 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 192 B/s, 9 objects/s recovering
Oct  1 12:17:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Oct  1 12:17:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Oct  1 12:17:48 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Oct  1 12:17:48 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 75 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=57/48 lis/c=73/65 les/c/f=74/66/0 sis=75 pruub=14.998288155s) [2] async=[2] r=-1 lpr=75 pi=[65,75)/1 crt=54'385 mlcod 54'385 active pruub 167.025604248s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:48 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 75 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=57/48 lis/c=73/65 les/c/f=74/66/0 sis=75 pruub=14.998208046s) [2] r=-1 lpr=75 pi=[65,75)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 167.025604248s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:48 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 75 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75 pruub=14.994343758s) [2] async=[2] r=-1 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 54'385 active pruub 167.022293091s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:48 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 75 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=73/74 n=5 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75 pruub=14.994301796s) [2] r=-1 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 167.022293091s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:48 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 75 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=73/74 n=6 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75 pruub=14.998987198s) [2] async=[2] r=-1 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 54'385 active pruub 167.027008057s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:48 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 75 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=73/74 n=6 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75 pruub=14.994249344s) [2] async=[2] r=-1 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 54'385 active pruub 167.022369385s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:48 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 75 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=73/74 n=6 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75 pruub=14.994208336s) [2] r=-1 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 167.022369385s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:48 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 75 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=73/74 n=6 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75 pruub=14.998884201s) [2] r=-1 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 167.027008057s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:48 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 75 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:48 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 75 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:48 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 75 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:48 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 75 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:48 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 75 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:48 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 75 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:48 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 75 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=73/65 les/c/f=74/66/0 sis=75) [2] r=0 lpr=75 pi=[65,75)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:48 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 75 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=73/65 les/c/f=74/66/0 sis=75) [2] r=0 lpr=75 pi=[65,75)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:48 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 75 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=74/75 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[57,74)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:48 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 75 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=74/75 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[57,74)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:48 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Oct  1 12:17:48 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Oct  1 12:17:48 np0005464891 python3.9[104271]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:17:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Oct  1 12:17:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Oct  1 12:17:49 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Oct  1 12:17:49 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 76 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=74/57 les/c/f=75/59/0 sis=76) [2] r=0 lpr=76 pi=[57,76)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:49 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 76 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=74/57 les/c/f=75/59/0 sis=76) [2] r=0 lpr=76 pi=[57,76)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:49 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 76 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=74/57 les/c/f=75/59/0 sis=76) [2] r=0 lpr=76 pi=[57,76)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:49 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 76 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=74/57 les/c/f=75/59/0 sis=76) [2] r=0 lpr=76 pi=[57,76)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:17:49 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 76 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=74/75 n=6 ec=57/48 lis/c=74/57 les/c/f=75/59/0 sis=76 pruub=14.987757683s) [2] async=[2] r=-1 lpr=76 pi=[57,76)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 162.720703125s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:49 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 76 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=74/75 n=6 ec=57/48 lis/c=74/57 les/c/f=75/59/0 sis=76 pruub=14.987585068s) [2] r=-1 lpr=76 pi=[57,76)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 162.720703125s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:49 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 76 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=74/75 n=5 ec=57/48 lis/c=74/57 les/c/f=75/59/0 sis=76 pruub=14.986962318s) [2] async=[2] r=-1 lpr=76 pi=[57,76)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 162.720657349s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:17:49 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 76 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=74/75 n=5 ec=57/48 lis/c=74/57 les/c/f=75/59/0 sis=76 pruub=14.986851692s) [2] r=-1 lpr=76 pi=[57,76)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 162.720657349s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:17:49 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 76 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=75/76 n=5 ec=57/48 lis/c=73/65 les/c/f=74/66/0 sis=75) [2] r=0 lpr=75 pi=[65,75)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:49 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 76 pg[9.7( v 54'385 (0'0,54'385] local-lis/les=75/76 n=6 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:49 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 76 pg[9.f( v 54'385 (0'0,54'385] local-lis/les=75/76 n=6 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:49 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 76 pg[9.17( v 54'385 (0'0,54'385] local-lis/les=75/76 n=5 ec=57/48 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:49 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Oct  1 12:17:49 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Oct  1 12:17:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v176: 321 pgs: 2 remapped+peering, 4 active+remapped, 315 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 192 B/s, 9 objects/s recovering
Oct  1 12:17:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Oct  1 12:17:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Oct  1 12:17:50 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Oct  1 12:17:50 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 77 pg[9.18( v 54'385 (0'0,54'385] local-lis/les=76/77 n=5 ec=57/48 lis/c=74/57 les/c/f=75/59/0 sis=76) [2] r=0 lpr=76 pi=[57,76)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:50 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 77 pg[9.8( v 54'385 (0'0,54'385] local-lis/les=76/77 n=6 ec=57/48 lis/c=74/57 les/c/f=75/59/0 sis=76) [2] r=0 lpr=76 pi=[57,76)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:17:50 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 6.1a deep-scrub starts
Oct  1 12:17:50 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 6.1a deep-scrub ok
Oct  1 12:17:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:17:51 np0005464891 python3.9[104489]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:17:51 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Oct  1 12:17:51 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Oct  1 12:17:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v178: 321 pgs: 2 remapped+peering, 4 active+remapped, 315 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 155 B/s, 8 objects/s recovering
Oct  1 12:17:52 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Oct  1 12:17:52 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Oct  1 12:17:53 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Oct  1 12:17:53 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Oct  1 12:17:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v179: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Oct  1 12:17:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Oct  1 12:17:54 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  1 12:17:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Oct  1 12:17:54 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  1 12:17:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Oct  1 12:17:54 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Oct  1 12:17:54 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  1 12:17:54 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 2.19 deep-scrub starts
Oct  1 12:17:54 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 2.19 deep-scrub ok
Oct  1 12:17:55 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  1 12:17:55 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 2.18 deep-scrub starts
Oct  1 12:17:55 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 2.18 deep-scrub ok
Oct  1 12:17:55 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Oct  1 12:17:55 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Oct  1 12:17:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v181: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 31 B/s, 1 objects/s recovering
Oct  1 12:17:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Oct  1 12:17:56 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  1 12:17:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:17:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Oct  1 12:17:56 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  1 12:17:56 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  1 12:17:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Oct  1 12:17:56 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Oct  1 12:17:56 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Oct  1 12:17:56 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Oct  1 12:17:57 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  1 12:17:57 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Oct  1 12:17:57 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Oct  1 12:17:57 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Oct  1 12:17:57 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Oct  1 12:17:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v183: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct  1 12:17:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Oct  1 12:17:58 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  1 12:17:58 np0005464891 podman[104693]: 2025-10-01 16:17:58.167521835 +0000 UTC m=+0.091663927 container exec 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 12:17:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Oct  1 12:17:58 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  1 12:17:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Oct  1 12:17:58 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Oct  1 12:17:58 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  1 12:17:58 np0005464891 podman[104693]: 2025-10-01 16:17:58.285086272 +0000 UTC m=+0.209228364 container exec_died 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Oct  1 12:17:58 np0005464891 systemd[1]: session-35.scope: Deactivated successfully.
Oct  1 12:17:58 np0005464891 systemd[1]: session-35.scope: Consumed 8.288s CPU time.
Oct  1 12:17:58 np0005464891 systemd-logind[801]: Session 35 logged out. Waiting for processes to exit.
Oct  1 12:17:58 np0005464891 systemd-logind[801]: Removed session 35.
Oct  1 12:17:58 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Oct  1 12:17:58 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Oct  1 12:17:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:17:59 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:17:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:17:59 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:17:59 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  1 12:17:59 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:17:59 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:17:59 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 6.17 deep-scrub starts
Oct  1 12:17:59 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 6.17 deep-scrub ok
Oct  1 12:17:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:17:59 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:17:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:17:59 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:17:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:17:59 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:17:59 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 896f8761-eacb-4e40-bbb8-b4e0657dc4e4 does not exist
Oct  1 12:17:59 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 9286c4b0-2698-4b54-a314-af0162876379 does not exist
Oct  1 12:17:59 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev f5705b93-f474-4056-8f8c-0fb251d2acfe does not exist
Oct  1 12:17:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:17:59 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:17:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:17:59 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:17:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:17:59 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:18:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v185: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:18:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Oct  1 12:18:00 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  1 12:18:00 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:18:00 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:18:00 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:18:00 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  1 12:18:00 np0005464891 podman[105148]: 2025-10-01 16:18:00.610813821 +0000 UTC m=+0.056865027 container create 2120fc5ef6f6fb9f7b812a0e6e073081e8617485b07f190b5b40c9870fd63f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct  1 12:18:00 np0005464891 systemd[1]: Started libpod-conmon-2120fc5ef6f6fb9f7b812a0e6e073081e8617485b07f190b5b40c9870fd63f4b.scope.
Oct  1 12:18:00 np0005464891 podman[105148]: 2025-10-01 16:18:00.584809275 +0000 UTC m=+0.030860541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:18:00 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:18:00 np0005464891 podman[105148]: 2025-10-01 16:18:00.724801507 +0000 UTC m=+0.170852933 container init 2120fc5ef6f6fb9f7b812a0e6e073081e8617485b07f190b5b40c9870fd63f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ritchie, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 12:18:00 np0005464891 podman[105148]: 2025-10-01 16:18:00.732932434 +0000 UTC m=+0.178983640 container start 2120fc5ef6f6fb9f7b812a0e6e073081e8617485b07f190b5b40c9870fd63f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ritchie, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:18:00 np0005464891 podman[105148]: 2025-10-01 16:18:00.73708054 +0000 UTC m=+0.183131796 container attach 2120fc5ef6f6fb9f7b812a0e6e073081e8617485b07f190b5b40c9870fd63f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  1 12:18:00 np0005464891 nice_ritchie[105165]: 167 167
Oct  1 12:18:00 np0005464891 systemd[1]: libpod-2120fc5ef6f6fb9f7b812a0e6e073081e8617485b07f190b5b40c9870fd63f4b.scope: Deactivated successfully.
Oct  1 12:18:00 np0005464891 podman[105148]: 2025-10-01 16:18:00.742202973 +0000 UTC m=+0.188254189 container died 2120fc5ef6f6fb9f7b812a0e6e073081e8617485b07f190b5b40c9870fd63f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ritchie, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:18:00 np0005464891 systemd[1]: var-lib-containers-storage-overlay-4701aaf2641588109bf2230277361373ec664fd9c666ae390ef5f635c4fd69e4-merged.mount: Deactivated successfully.
Oct  1 12:18:00 np0005464891 podman[105148]: 2025-10-01 16:18:00.790431757 +0000 UTC m=+0.236482963 container remove 2120fc5ef6f6fb9f7b812a0e6e073081e8617485b07f190b5b40c9870fd63f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:18:00 np0005464891 systemd[1]: libpod-conmon-2120fc5ef6f6fb9f7b812a0e6e073081e8617485b07f190b5b40c9870fd63f4b.scope: Deactivated successfully.
Oct  1 12:18:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Oct  1 12:18:00 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  1 12:18:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Oct  1 12:18:00 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Oct  1 12:18:01 np0005464891 podman[105188]: 2025-10-01 16:18:01.021548109 +0000 UTC m=+0.058899732 container create 94998ba9a0ea1387540a4d83264f54957f2b1de2238c063ee45d5e841bdaa5f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hodgkin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  1 12:18:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:18:01 np0005464891 systemd[1]: Started libpod-conmon-94998ba9a0ea1387540a4d83264f54957f2b1de2238c063ee45d5e841bdaa5f7.scope.
Oct  1 12:18:01 np0005464891 podman[105188]: 2025-10-01 16:18:01.001871731 +0000 UTC m=+0.039223374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:18:01 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:18:01 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d77ffa568360f24327a6a15f257de08c0541dd521a6d5e371ebc265cd92a81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:18:01 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d77ffa568360f24327a6a15f257de08c0541dd521a6d5e371ebc265cd92a81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:18:01 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d77ffa568360f24327a6a15f257de08c0541dd521a6d5e371ebc265cd92a81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:18:01 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d77ffa568360f24327a6a15f257de08c0541dd521a6d5e371ebc265cd92a81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:18:01 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d77ffa568360f24327a6a15f257de08c0541dd521a6d5e371ebc265cd92a81/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:18:01 np0005464891 podman[105188]: 2025-10-01 16:18:01.132011979 +0000 UTC m=+0.169363612 container init 94998ba9a0ea1387540a4d83264f54957f2b1de2238c063ee45d5e841bdaa5f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hodgkin, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:18:01 np0005464891 podman[105188]: 2025-10-01 16:18:01.143592712 +0000 UTC m=+0.180944325 container start 94998ba9a0ea1387540a4d83264f54957f2b1de2238c063ee45d5e841bdaa5f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hodgkin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:18:01 np0005464891 podman[105188]: 2025-10-01 16:18:01.146567775 +0000 UTC m=+0.183919388 container attach 94998ba9a0ea1387540a4d83264f54957f2b1de2238c063ee45d5e841bdaa5f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hodgkin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 12:18:01 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  1 12:18:01 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 81 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=81 pruub=9.250274658s) [2] r=-1 lpr=81 pi=[57,81)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 169.169845581s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:01 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 81 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=81 pruub=9.250088692s) [2] r=-1 lpr=81 pi=[57,81)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.169845581s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:18:01 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 81 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=81 pruub=9.254214287s) [2] r=-1 lpr=81 pi=[57,81)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 169.174728394s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:01 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 81 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=81 pruub=9.254158974s) [2] r=-1 lpr=81 pi=[57,81)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.174728394s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:18:01 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:18:01 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:18:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v187: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:18:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Oct  1 12:18:02 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  1 12:18:02 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Oct  1 12:18:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Oct  1 12:18:02 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  1 12:18:02 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Oct  1 12:18:02 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  1 12:18:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Oct  1 12:18:02 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Oct  1 12:18:02 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 82 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=82) [2]/[1] r=-1 lpr=82 pi=[57,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:02 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 82 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=82) [2]/[1] r=-1 lpr=82 pi=[57,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:18:02 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 82 pg[9.c( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=82) [2]/[1] r=-1 lpr=82 pi=[57,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:02 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 82 pg[9.c( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=82) [2]/[1] r=-1 lpr=82 pi=[57,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:18:02 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 82 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=82) [2]/[1] r=0 lpr=82 pi=[57,82)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:02 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 82 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=57/59 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=82) [2]/[1] r=0 lpr=82 pi=[57,82)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:18:02 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 82 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=82) [2]/[1] r=0 lpr=82 pi=[57,82)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:02 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 82 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=57/59 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=82) [2]/[1] r=0 lpr=82 pi=[57,82)/1 crt=54'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:18:02 np0005464891 stupefied_hodgkin[105204]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:18:02 np0005464891 stupefied_hodgkin[105204]: --> relative data size: 1.0
Oct  1 12:18:02 np0005464891 stupefied_hodgkin[105204]: --> All data devices are unavailable
Oct  1 12:18:02 np0005464891 systemd[1]: libpod-94998ba9a0ea1387540a4d83264f54957f2b1de2238c063ee45d5e841bdaa5f7.scope: Deactivated successfully.
Oct  1 12:18:02 np0005464891 systemd[1]: libpod-94998ba9a0ea1387540a4d83264f54957f2b1de2238c063ee45d5e841bdaa5f7.scope: Consumed 1.165s CPU time.
Oct  1 12:18:02 np0005464891 podman[105188]: 2025-10-01 16:18:02.3671793 +0000 UTC m=+1.404530973 container died 94998ba9a0ea1387540a4d83264f54957f2b1de2238c063ee45d5e841bdaa5f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hodgkin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Oct  1 12:18:02 np0005464891 systemd[1]: var-lib-containers-storage-overlay-04d77ffa568360f24327a6a15f257de08c0541dd521a6d5e371ebc265cd92a81-merged.mount: Deactivated successfully.
Oct  1 12:18:02 np0005464891 podman[105188]: 2025-10-01 16:18:02.437605333 +0000 UTC m=+1.474956956 container remove 94998ba9a0ea1387540a4d83264f54957f2b1de2238c063ee45d5e841bdaa5f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:18:02 np0005464891 systemd[1]: libpod-conmon-94998ba9a0ea1387540a4d83264f54957f2b1de2238c063ee45d5e841bdaa5f7.scope: Deactivated successfully.
Oct  1 12:18:03 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Oct  1 12:18:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Oct  1 12:18:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Oct  1 12:18:03 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Oct  1 12:18:03 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Oct  1 12:18:03 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  1 12:18:03 np0005464891 podman[105386]: 2025-10-01 16:18:03.288316027 +0000 UTC m=+0.081731270 container create 6e9cf9b3c8ae8d2a1035c3d8bfb5be5a5685dbc0ad8229b116d353f59dcf8db3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mcnulty, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 12:18:03 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 83 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=82/83 n=5 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=82) [2]/[1] async=[2] r=0 lpr=82 pi=[57,82)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:18:03 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 83 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=82/83 n=6 ec=57/48 lis/c=57/57 les/c/f=59/59/0 sis=82) [2]/[1] async=[2] r=0 lpr=82 pi=[57,82)/1 crt=54'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:18:03 np0005464891 systemd[1]: Started libpod-conmon-6e9cf9b3c8ae8d2a1035c3d8bfb5be5a5685dbc0ad8229b116d353f59dcf8db3.scope.
Oct  1 12:18:03 np0005464891 podman[105386]: 2025-10-01 16:18:03.242559541 +0000 UTC m=+0.035974834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:18:03 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:18:03 np0005464891 podman[105386]: 2025-10-01 16:18:03.380679892 +0000 UTC m=+0.174095115 container init 6e9cf9b3c8ae8d2a1035c3d8bfb5be5a5685dbc0ad8229b116d353f59dcf8db3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:18:03 np0005464891 podman[105386]: 2025-10-01 16:18:03.387780479 +0000 UTC m=+0.181195682 container start 6e9cf9b3c8ae8d2a1035c3d8bfb5be5a5685dbc0ad8229b116d353f59dcf8db3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 12:18:03 np0005464891 gracious_mcnulty[105403]: 167 167
Oct  1 12:18:03 np0005464891 systemd[1]: libpod-6e9cf9b3c8ae8d2a1035c3d8bfb5be5a5685dbc0ad8229b116d353f59dcf8db3.scope: Deactivated successfully.
Oct  1 12:18:03 np0005464891 podman[105386]: 2025-10-01 16:18:03.394559308 +0000 UTC m=+0.187974531 container attach 6e9cf9b3c8ae8d2a1035c3d8bfb5be5a5685dbc0ad8229b116d353f59dcf8db3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mcnulty, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:18:03 np0005464891 podman[105386]: 2025-10-01 16:18:03.395034572 +0000 UTC m=+0.188449775 container died 6e9cf9b3c8ae8d2a1035c3d8bfb5be5a5685dbc0ad8229b116d353f59dcf8db3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:18:03 np0005464891 systemd[1]: var-lib-containers-storage-overlay-56fe6bf3298f938ff8b7d96dc1d4075ed65734ee1140d39e623b99cf49ab0a35-merged.mount: Deactivated successfully.
Oct  1 12:18:03 np0005464891 podman[105386]: 2025-10-01 16:18:03.435820128 +0000 UTC m=+0.229235341 container remove 6e9cf9b3c8ae8d2a1035c3d8bfb5be5a5685dbc0ad8229b116d353f59dcf8db3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 12:18:03 np0005464891 systemd[1]: libpod-conmon-6e9cf9b3c8ae8d2a1035c3d8bfb5be5a5685dbc0ad8229b116d353f59dcf8db3.scope: Deactivated successfully.
Oct  1 12:18:03 np0005464891 podman[105426]: 2025-10-01 16:18:03.653107706 +0000 UTC m=+0.061083664 container create 3864d9cc5989bd273bbbc16baadb45a310ef47a24d422de856f7d013981996b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:18:03 np0005464891 systemd[1]: Started libpod-conmon-3864d9cc5989bd273bbbc16baadb45a310ef47a24d422de856f7d013981996b2.scope.
Oct  1 12:18:03 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Oct  1 12:18:03 np0005464891 podman[105426]: 2025-10-01 16:18:03.627841511 +0000 UTC m=+0.035817519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:18:03 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Oct  1 12:18:03 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:18:03 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d4cee614391a474df941f25f08b89279ce4d95badbc9fdbf944d36e0dc0bd01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:18:03 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d4cee614391a474df941f25f08b89279ce4d95badbc9fdbf944d36e0dc0bd01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:18:03 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d4cee614391a474df941f25f08b89279ce4d95badbc9fdbf944d36e0dc0bd01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:18:03 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d4cee614391a474df941f25f08b89279ce4d95badbc9fdbf944d36e0dc0bd01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:18:03 np0005464891 podman[105426]: 2025-10-01 16:18:03.754166013 +0000 UTC m=+0.162141961 container init 3864d9cc5989bd273bbbc16baadb45a310ef47a24d422de856f7d013981996b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_almeida, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:18:03 np0005464891 podman[105426]: 2025-10-01 16:18:03.770407576 +0000 UTC m=+0.178383534 container start 3864d9cc5989bd273bbbc16baadb45a310ef47a24d422de856f7d013981996b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_almeida, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:18:03 np0005464891 podman[105426]: 2025-10-01 16:18:03.773984735 +0000 UTC m=+0.181960683 container attach 3864d9cc5989bd273bbbc16baadb45a310ef47a24d422de856f7d013981996b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  1 12:18:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v190: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:18:04 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 2.f scrub starts
Oct  1 12:18:04 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 2.f scrub ok
Oct  1 12:18:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Oct  1 12:18:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Oct  1 12:18:04 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Oct  1 12:18:04 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 84 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=82/83 n=6 ec=57/48 lis/c=82/57 les/c/f=83/59/0 sis=84 pruub=14.986930847s) [2] async=[2] r=-1 lpr=84 pi=[57,84)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 177.932693481s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:04 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 84 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=82/83 n=6 ec=57/48 lis/c=82/57 les/c/f=83/59/0 sis=84 pruub=14.986838341s) [2] r=-1 lpr=84 pi=[57,84)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.932693481s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:18:04 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 84 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=82/83 n=5 ec=57/48 lis/c=82/57 les/c/f=83/59/0 sis=84 pruub=14.983479500s) [2] async=[2] r=-1 lpr=84 pi=[57,84)/1 crt=54'385 lcod 0'0 mlcod 0'0 active pruub 177.930496216s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:04 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 84 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=82/83 n=5 ec=57/48 lis/c=82/57 les/c/f=83/59/0 sis=84 pruub=14.983258247s) [2] r=-1 lpr=84 pi=[57,84)/1 crt=54'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.930496216s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:18:04 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 84 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=82/57 les/c/f=83/59/0 sis=84) [2] r=0 lpr=84 pi=[57,84)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:04 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 84 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=82/57 les/c/f=83/59/0 sis=84) [2] r=0 lpr=84 pi=[57,84)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:04 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 84 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=82/57 les/c/f=83/59/0 sis=84) [2] r=0 lpr=84 pi=[57,84)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:18:04 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 84 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=0/0 n=6 ec=57/48 lis/c=82/57 les/c/f=83/59/0 sis=84) [2] r=0 lpr=84 pi=[57,84)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]: {
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:    "0": [
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:        {
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "devices": [
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "/dev/loop3"
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            ],
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "lv_name": "ceph_lv0",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "lv_size": "21470642176",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "name": "ceph_lv0",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "tags": {
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.cluster_name": "ceph",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.crush_device_class": "",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.encrypted": "0",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.osd_id": "0",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.type": "block",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.vdo": "0"
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            },
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "type": "block",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "vg_name": "ceph_vg0"
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:        }
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:    ],
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:    "1": [
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:        {
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "devices": [
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "/dev/loop4"
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            ],
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "lv_name": "ceph_lv1",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "lv_size": "21470642176",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "name": "ceph_lv1",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "tags": {
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.cluster_name": "ceph",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.crush_device_class": "",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.encrypted": "0",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.osd_id": "1",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.type": "block",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.vdo": "0"
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            },
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "type": "block",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "vg_name": "ceph_vg1"
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:        }
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:    ],
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:    "2": [
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:        {
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "devices": [
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "/dev/loop5"
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            ],
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "lv_name": "ceph_lv2",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "lv_size": "21470642176",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "name": "ceph_lv2",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "tags": {
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.cluster_name": "ceph",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.crush_device_class": "",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.encrypted": "0",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.osd_id": "2",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.type": "block",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:                "ceph.vdo": "0"
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            },
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "type": "block",
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:            "vg_name": "ceph_vg2"
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:        }
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]:    ]
Oct  1 12:18:04 np0005464891 gifted_almeida[105444]: }
Oct  1 12:18:04 np0005464891 systemd[1]: libpod-3864d9cc5989bd273bbbc16baadb45a310ef47a24d422de856f7d013981996b2.scope: Deactivated successfully.
Oct  1 12:18:04 np0005464891 podman[105426]: 2025-10-01 16:18:04.561048415 +0000 UTC m=+0.969024373 container died 3864d9cc5989bd273bbbc16baadb45a310ef47a24d422de856f7d013981996b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:18:04 np0005464891 systemd[1]: var-lib-containers-storage-overlay-6d4cee614391a474df941f25f08b89279ce4d95badbc9fdbf944d36e0dc0bd01-merged.mount: Deactivated successfully.
Oct  1 12:18:04 np0005464891 podman[105426]: 2025-10-01 16:18:04.641184798 +0000 UTC m=+1.049160756 container remove 3864d9cc5989bd273bbbc16baadb45a310ef47a24d422de856f7d013981996b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:18:04 np0005464891 systemd[1]: libpod-conmon-3864d9cc5989bd273bbbc16baadb45a310ef47a24d422de856f7d013981996b2.scope: Deactivated successfully.
Oct  1 12:18:04 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Oct  1 12:18:04 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Oct  1 12:18:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Oct  1 12:18:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Oct  1 12:18:05 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Oct  1 12:18:05 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 85 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=84/85 n=5 ec=57/48 lis/c=82/57 les/c/f=83/59/0 sis=84) [2] r=0 lpr=84 pi=[57,84)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:18:05 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 85 pg[9.c( v 54'385 (0'0,54'385] local-lis/les=84/85 n=6 ec=57/48 lis/c=82/57 les/c/f=83/59/0 sis=84) [2] r=0 lpr=84 pi=[57,84)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:18:05 np0005464891 podman[105607]: 2025-10-01 16:18:05.489645299 +0000 UTC m=+0.069792926 container create 7a470f3b28db2338bfdc60a6e98719b495e0cf68f8b912ce0f0bf4ba9c2aa253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_feynman, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:18:05 np0005464891 systemd[1]: Started libpod-conmon-7a470f3b28db2338bfdc60a6e98719b495e0cf68f8b912ce0f0bf4ba9c2aa253.scope.
Oct  1 12:18:05 np0005464891 podman[105607]: 2025-10-01 16:18:05.460858767 +0000 UTC m=+0.041006444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:18:05 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:18:05 np0005464891 podman[105607]: 2025-10-01 16:18:05.581655404 +0000 UTC m=+0.161803081 container init 7a470f3b28db2338bfdc60a6e98719b495e0cf68f8b912ce0f0bf4ba9c2aa253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:18:05 np0005464891 podman[105607]: 2025-10-01 16:18:05.59084351 +0000 UTC m=+0.170991117 container start 7a470f3b28db2338bfdc60a6e98719b495e0cf68f8b912ce0f0bf4ba9c2aa253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:18:05 np0005464891 podman[105607]: 2025-10-01 16:18:05.594438421 +0000 UTC m=+0.174586058 container attach 7a470f3b28db2338bfdc60a6e98719b495e0cf68f8b912ce0f0bf4ba9c2aa253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_feynman, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  1 12:18:05 np0005464891 nifty_feynman[105623]: 167 167
Oct  1 12:18:05 np0005464891 systemd[1]: libpod-7a470f3b28db2338bfdc60a6e98719b495e0cf68f8b912ce0f0bf4ba9c2aa253.scope: Deactivated successfully.
Oct  1 12:18:05 np0005464891 podman[105607]: 2025-10-01 16:18:05.59942261 +0000 UTC m=+0.179570237 container died 7a470f3b28db2338bfdc60a6e98719b495e0cf68f8b912ce0f0bf4ba9c2aa253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 12:18:05 np0005464891 systemd[1]: var-lib-containers-storage-overlay-03333f65adfd535f4e45ea80b6a6122dbf5610095162d923ac89786f301b2110-merged.mount: Deactivated successfully.
Oct  1 12:18:05 np0005464891 podman[105607]: 2025-10-01 16:18:05.64712998 +0000 UTC m=+0.227277617 container remove 7a470f3b28db2338bfdc60a6e98719b495e0cf68f8b912ce0f0bf4ba9c2aa253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  1 12:18:05 np0005464891 systemd[1]: libpod-conmon-7a470f3b28db2338bfdc60a6e98719b495e0cf68f8b912ce0f0bf4ba9c2aa253.scope: Deactivated successfully.
Oct  1 12:18:05 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Oct  1 12:18:05 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Oct  1 12:18:05 np0005464891 podman[105647]: 2025-10-01 16:18:05.889953248 +0000 UTC m=+0.073454388 container create 8405769e21684a318dafdc5e057fa29507cac79e8ce42b8b5b6c5acedf19895a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mahavira, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:18:05 np0005464891 systemd[1]: Started libpod-conmon-8405769e21684a318dafdc5e057fa29507cac79e8ce42b8b5b6c5acedf19895a.scope.
Oct  1 12:18:05 np0005464891 podman[105647]: 2025-10-01 16:18:05.861283409 +0000 UTC m=+0.044784599 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:18:05 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:18:05 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fca63a4184dc89b26f0a15e3e5130e6ff7498f887c24245fbeba63d293564472/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:18:05 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fca63a4184dc89b26f0a15e3e5130e6ff7498f887c24245fbeba63d293564472/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:18:05 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fca63a4184dc89b26f0a15e3e5130e6ff7498f887c24245fbeba63d293564472/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:18:05 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fca63a4184dc89b26f0a15e3e5130e6ff7498f887c24245fbeba63d293564472/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:18:05 np0005464891 podman[105647]: 2025-10-01 16:18:05.989175415 +0000 UTC m=+0.172676595 container init 8405769e21684a318dafdc5e057fa29507cac79e8ce42b8b5b6c5acedf19895a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:18:06 np0005464891 podman[105647]: 2025-10-01 16:18:06.002585247 +0000 UTC m=+0.186086377 container start 8405769e21684a318dafdc5e057fa29507cac79e8ce42b8b5b6c5acedf19895a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mahavira, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  1 12:18:06 np0005464891 podman[105647]: 2025-10-01 16:18:06.006766755 +0000 UTC m=+0.190267945 container attach 8405769e21684a318dafdc5e057fa29507cac79e8ce42b8b5b6c5acedf19895a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:18:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v193: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:18:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:18:06 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Oct  1 12:18:06 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Oct  1 12:18:06 np0005464891 systemd[1]: packagekit.service: Deactivated successfully.
Oct  1 12:18:06 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 3.e scrub starts
Oct  1 12:18:06 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 3.e scrub ok
Oct  1 12:18:07 np0005464891 elastic_mahavira[105664]: {
Oct  1 12:18:07 np0005464891 elastic_mahavira[105664]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:18:07 np0005464891 elastic_mahavira[105664]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:18:07 np0005464891 elastic_mahavira[105664]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:18:07 np0005464891 elastic_mahavira[105664]:        "osd_id": 2,
Oct  1 12:18:07 np0005464891 elastic_mahavira[105664]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:18:07 np0005464891 elastic_mahavira[105664]:        "type": "bluestore"
Oct  1 12:18:07 np0005464891 elastic_mahavira[105664]:    },
Oct  1 12:18:07 np0005464891 elastic_mahavira[105664]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:18:07 np0005464891 elastic_mahavira[105664]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:18:07 np0005464891 elastic_mahavira[105664]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:18:07 np0005464891 elastic_mahavira[105664]:        "osd_id": 0,
Oct  1 12:18:07 np0005464891 elastic_mahavira[105664]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:18:07 np0005464891 elastic_mahavira[105664]:        "type": "bluestore"
Oct  1 12:18:07 np0005464891 elastic_mahavira[105664]:    },
Oct  1 12:18:07 np0005464891 elastic_mahavira[105664]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:18:07 np0005464891 elastic_mahavira[105664]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:18:07 np0005464891 elastic_mahavira[105664]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:18:07 np0005464891 elastic_mahavira[105664]:        "osd_id": 1,
Oct  1 12:18:07 np0005464891 elastic_mahavira[105664]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:18:07 np0005464891 elastic_mahavira[105664]:        "type": "bluestore"
Oct  1 12:18:07 np0005464891 elastic_mahavira[105664]:    }
Oct  1 12:18:07 np0005464891 elastic_mahavira[105664]: }
Oct  1 12:18:07 np0005464891 systemd[1]: libpod-8405769e21684a318dafdc5e057fa29507cac79e8ce42b8b5b6c5acedf19895a.scope: Deactivated successfully.
Oct  1 12:18:07 np0005464891 systemd[1]: libpod-8405769e21684a318dafdc5e057fa29507cac79e8ce42b8b5b6c5acedf19895a.scope: Consumed 1.098s CPU time.
Oct  1 12:18:07 np0005464891 podman[105697]: 2025-10-01 16:18:07.152266566 +0000 UTC m=+0.038746381 container died 8405769e21684a318dafdc5e057fa29507cac79e8ce42b8b5b6c5acedf19895a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mahavira, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:18:07 np0005464891 systemd[1]: var-lib-containers-storage-overlay-fca63a4184dc89b26f0a15e3e5130e6ff7498f887c24245fbeba63d293564472-merged.mount: Deactivated successfully.
Oct  1 12:18:07 np0005464891 podman[105697]: 2025-10-01 16:18:07.226366851 +0000 UTC m=+0.112846606 container remove 8405769e21684a318dafdc5e057fa29507cac79e8ce42b8b5b6c5acedf19895a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mahavira, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:18:07 np0005464891 systemd[1]: libpod-conmon-8405769e21684a318dafdc5e057fa29507cac79e8ce42b8b5b6c5acedf19895a.scope: Deactivated successfully.
Oct  1 12:18:07 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 5.5 deep-scrub starts
Oct  1 12:18:07 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 5.5 deep-scrub ok
Oct  1 12:18:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:18:07 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:18:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:18:07 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:18:07 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev c459b4e6-450e-446f-9113-e1130d66f53d does not exist
Oct  1 12:18:07 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 1b467e62-05e0-42f5-a3ec-98d12fb01327 does not exist
Oct  1 12:18:07 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:18:07 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:18:07 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Oct  1 12:18:07 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Oct  1 12:18:07 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 7.a deep-scrub starts
Oct  1 12:18:07 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 7.a deep-scrub ok
Oct  1 12:18:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v194: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 2 objects/s recovering
Oct  1 12:18:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Oct  1 12:18:08 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  1 12:18:08 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Oct  1 12:18:08 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Oct  1 12:18:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Oct  1 12:18:08 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  1 12:18:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Oct  1 12:18:08 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Oct  1 12:18:08 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  1 12:18:09 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Oct  1 12:18:09 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Oct  1 12:18:09 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  1 12:18:09 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 4.12 deep-scrub starts
Oct  1 12:18:09 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 4.12 deep-scrub ok
Oct  1 12:18:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v196: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Oct  1 12:18:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct  1 12:18:10 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  1 12:18:10 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 5.3 deep-scrub starts
Oct  1 12:18:10 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 5.3 deep-scrub ok
Oct  1 12:18:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Oct  1 12:18:10 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  1 12:18:10 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  1 12:18:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Oct  1 12:18:10 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Oct  1 12:18:10 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Oct  1 12:18:10 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Oct  1 12:18:10 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Oct  1 12:18:10 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Oct  1 12:18:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:18:11 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  1 12:18:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:18:11
Oct  1 12:18:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:18:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:18:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'vms', 'volumes', 'cephfs.cephfs.meta', 'images', 'backups', 'cephfs.cephfs.data', '.mgr']
Oct  1 12:18:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:18:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v198: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 1 objects/s recovering
Oct  1 12:18:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Oct  1 12:18:12 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct  1 12:18:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:18:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:18:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:18:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:18:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:18:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:18:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:18:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:18:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:18:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:18:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:18:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:18:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:18:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:18:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:18:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:18:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Oct  1 12:18:12 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct  1 12:18:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Oct  1 12:18:12 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Oct  1 12:18:12 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct  1 12:18:12 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Oct  1 12:18:12 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Oct  1 12:18:13 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct  1 12:18:13 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Oct  1 12:18:13 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Oct  1 12:18:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v200: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:18:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Oct  1 12:18:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct  1 12:18:14 np0005464891 systemd-logind[801]: New session 36 of user zuul.
Oct  1 12:18:14 np0005464891 systemd[1]: Started Session 36 of User zuul.
Oct  1 12:18:14 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Oct  1 12:18:14 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Oct  1 12:18:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Oct  1 12:18:14 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct  1 12:18:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct  1 12:18:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Oct  1 12:18:14 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Oct  1 12:18:14 np0005464891 python3.9[105915]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  1 12:18:15 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 2.b scrub starts
Oct  1 12:18:15 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 2.b scrub ok
Oct  1 12:18:15 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct  1 12:18:15 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Oct  1 12:18:15 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Oct  1 12:18:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v202: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:18:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Oct  1 12:18:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct  1 12:18:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:18:16 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Oct  1 12:18:16 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Oct  1 12:18:16 np0005464891 python3.9[106089]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:18:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Oct  1 12:18:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct  1 12:18:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Oct  1 12:18:16 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Oct  1 12:18:16 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct  1 12:18:16 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 7.1c deep-scrub starts
Oct  1 12:18:16 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 7.1c deep-scrub ok
Oct  1 12:18:17 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct  1 12:18:17 np0005464891 python3.9[106245]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:18:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v204: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:18:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Oct  1 12:18:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct  1 12:18:18 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Oct  1 12:18:18 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Oct  1 12:18:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Oct  1 12:18:18 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct  1 12:18:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct  1 12:18:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Oct  1 12:18:18 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Oct  1 12:18:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 91 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=64/65 n=5 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=91 pruub=10.254506111s) [2] r=-1 lpr=91 pi=[64,91)/1 crt=54'385 mlcod 0'0 active pruub 192.668518066s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:18 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 91 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=64/65 n=5 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=91 pruub=10.254405022s) [2] r=-1 lpr=91 pi=[64,91)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 192.668518066s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:18:18 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 91 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=91) [2] r=0 lpr=91 pi=[64,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:18:18 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 6.1c deep-scrub starts
Oct  1 12:18:18 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 6.1c deep-scrub ok
Oct  1 12:18:18 np0005464891 python3.9[106398]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:18:18 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Oct  1 12:18:18 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Oct  1 12:18:19 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Oct  1 12:18:19 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Oct  1 12:18:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Oct  1 12:18:19 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct  1 12:18:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Oct  1 12:18:19 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Oct  1 12:18:19 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=92) [2]/[0] r=-1 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:19 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=92) [2]/[0] r=-1 lpr=92 pi=[64,92)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:18:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 92 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=64/65 n=5 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=92) [2]/[0] r=0 lpr=92 pi=[64,92)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:19 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 92 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=64/65 n=5 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=92) [2]/[0] r=0 lpr=92 pi=[64,92)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:18:19 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Oct  1 12:18:19 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Oct  1 12:18:19 np0005464891 python3.9[106552]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:18:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v207: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:18:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Oct  1 12:18:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct  1 12:18:20 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 2.1f deep-scrub starts
Oct  1 12:18:20 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 2.1f deep-scrub ok
Oct  1 12:18:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Oct  1 12:18:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct  1 12:18:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Oct  1 12:18:20 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct  1 12:18:20 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Oct  1 12:18:20 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 93 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=92/93 n=5 ec=57/48 lis/c=64/64 les/c/f=65/65/0 sis=92) [2]/[0] async=[2] r=0 lpr=92 pi=[64,92)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:18:20 np0005464891 python3.9[106702]: ansible-ansible.builtin.service_facts Invoked
Oct  1 12:18:20 np0005464891 network[106719]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  1 12:18:20 np0005464891 network[106720]: 'network-scripts' will be removed from distribution in near future.
Oct  1 12:18:20 np0005464891 network[106721]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  1 12:18:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:18:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Oct  1 12:18:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Oct  1 12:18:21 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Oct  1 12:18:21 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 94 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=92/93 n=5 ec=57/48 lis/c=92/64 les/c/f=93/65/0 sis=94 pruub=15.431584358s) [2] async=[2] r=-1 lpr=94 pi=[64,94)/1 crt=54'385 mlcod 54'385 active pruub 200.449707031s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:21 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 94 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=92/93 n=5 ec=57/48 lis/c=92/64 les/c/f=93/65/0 sis=94 pruub=15.431507111s) [2] r=-1 lpr=94 pi=[64,94)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 200.449707031s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:18:21 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 94 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=92/64 les/c/f=93/65/0 sis=94) [2] r=0 lpr=94 pi=[64,94)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:21 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 94 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=92/64 les/c/f=93/65/0 sis=94) [2] r=0 lpr=94 pi=[64,94)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:18:21 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Oct  1 12:18:21 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Oct  1 12:18:21 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct  1 12:18:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:18:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:18:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:18:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:18:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:18:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:18:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:18:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:18:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:18:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:18:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:18:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:18:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:18:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:18:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:18:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:18:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:18:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:18:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:18:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:18:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:18:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:18:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:18:21 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Oct  1 12:18:21 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Oct  1 12:18:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v210: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:18:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Oct  1 12:18:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct  1 12:18:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Oct  1 12:18:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct  1 12:18:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Oct  1 12:18:22 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Oct  1 12:18:22 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 95 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=95 pruub=15.577956200s) [1] r=-1 lpr=95 pi=[65,95)/1 crt=54'385 mlcod 0'0 active pruub 201.664855957s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:22 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 95 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=95 pruub=15.577851295s) [1] r=-1 lpr=95 pi=[65,95)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 201.664855957s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:18:22 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 95 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=95) [1] r=0 lpr=95 pi=[65,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:18:22 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 95 pg[9.13( v 54'385 (0'0,54'385] local-lis/les=94/95 n=5 ec=57/48 lis/c=92/64 les/c/f=93/65/0 sis=94) [2] r=0 lpr=94 pi=[64,94)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:18:22 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct  1 12:18:22 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct  1 12:18:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Oct  1 12:18:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Oct  1 12:18:23 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Oct  1 12:18:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 96 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=96) [1]/[0] r=-1 lpr=96 pi=[65,96)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:23 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 96 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=96) [1]/[0] r=-1 lpr=96 pi=[65,96)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:18:23 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 96 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=96) [1]/[0] r=0 lpr=96 pi=[65,96)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:23 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 96 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=96) [1]/[0] r=0 lpr=96 pi=[65,96)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:18:23 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 5.1e deep-scrub starts
Oct  1 12:18:23 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 5.1e deep-scrub ok
Oct  1 12:18:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v213: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct  1 12:18:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Oct  1 12:18:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Oct  1 12:18:24 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Oct  1 12:18:24 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Oct  1 12:18:24 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 97 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=96/97 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[65,96)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:18:24 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Oct  1 12:18:24 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 7.c deep-scrub starts
Oct  1 12:18:24 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 7.c deep-scrub ok
Oct  1 12:18:25 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 7.13 deep-scrub starts
Oct  1 12:18:25 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 7.13 deep-scrub ok
Oct  1 12:18:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Oct  1 12:18:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Oct  1 12:18:25 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Oct  1 12:18:25 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 98 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=96/65 les/c/f=97/66/0 sis=98) [1] r=0 lpr=98 pi=[65,98)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:25 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 98 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=96/65 les/c/f=97/66/0 sis=98) [1] r=0 lpr=98 pi=[65,98)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:18:25 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 98 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=96/97 n=5 ec=57/48 lis/c=96/65 les/c/f=97/66/0 sis=98 pruub=15.047797203s) [1] async=[1] r=-1 lpr=98 pi=[65,98)/1 crt=54'385 mlcod 54'385 active pruub 204.621414185s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:25 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 98 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=96/97 n=5 ec=57/48 lis/c=96/65 les/c/f=97/66/0 sis=98 pruub=15.047705650s) [1] r=-1 lpr=98 pi=[65,98)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 204.621414185s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:18:25 np0005464891 python3.9[106985]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:18:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v216: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct  1 12:18:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:18:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Oct  1 12:18:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Oct  1 12:18:26 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Oct  1 12:18:26 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 99 pg[9.15( v 54'385 (0'0,54'385] local-lis/les=98/99 n=5 ec=57/48 lis/c=96/65 les/c/f=97/66/0 sis=98) [1] r=0 lpr=98 pi=[65,98)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:18:26 np0005464891 python3.9[107135]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:18:27 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Oct  1 12:18:27 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Oct  1 12:18:27 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Oct  1 12:18:27 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Oct  1 12:18:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v218: 321 pgs: 1 active+clean+scrubbing, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 209 B/s wr, 6 op/s; 45 B/s, 1 objects/s recovering
Oct  1 12:18:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Oct  1 12:18:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  1 12:18:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Oct  1 12:18:28 np0005464891 python3.9[107289]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:18:28 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Oct  1 12:18:29 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Oct  1 12:18:29 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Oct  1 12:18:29 np0005464891 python3.9[107447]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 12:18:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v219: 321 pgs: 1 active+clean+scrubbing, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Oct  1 12:18:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Oct  1 12:18:30 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  1 12:18:30 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Oct  1 12:18:30 np0005464891 python3.9[107531]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 12:18:30 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Oct  1 12:18:31 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 3.c deep-scrub starts
Oct  1 12:18:31 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Oct  1 12:18:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v220: 321 pgs: 1 active+clean+scrubbing, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 130 B/s wr, 4 op/s; 28 B/s, 1 objects/s recovering
Oct  1 12:18:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Oct  1 12:18:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  1 12:18:33 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Oct  1 12:18:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v221: 321 pgs: 1 active+clean+scrubbing, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 121 B/s wr, 4 op/s; 26 B/s, 0 objects/s recovering
Oct  1 12:18:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Oct  1 12:18:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  1 12:18:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v222: 321 pgs: 1 active+clean+scrubbing, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 102 B/s wr, 3 op/s; 21 B/s, 0 objects/s recovering
Oct  1 12:18:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Oct  1 12:18:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  1 12:18:37 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 9.775462151s
Oct  1 12:18:37 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 9.775462151s
Oct  1 12:18:37 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.775655746s, txc = 0x55a66e92c300
Oct  1 12:18:37 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Oct  1 12:18:37 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Oct  1 12:18:37 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Oct  1 12:18:37 np0005464891 ceph-osd[87649]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 8.111556053s
Oct  1 12:18:37 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_commit, latency = 9.635681152s
Oct  1 12:18:37 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.635916710s, txc = 0x5640506f5b00
Oct  1 12:18:37 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 9.635682106s
Oct  1 12:18:37 np0005464891 ceph-osd[87649]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 8.111557007s
Oct  1 12:18:37 np0005464891 ceph-osd[87649]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.111799240s, txc = 0x5605ae9ed200
Oct  1 12:18:37 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Oct  1 12:18:37 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 3.c deep-scrub ok
Oct  1 12:18:37 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Oct  1 12:18:37 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.850442886s, txc = 0x55a66d19c900
Oct  1 12:18:37 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.812510490s, txc = 0x55a66e917b00
Oct  1 12:18:37 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.810244083s, txc = 0x55a66e956600
Oct  1 12:18:37 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.719147682s, txc = 0x56404f5be000
Oct  1 12:18:37 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.678269863s, txc = 0x5640508d5200
Oct  1 12:18:37 np0005464891 ceph-osd[87649]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.210989952s, txc = 0x5605aee60600
Oct  1 12:18:37 np0005464891 ceph-osd[87649]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.175872803s, txc = 0x5605af122f00
Oct  1 12:18:37 np0005464891 ceph-osd[87649]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.216737270s, txc = 0x5605ace7ef00
Oct  1 12:18:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  1 12:18:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Oct  1 12:18:37 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Oct  1 12:18:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v224: 321 pgs: 1 active+clean+scrubbing+deep, 5 active+clean+scrubbing, 315 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:18:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Oct  1 12:18:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct  1 12:18:38 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  1 12:18:38 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  1 12:18:38 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  1 12:18:38 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  1 12:18:38 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  1 12:18:38 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  1 12:18:38 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct  1 12:18:38 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Oct  1 12:18:38 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Oct  1 12:18:38 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 3.f scrub starts
Oct  1 12:18:38 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 3.f scrub ok
Oct  1 12:18:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Oct  1 12:18:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  1 12:18:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  1 12:18:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  1 12:18:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  1 12:18:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct  1 12:18:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Oct  1 12:18:38 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Oct  1 12:18:38 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 100 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=72/73 n=5 ec=57/48 lis/c=72/72 les/c/f=73/73/0 sis=100 pruub=11.215332031s) [0] r=-1 lpr=100 pi=[72,100)/1 crt=54'385 mlcod 0'0 active pruub 202.905593872s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:38 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 101 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=72/73 n=5 ec=57/48 lis/c=72/72 les/c/f=73/73/0 sis=100 pruub=11.215259552s) [0] r=-1 lpr=100 pi=[72,100)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 202.905593872s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:18:38 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=72/72 les/c/f=73/73/0 sis=100) [0] r=0 lpr=101 pi=[72,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:18:39 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Oct  1 12:18:39 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Oct  1 12:18:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  1 12:18:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  1 12:18:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  1 12:18:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  1 12:18:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct  1 12:18:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Oct  1 12:18:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Oct  1 12:18:39 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Oct  1 12:18:39 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 102 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=72/72 les/c/f=73/73/0 sis=102) [0]/[2] r=-1 lpr=102 pi=[72,102)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:39 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 102 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=72/72 les/c/f=73/73/0 sis=102) [0]/[2] r=-1 lpr=102 pi=[72,102)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:18:39 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 102 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=72/73 n=5 ec=57/48 lis/c=72/72 les/c/f=73/73/0 sis=102) [0]/[2] r=0 lpr=102 pi=[72,102)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:39 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 102 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=72/73 n=5 ec=57/48 lis/c=72/72 les/c/f=73/73/0 sis=102) [0]/[2] r=0 lpr=102 pi=[72,102)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:18:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v227: 321 pgs: 1 active+clean+scrubbing+deep, 5 active+clean+scrubbing, 315 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:18:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Oct  1 12:18:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct  1 12:18:40 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Oct  1 12:18:40 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Oct  1 12:18:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Oct  1 12:18:40 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct  1 12:18:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct  1 12:18:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Oct  1 12:18:41 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Oct  1 12:18:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:18:41 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 2.d scrub starts
Oct  1 12:18:41 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 2.d scrub ok
Oct  1 12:18:41 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Oct  1 12:18:41 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Oct  1 12:18:41 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 103 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=102/103 n=5 ec=57/48 lis/c=72/72 les/c/f=73/73/0 sis=102) [0]/[2] async=[0] r=0 lpr=102 pi=[72,102)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:18:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Oct  1 12:18:42 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct  1 12:18:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Oct  1 12:18:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v229: 321 pgs: 1 active+clean+scrubbing+deep, 5 active+clean+scrubbing, 315 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:18:42 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Oct  1 12:18:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Oct  1 12:18:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct  1 12:18:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:18:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:18:42 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 104 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=102/103 n=5 ec=57/48 lis/c=102/72 les/c/f=103/73/0 sis=104 pruub=15.389219284s) [0] async=[0] r=-1 lpr=104 pi=[72,104)/1 crt=54'385 mlcod 54'385 active pruub 210.266387939s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:42 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 104 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=102/103 n=5 ec=57/48 lis/c=102/72 les/c/f=103/73/0 sis=104 pruub=15.388957024s) [0] r=-1 lpr=104 pi=[72,104)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 210.266387939s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:18:42 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 104 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=102/72 les/c/f=103/73/0 sis=104) [0] r=0 lpr=104 pi=[72,104)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:42 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 104 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=102/72 les/c/f=103/73/0 sis=104) [0] r=0 lpr=104 pi=[72,104)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:18:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:18:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:18:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:18:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:18:42 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 7.f scrub starts
Oct  1 12:18:42 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 7.f scrub ok
Oct  1 12:18:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Oct  1 12:18:43 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct  1 12:18:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct  1 12:18:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Oct  1 12:18:43 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Oct  1 12:18:43 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 105 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=105 pruub=10.659099579s) [2] r=-1 lpr=105 pi=[65,105)/1 crt=54'385 mlcod 0'0 active pruub 217.667846680s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:43 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 105 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=105 pruub=10.658953667s) [2] r=-1 lpr=105 pi=[65,105)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 217.667846680s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:18:43 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 105 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=105) [2] r=0 lpr=105 pi=[65,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:18:43 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 105 pg[9.16( v 54'385 (0'0,54'385] local-lis/les=104/105 n=5 ec=57/48 lis/c=102/72 les/c/f=103/73/0 sis=104) [0] r=0 lpr=104 pi=[72,104)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:18:43 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Oct  1 12:18:43 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Oct  1 12:18:43 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Oct  1 12:18:43 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Oct  1 12:18:43 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Oct  1 12:18:43 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Oct  1 12:18:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v232: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 0 objects/s recovering
Oct  1 12:18:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Oct  1 12:18:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Oct  1 12:18:44 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct  1 12:18:44 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Oct  1 12:18:44 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 106 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=106) [2]/[0] r=0 lpr=106 pi=[65,106)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:44 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 106 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=65/66 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=106) [2]/[0] r=0 lpr=106 pi=[65,106)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:18:44 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=106) [2]/[0] r=-1 lpr=106 pi=[65,106)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:44 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=106) [2]/[0] r=-1 lpr=106 pi=[65,106)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:18:44 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Oct  1 12:18:44 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Oct  1 12:18:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Oct  1 12:18:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Oct  1 12:18:45 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Oct  1 12:18:45 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 107 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=106/107 n=5 ec=57/48 lis/c=65/65 les/c/f=66/66/0 sis=106) [2]/[0] async=[2] r=0 lpr=106 pi=[65,106)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:18:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v235: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Oct  1 12:18:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:18:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Oct  1 12:18:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Oct  1 12:18:46 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Oct  1 12:18:46 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 108 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=106/65 les/c/f=107/66/0 sis=108) [2] r=0 lpr=108 pi=[65,108)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:46 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 108 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=106/65 les/c/f=107/66/0 sis=108) [2] r=0 lpr=108 pi=[65,108)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:18:46 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 108 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=106/107 n=5 ec=57/48 lis/c=106/65 les/c/f=107/66/0 sis=108 pruub=15.342240334s) [2] async=[2] r=-1 lpr=108 pi=[65,108)/1 crt=54'385 mlcod 54'385 active pruub 225.362213135s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:46 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 108 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=106/107 n=5 ec=57/48 lis/c=106/65 les/c/f=107/66/0 sis=108 pruub=15.342148781s) [2] r=-1 lpr=108 pi=[65,108)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 225.362213135s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:18:46 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct  1 12:18:46 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct  1 12:18:46 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Oct  1 12:18:46 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Oct  1 12:18:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Oct  1 12:18:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Oct  1 12:18:47 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Oct  1 12:18:47 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 109 pg[9.19( v 54'385 (0'0,54'385] local-lis/les=108/109 n=5 ec=57/48 lis/c=106/65 les/c/f=107/66/0 sis=108) [2] r=0 lpr=108 pi=[65,108)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:18:47 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Oct  1 12:18:47 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Oct  1 12:18:47 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Oct  1 12:18:47 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Oct  1 12:18:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v238: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct  1 12:18:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Oct  1 12:18:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct  1 12:18:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Oct  1 12:18:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct  1 12:18:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Oct  1 12:18:48 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Oct  1 12:18:48 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct  1 12:18:48 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Oct  1 12:18:48 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Oct  1 12:18:49 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct  1 12:18:49 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 7.e scrub starts
Oct  1 12:18:49 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 7.e scrub ok
Oct  1 12:18:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v240: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 1 objects/s recovering
Oct  1 12:18:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Oct  1 12:18:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct  1 12:18:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Oct  1 12:18:50 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct  1 12:18:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct  1 12:18:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Oct  1 12:18:50 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Oct  1 12:18:50 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Oct  1 12:18:50 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Oct  1 12:18:50 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Oct  1 12:18:50 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Oct  1 12:18:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:18:51 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct  1 12:18:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v242: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Oct  1 12:18:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Oct  1 12:18:52 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct  1 12:18:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Oct  1 12:18:52 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct  1 12:18:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Oct  1 12:18:52 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Oct  1 12:18:52 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct  1 12:18:52 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Oct  1 12:18:52 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Oct  1 12:18:52 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 112 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=84/85 n=5 ec=57/48 lis/c=84/84 les/c/f=85/85/0 sis=112 pruub=8.661423683s) [0] r=-1 lpr=112 pi=[84,112)/1 crt=54'385 mlcod 0'0 active pruub 214.169311523s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:52 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 112 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=84/85 n=5 ec=57/48 lis/c=84/84 les/c/f=85/85/0 sis=112 pruub=8.661355972s) [0] r=-1 lpr=112 pi=[84,112)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 214.169311523s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:18:52 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=84/84 les/c/f=85/85/0 sis=112) [0] r=0 lpr=112 pi=[84,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:18:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Oct  1 12:18:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Oct  1 12:18:53 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Oct  1 12:18:53 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct  1 12:18:53 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=84/84 les/c/f=85/85/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[84,113)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:53 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=84/84 les/c/f=85/85/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[84,113)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:18:53 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 113 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=84/85 n=5 ec=57/48 lis/c=84/84 les/c/f=85/85/0 sis=113) [0]/[2] r=0 lpr=113 pi=[84,113)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:53 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 113 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=84/85 n=5 ec=57/48 lis/c=84/84 les/c/f=85/85/0 sis=113) [0]/[2] r=0 lpr=113 pi=[84,113)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:18:53 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Oct  1 12:18:53 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Oct  1 12:18:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v245: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:18:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Oct  1 12:18:54 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct  1 12:18:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Oct  1 12:18:54 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct  1 12:18:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Oct  1 12:18:54 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Oct  1 12:18:54 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct  1 12:18:54 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 114 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=113/114 n=5 ec=57/48 lis/c=84/84 les/c/f=85/85/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[84,113)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:18:54 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 5.f scrub starts
Oct  1 12:18:54 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 5.f scrub ok
Oct  1 12:18:54 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct  1 12:18:54 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct  1 12:18:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Oct  1 12:18:55 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct  1 12:18:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Oct  1 12:18:55 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Oct  1 12:18:55 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 115 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=113/114 n=5 ec=57/48 lis/c=113/84 les/c/f=114/85/0 sis=115 pruub=14.985365868s) [0] async=[0] r=-1 lpr=115 pi=[84,115)/1 crt=54'385 mlcod 54'385 active pruub 223.058273315s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:55 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 115 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=113/114 n=5 ec=57/48 lis/c=113/84 les/c/f=114/85/0 sis=115 pruub=14.985254288s) [0] r=-1 lpr=115 pi=[84,115)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 223.058273315s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:18:55 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 115 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=113/84 les/c/f=114/85/0 sis=115) [0] r=0 lpr=115 pi=[84,115)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:55 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 115 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=113/84 les/c/f=114/85/0 sis=115) [0] r=0 lpr=115 pi=[84,115)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:18:55 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Oct  1 12:18:55 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Oct  1 12:18:55 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Oct  1 12:18:55 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Oct  1 12:18:55 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct  1 12:18:55 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct  1 12:18:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v248: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:18:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Oct  1 12:18:56 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct  1 12:18:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:18:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Oct  1 12:18:56 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct  1 12:18:56 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct  1 12:18:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Oct  1 12:18:56 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Oct  1 12:18:56 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 116 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=72/73 n=5 ec=57/48 lis/c=72/72 les/c/f=73/73/0 sis=116 pruub=9.806852341s) [0] r=-1 lpr=116 pi=[72,116)/1 crt=54'385 mlcod 0'0 active pruub 218.905944824s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:56 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 116 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=72/73 n=5 ec=57/48 lis/c=72/72 les/c/f=73/73/0 sis=116 pruub=9.806078911s) [0] r=-1 lpr=116 pi=[72,116)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 218.905944824s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:18:56 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=72/72 les/c/f=73/73/0 sis=116) [0] r=0 lpr=116 pi=[72,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:18:56 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 116 pg[9.1c( v 54'385 (0'0,54'385] local-lis/les=115/116 n=5 ec=57/48 lis/c=113/84 les/c/f=114/85/0 sis=115) [0] r=0 lpr=115 pi=[84,115)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:18:56 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Oct  1 12:18:56 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Oct  1 12:18:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Oct  1 12:18:57 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct  1 12:18:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Oct  1 12:18:57 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Oct  1 12:18:57 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=72/72 les/c/f=73/73/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[72,117)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:57 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=72/72 les/c/f=73/73/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[72,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:18:57 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 117 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=72/73 n=5 ec=57/48 lis/c=72/72 les/c/f=73/73/0 sis=117) [0]/[2] r=0 lpr=117 pi=[72,117)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:57 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 117 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=72/73 n=5 ec=57/48 lis/c=72/72 les/c/f=73/73/0 sis=117) [0]/[2] r=0 lpr=117 pi=[72,117)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:18:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v251: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct  1 12:18:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Oct  1 12:18:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Oct  1 12:18:58 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Oct  1 12:18:58 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 118 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=117/118 n=5 ec=57/48 lis/c=72/72 les/c/f=73/73/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[72,117)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:18:58 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Oct  1 12:18:58 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Oct  1 12:18:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Oct  1 12:18:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Oct  1 12:18:59 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Oct  1 12:18:59 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 119 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=117/118 n=5 ec=57/48 lis/c=117/72 les/c/f=118/73/0 sis=119 pruub=14.975466728s) [0] async=[0] r=-1 lpr=119 pi=[72,119)/1 crt=54'385 mlcod 54'385 active pruub 227.184326172s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:59 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 119 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=117/118 n=5 ec=57/48 lis/c=117/72 les/c/f=118/73/0 sis=119 pruub=14.974881172s) [0] r=-1 lpr=119 pi=[72,119)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 227.184326172s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:18:59 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 119 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=117/72 les/c/f=118/73/0 sis=119) [0] r=0 lpr=119 pi=[72,119)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:18:59 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 119 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=117/72 les/c/f=118/73/0 sis=119) [0] r=0 lpr=119 pi=[72,119)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:19:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v254: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct  1 12:19:00 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 10.a scrub starts
Oct  1 12:19:00 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 10.a scrub ok
Oct  1 12:19:00 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Oct  1 12:19:00 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Oct  1 12:19:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Oct  1 12:19:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Oct  1 12:19:00 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Oct  1 12:19:00 np0005464891 ceph-osd[87649]: osd.0 pg_epoch: 120 pg[9.1e( v 54'385 (0'0,54'385] local-lis/les=119/120 n=5 ec=57/48 lis/c=117/72 les/c/f=118/73/0 sis=119) [0] r=0 lpr=119 pi=[72,119)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:19:00 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Oct  1 12:19:00 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Oct  1 12:19:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:19:01 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 10.c scrub starts
Oct  1 12:19:01 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 10.c scrub ok
Oct  1 12:19:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v256: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 1 objects/s recovering
Oct  1 12:19:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v257: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct  1 12:19:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  1 12:19:04 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 12:19:04 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Oct  1 12:19:04 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Oct  1 12:19:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Oct  1 12:19:04 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 12:19:04 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 12:19:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Oct  1 12:19:04 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Oct  1 12:19:04 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 121 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=75/76 n=5 ec=57/48 lis/c=75/75 les/c/f=76/76/0 sis=121 pruub=12.509227753s) [1] r=-1 lpr=121 pi=[75,121)/1 crt=54'385 mlcod 0'0 active pruub 229.954879761s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:19:04 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 121 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=75/76 n=5 ec=57/48 lis/c=75/75 les/c/f=76/76/0 sis=121 pruub=12.509121895s) [1] r=-1 lpr=121 pi=[75,121)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 229.954879761s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:19:04 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 121 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=75/75 les/c/f=76/76/0 sis=121) [1] r=0 lpr=121 pi=[75,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:19:05 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 2.3 deep-scrub starts
Oct  1 12:19:05 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 2.3 deep-scrub ok
Oct  1 12:19:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Oct  1 12:19:05 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 12:19:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Oct  1 12:19:05 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Oct  1 12:19:05 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 122 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=75/76 n=5 ec=57/48 lis/c=75/75 les/c/f=76/76/0 sis=122) [1]/[2] r=0 lpr=122 pi=[75,122)/1 crt=54'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:19:05 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 122 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=75/76 n=5 ec=57/48 lis/c=75/75 les/c/f=76/76/0 sis=122) [1]/[2] r=0 lpr=122 pi=[75,122)/1 crt=54'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 12:19:05 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 122 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=75/75 les/c/f=76/76/0 sis=122) [1]/[2] r=-1 lpr=122 pi=[75,122)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:19:05 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 122 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/48 lis/c=75/75 les/c/f=76/76/0 sis=122) [1]/[2] r=-1 lpr=122 pi=[75,122)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 12:19:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v260: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct  1 12:19:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:19:06 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Oct  1 12:19:06 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Oct  1 12:19:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Oct  1 12:19:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Oct  1 12:19:06 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Oct  1 12:19:07 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Oct  1 12:19:07 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Oct  1 12:19:07 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 2.a scrub starts
Oct  1 12:19:07 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 123 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=122/123 n=5 ec=57/48 lis/c=75/75 les/c/f=76/76/0 sis=122) [1]/[2] async=[1] r=0 lpr=122 pi=[75,122)/1 crt=54'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:19:07 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 2.a scrub ok
Oct  1 12:19:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Oct  1 12:19:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Oct  1 12:19:07 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Oct  1 12:19:07 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 124 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=122/123 n=5 ec=57/48 lis/c=122/75 les/c/f=123/76/0 sis=124 pruub=15.860220909s) [1] async=[1] r=-1 lpr=124 pi=[75,124)/1 crt=54'385 mlcod 54'385 active pruub 236.158325195s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:19:07 np0005464891 ceph-osd[89750]: osd.2 pg_epoch: 124 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=122/123 n=5 ec=57/48 lis/c=122/75 les/c/f=123/76/0 sis=124 pruub=15.859159470s) [1] r=-1 lpr=124 pi=[75,124)/1 crt=54'385 mlcod 0'0 unknown NOTIFY pruub 236.158325195s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 12:19:07 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 124 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=122/75 les/c/f=123/76/0 sis=124) [1] r=0 lpr=124 pi=[75,124)/1 luod=0'0 crt=54'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 12:19:07 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 124 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=0/0 n=5 ec=57/48 lis/c=122/75 les/c/f=123/76/0 sis=124) [1] r=0 lpr=124 pi=[75,124)/1 crt=54'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 12:19:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v263: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct  1 12:19:08 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 5.c scrub starts
Oct  1 12:19:08 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 5.c scrub ok
Oct  1 12:19:08 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Oct  1 12:19:08 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Oct  1 12:19:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:19:08 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:19:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:19:08 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:19:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:19:08 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:19:08 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 3b0887b5-0499-4a77-8fdc-bd5da9a5e018 does not exist
Oct  1 12:19:08 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 3469d9e9-2de5-44ad-a4ef-42466f0563c1 does not exist
Oct  1 12:19:08 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 0789f565-83fc-4653-a1bd-bf69ee0022de does not exist
Oct  1 12:19:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:19:08 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:19:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:19:08 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:19:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:19:08 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:19:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Oct  1 12:19:08 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:19:08 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:19:08 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:19:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Oct  1 12:19:08 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Oct  1 12:19:08 np0005464891 ceph-osd[88747]: osd.1 pg_epoch: 125 pg[9.1f( v 54'385 (0'0,54'385] local-lis/les=124/125 n=5 ec=57/48 lis/c=122/75 les/c/f=123/76/0 sis=124) [1] r=0 lpr=124 pi=[75,124)/1 crt=54'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 12:19:09 np0005464891 podman[107949]: 2025-10-01 16:19:09.126561523 +0000 UTC m=+0.065031021 container create 7ffa67bf4385b06f36ccb12d49b5071ee564183a80054c9d851627ff4feacdf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 12:19:09 np0005464891 systemd[1]: Started libpod-conmon-7ffa67bf4385b06f36ccb12d49b5071ee564183a80054c9d851627ff4feacdf0.scope.
Oct  1 12:19:09 np0005464891 podman[107949]: 2025-10-01 16:19:09.0973712 +0000 UTC m=+0.035840778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:19:09 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:19:09 np0005464891 podman[107949]: 2025-10-01 16:19:09.235766637 +0000 UTC m=+0.174236225 container init 7ffa67bf4385b06f36ccb12d49b5071ee564183a80054c9d851627ff4feacdf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mendeleev, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  1 12:19:09 np0005464891 podman[107949]: 2025-10-01 16:19:09.243176449 +0000 UTC m=+0.181645977 container start 7ffa67bf4385b06f36ccb12d49b5071ee564183a80054c9d851627ff4feacdf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mendeleev, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:19:09 np0005464891 podman[107949]: 2025-10-01 16:19:09.250657715 +0000 UTC m=+0.189127303 container attach 7ffa67bf4385b06f36ccb12d49b5071ee564183a80054c9d851627ff4feacdf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Oct  1 12:19:09 np0005464891 boring_mendeleev[107966]: 167 167
Oct  1 12:19:09 np0005464891 systemd[1]: libpod-7ffa67bf4385b06f36ccb12d49b5071ee564183a80054c9d851627ff4feacdf0.scope: Deactivated successfully.
Oct  1 12:19:09 np0005464891 podman[107949]: 2025-10-01 16:19:09.253888765 +0000 UTC m=+0.192358303 container died 7ffa67bf4385b06f36ccb12d49b5071ee564183a80054c9d851627ff4feacdf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:19:09 np0005464891 systemd[1]: var-lib-containers-storage-overlay-069f75805cec58f08a796648e1d0836fc187a719964606f9ff7c915356cf29a4-merged.mount: Deactivated successfully.
Oct  1 12:19:09 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 5.18 deep-scrub starts
Oct  1 12:19:09 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 5.18 deep-scrub ok
Oct  1 12:19:09 np0005464891 podman[107949]: 2025-10-01 16:19:09.328039991 +0000 UTC m=+0.266509519 container remove 7ffa67bf4385b06f36ccb12d49b5071ee564183a80054c9d851627ff4feacdf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mendeleev, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct  1 12:19:09 np0005464891 systemd[1]: libpod-conmon-7ffa67bf4385b06f36ccb12d49b5071ee564183a80054c9d851627ff4feacdf0.scope: Deactivated successfully.
Oct  1 12:19:09 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Oct  1 12:19:09 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Oct  1 12:19:09 np0005464891 podman[107991]: 2025-10-01 16:19:09.577534787 +0000 UTC m=+0.076993877 container create ee41595bbfdb9a9464e8f8a4bc7285d65693e306586dd473a6d27d7e71896b71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_pike, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:19:09 np0005464891 systemd[1]: Started libpod-conmon-ee41595bbfdb9a9464e8f8a4bc7285d65693e306586dd473a6d27d7e71896b71.scope.
Oct  1 12:19:09 np0005464891 podman[107991]: 2025-10-01 16:19:09.545950715 +0000 UTC m=+0.045409865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:19:09 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:19:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/461135ad4cb186f1a8ef6bd9732910677b4b33f08975db70d6508fae802cde8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:19:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/461135ad4cb186f1a8ef6bd9732910677b4b33f08975db70d6508fae802cde8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:19:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/461135ad4cb186f1a8ef6bd9732910677b4b33f08975db70d6508fae802cde8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:19:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/461135ad4cb186f1a8ef6bd9732910677b4b33f08975db70d6508fae802cde8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:19:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/461135ad4cb186f1a8ef6bd9732910677b4b33f08975db70d6508fae802cde8a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:19:09 np0005464891 podman[107991]: 2025-10-01 16:19:09.706043148 +0000 UTC m=+0.205502318 container init ee41595bbfdb9a9464e8f8a4bc7285d65693e306586dd473a6d27d7e71896b71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_pike, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:19:09 np0005464891 podman[107991]: 2025-10-01 16:19:09.723539902 +0000 UTC m=+0.222999022 container start ee41595bbfdb9a9464e8f8a4bc7285d65693e306586dd473a6d27d7e71896b71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:19:09 np0005464891 podman[107991]: 2025-10-01 16:19:09.742599164 +0000 UTC m=+0.242058294 container attach ee41595bbfdb9a9464e8f8a4bc7285d65693e306586dd473a6d27d7e71896b71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_pike, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 12:19:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v265: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 1 objects/s recovering
Oct  1 12:19:10 np0005464891 competent_pike[108008]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:19:10 np0005464891 competent_pike[108008]: --> relative data size: 1.0
Oct  1 12:19:10 np0005464891 competent_pike[108008]: --> All data devices are unavailable
Oct  1 12:19:10 np0005464891 systemd[1]: libpod-ee41595bbfdb9a9464e8f8a4bc7285d65693e306586dd473a6d27d7e71896b71.scope: Deactivated successfully.
Oct  1 12:19:10 np0005464891 systemd[1]: libpod-ee41595bbfdb9a9464e8f8a4bc7285d65693e306586dd473a6d27d7e71896b71.scope: Consumed 1.138s CPU time.
Oct  1 12:19:10 np0005464891 podman[108038]: 2025-10-01 16:19:10.966231215 +0000 UTC m=+0.037687863 container died ee41595bbfdb9a9464e8f8a4bc7285d65693e306586dd473a6d27d7e71896b71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:19:10 np0005464891 systemd[1]: var-lib-containers-storage-overlay-461135ad4cb186f1a8ef6bd9732910677b4b33f08975db70d6508fae802cde8a-merged.mount: Deactivated successfully.
Oct  1 12:19:11 np0005464891 podman[108038]: 2025-10-01 16:19:11.028903278 +0000 UTC m=+0.100359866 container remove ee41595bbfdb9a9464e8f8a4bc7285d65693e306586dd473a6d27d7e71896b71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_pike, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:19:11 np0005464891 systemd[1]: libpod-conmon-ee41595bbfdb9a9464e8f8a4bc7285d65693e306586dd473a6d27d7e71896b71.scope: Deactivated successfully.
Oct  1 12:19:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:19:11 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 5.1a deep-scrub starts
Oct  1 12:19:11 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 5.1a deep-scrub ok
Oct  1 12:19:11 np0005464891 podman[108196]: 2025-10-01 16:19:11.880625313 +0000 UTC m=+0.068810095 container create bd46f8b0736e42d6346c6eaf6e2d386b8ad9f71dd1992eb6f787b2f078e9daf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:19:11 np0005464891 systemd[1]: Started libpod-conmon-bd46f8b0736e42d6346c6eaf6e2d386b8ad9f71dd1992eb6f787b2f078e9daf8.scope.
Oct  1 12:19:11 np0005464891 podman[108196]: 2025-10-01 16:19:11.847040611 +0000 UTC m=+0.035225443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:19:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:19:11
Oct  1 12:19:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:19:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:19:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'volumes', 'default.rgw.log', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr']
Oct  1 12:19:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:19:11 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:19:11 np0005464891 podman[108196]: 2025-10-01 16:19:11.987791376 +0000 UTC m=+0.175976188 container init bd46f8b0736e42d6346c6eaf6e2d386b8ad9f71dd1992eb6f787b2f078e9daf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:19:11 np0005464891 podman[108196]: 2025-10-01 16:19:11.999015483 +0000 UTC m=+0.187200305 container start bd46f8b0736e42d6346c6eaf6e2d386b8ad9f71dd1992eb6f787b2f078e9daf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 12:19:12 np0005464891 podman[108196]: 2025-10-01 16:19:12.002551781 +0000 UTC m=+0.190736583 container attach bd46f8b0736e42d6346c6eaf6e2d386b8ad9f71dd1992eb6f787b2f078e9daf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:19:12 np0005464891 interesting_jackson[108212]: 167 167
Oct  1 12:19:12 np0005464891 systemd[1]: libpod-bd46f8b0736e42d6346c6eaf6e2d386b8ad9f71dd1992eb6f787b2f078e9daf8.scope: Deactivated successfully.
Oct  1 12:19:12 np0005464891 podman[108196]: 2025-10-01 16:19:12.005652047 +0000 UTC m=+0.193836839 container died bd46f8b0736e42d6346c6eaf6e2d386b8ad9f71dd1992eb6f787b2f078e9daf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:19:12 np0005464891 systemd[1]: var-lib-containers-storage-overlay-4add7e0b76acc519f4dae632c0a1fd6c3063ec8756be7dc38eacd90eede6f311-merged.mount: Deactivated successfully.
Oct  1 12:19:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v266: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Oct  1 12:19:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:19:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:19:12 np0005464891 podman[108196]: 2025-10-01 16:19:12.049280458 +0000 UTC m=+0.237465250 container remove bd46f8b0736e42d6346c6eaf6e2d386b8ad9f71dd1992eb6f787b2f078e9daf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:19:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:19:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:19:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:19:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:19:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:19:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:19:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:19:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:19:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:19:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:19:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:19:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:19:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:19:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:19:12 np0005464891 systemd[1]: libpod-conmon-bd46f8b0736e42d6346c6eaf6e2d386b8ad9f71dd1992eb6f787b2f078e9daf8.scope: Deactivated successfully.
Oct  1 12:19:12 np0005464891 podman[108235]: 2025-10-01 16:19:12.263699496 +0000 UTC m=+0.068920377 container create 25ae95acce2c7444248b81ea89cb2c305463c5b4d2d82f38037ab3e874e70753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 12:19:12 np0005464891 systemd[1]: Started libpod-conmon-25ae95acce2c7444248b81ea89cb2c305463c5b4d2d82f38037ab3e874e70753.scope.
Oct  1 12:19:12 np0005464891 podman[108235]: 2025-10-01 16:19:12.227941361 +0000 UTC m=+0.033162292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:19:12 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:19:12 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7175485b1d5ca26f675e11dec8600b71d0a73264f201cc7b4b40a5db6f4dcd2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:19:12 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7175485b1d5ca26f675e11dec8600b71d0a73264f201cc7b4b40a5db6f4dcd2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:19:12 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7175485b1d5ca26f675e11dec8600b71d0a73264f201cc7b4b40a5db6f4dcd2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:19:12 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7175485b1d5ca26f675e11dec8600b71d0a73264f201cc7b4b40a5db6f4dcd2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:19:12 np0005464891 podman[108235]: 2025-10-01 16:19:12.383686606 +0000 UTC m=+0.188907497 container init 25ae95acce2c7444248b81ea89cb2c305463c5b4d2d82f38037ab3e874e70753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:19:12 np0005464891 podman[108235]: 2025-10-01 16:19:12.391750846 +0000 UTC m=+0.196971697 container start 25ae95acce2c7444248b81ea89cb2c305463c5b4d2d82f38037ab3e874e70753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:19:12 np0005464891 podman[108235]: 2025-10-01 16:19:12.395583771 +0000 UTC m=+0.200804632 container attach 25ae95acce2c7444248b81ea89cb2c305463c5b4d2d82f38037ab3e874e70753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  1 12:19:13 np0005464891 nice_shtern[108252]: {
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:    "0": [
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:        {
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "devices": [
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "/dev/loop3"
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            ],
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "lv_name": "ceph_lv0",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "lv_size": "21470642176",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "name": "ceph_lv0",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "tags": {
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.cluster_name": "ceph",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.crush_device_class": "",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.encrypted": "0",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.osd_id": "0",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.type": "block",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.vdo": "0"
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            },
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "type": "block",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "vg_name": "ceph_vg0"
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:        }
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:    ],
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:    "1": [
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:        {
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "devices": [
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "/dev/loop4"
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            ],
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "lv_name": "ceph_lv1",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "lv_size": "21470642176",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "name": "ceph_lv1",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "tags": {
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.cluster_name": "ceph",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.crush_device_class": "",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.encrypted": "0",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.osd_id": "1",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.type": "block",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.vdo": "0"
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            },
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "type": "block",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "vg_name": "ceph_vg1"
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:        }
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:    ],
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:    "2": [
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:        {
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "devices": [
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "/dev/loop5"
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            ],
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "lv_name": "ceph_lv2",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "lv_size": "21470642176",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "name": "ceph_lv2",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "tags": {
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.cluster_name": "ceph",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.crush_device_class": "",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.encrypted": "0",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.osd_id": "2",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.type": "block",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:                "ceph.vdo": "0"
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            },
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "type": "block",
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:            "vg_name": "ceph_vg2"
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:        }
Oct  1 12:19:13 np0005464891 nice_shtern[108252]:    ]
Oct  1 12:19:13 np0005464891 nice_shtern[108252]: }
Oct  1 12:19:13 np0005464891 systemd[1]: libpod-25ae95acce2c7444248b81ea89cb2c305463c5b4d2d82f38037ab3e874e70753.scope: Deactivated successfully.
Oct  1 12:19:13 np0005464891 podman[108235]: 2025-10-01 16:19:13.217445277 +0000 UTC m=+1.022666148 container died 25ae95acce2c7444248b81ea89cb2c305463c5b4d2d82f38037ab3e874e70753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  1 12:19:13 np0005464891 systemd[1]: var-lib-containers-storage-overlay-7175485b1d5ca26f675e11dec8600b71d0a73264f201cc7b4b40a5db6f4dcd2d-merged.mount: Deactivated successfully.
Oct  1 12:19:13 np0005464891 podman[108235]: 2025-10-01 16:19:13.292096945 +0000 UTC m=+1.097317796 container remove 25ae95acce2c7444248b81ea89cb2c305463c5b4d2d82f38037ab3e874e70753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 12:19:13 np0005464891 systemd[1]: libpod-conmon-25ae95acce2c7444248b81ea89cb2c305463c5b4d2d82f38037ab3e874e70753.scope: Deactivated successfully.
Oct  1 12:19:13 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 8.b scrub starts
Oct  1 12:19:13 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 8.b scrub ok
Oct  1 12:19:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v267: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Oct  1 12:19:14 np0005464891 podman[108413]: 2025-10-01 16:19:14.093824813 +0000 UTC m=+0.065903303 container create 3a938aadcde816370087fa0029c1180787a37220f983667ef661145aebe7258e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_volhard, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:19:14 np0005464891 systemd[1]: Started libpod-conmon-3a938aadcde816370087fa0029c1180787a37220f983667ef661145aebe7258e.scope.
Oct  1 12:19:14 np0005464891 podman[108413]: 2025-10-01 16:19:14.066341812 +0000 UTC m=+0.038420312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:19:14 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:19:14 np0005464891 podman[108413]: 2025-10-01 16:19:14.185371708 +0000 UTC m=+0.157450228 container init 3a938aadcde816370087fa0029c1180787a37220f983667ef661145aebe7258e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_volhard, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:19:14 np0005464891 podman[108413]: 2025-10-01 16:19:14.194376852 +0000 UTC m=+0.166455342 container start 3a938aadcde816370087fa0029c1180787a37220f983667ef661145aebe7258e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_volhard, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:19:14 np0005464891 podman[108413]: 2025-10-01 16:19:14.201717203 +0000 UTC m=+0.173795693 container attach 3a938aadcde816370087fa0029c1180787a37220f983667ef661145aebe7258e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_volhard, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 12:19:14 np0005464891 nifty_volhard[108429]: 167 167
Oct  1 12:19:14 np0005464891 systemd[1]: libpod-3a938aadcde816370087fa0029c1180787a37220f983667ef661145aebe7258e.scope: Deactivated successfully.
Oct  1 12:19:14 np0005464891 podman[108413]: 2025-10-01 16:19:14.204718937 +0000 UTC m=+0.176797407 container died 3a938aadcde816370087fa0029c1180787a37220f983667ef661145aebe7258e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:19:14 np0005464891 systemd[1]: var-lib-containers-storage-overlay-541238e12bfa97b4aac926d56ba252032b24c542945f896e2135a724ac414f6f-merged.mount: Deactivated successfully.
Oct  1 12:19:14 np0005464891 podman[108413]: 2025-10-01 16:19:14.24967958 +0000 UTC m=+0.221758030 container remove 3a938aadcde816370087fa0029c1180787a37220f983667ef661145aebe7258e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_volhard, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct  1 12:19:14 np0005464891 systemd[1]: libpod-conmon-3a938aadcde816370087fa0029c1180787a37220f983667ef661145aebe7258e.scope: Deactivated successfully.
Oct  1 12:19:14 np0005464891 podman[108453]: 2025-10-01 16:19:14.473296796 +0000 UTC m=+0.059157485 container create 9257e048d19709ab94a76dd67c576702d9aa1ccfeecf2e8d9993011c0372a749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:19:14 np0005464891 systemd[1]: Started libpod-conmon-9257e048d19709ab94a76dd67c576702d9aa1ccfeecf2e8d9993011c0372a749.scope.
Oct  1 12:19:14 np0005464891 podman[108453]: 2025-10-01 16:19:14.445679003 +0000 UTC m=+0.031539732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:19:14 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:19:14 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63746ba7eb50ae0ec3b3d22baf47f41343e5c962279f5e9d7920dc94669c5d88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:19:14 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63746ba7eb50ae0ec3b3d22baf47f41343e5c962279f5e9d7920dc94669c5d88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:19:14 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63746ba7eb50ae0ec3b3d22baf47f41343e5c962279f5e9d7920dc94669c5d88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:19:14 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63746ba7eb50ae0ec3b3d22baf47f41343e5c962279f5e9d7920dc94669c5d88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:19:14 np0005464891 podman[108453]: 2025-10-01 16:19:14.569991011 +0000 UTC m=+0.155851670 container init 9257e048d19709ab94a76dd67c576702d9aa1ccfeecf2e8d9993011c0372a749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:19:14 np0005464891 podman[108453]: 2025-10-01 16:19:14.582285645 +0000 UTC m=+0.168146294 container start 9257e048d19709ab94a76dd67c576702d9aa1ccfeecf2e8d9993011c0372a749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_pascal, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 12:19:14 np0005464891 podman[108453]: 2025-10-01 16:19:14.585440823 +0000 UTC m=+0.171301472 container attach 9257e048d19709ab94a76dd67c576702d9aa1ccfeecf2e8d9993011c0372a749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:19:15 np0005464891 amazing_pascal[108469]: {
Oct  1 12:19:15 np0005464891 amazing_pascal[108469]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:19:15 np0005464891 amazing_pascal[108469]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:19:15 np0005464891 amazing_pascal[108469]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:19:15 np0005464891 amazing_pascal[108469]:        "osd_id": 2,
Oct  1 12:19:15 np0005464891 amazing_pascal[108469]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:19:15 np0005464891 amazing_pascal[108469]:        "type": "bluestore"
Oct  1 12:19:15 np0005464891 amazing_pascal[108469]:    },
Oct  1 12:19:15 np0005464891 amazing_pascal[108469]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:19:15 np0005464891 amazing_pascal[108469]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:19:15 np0005464891 amazing_pascal[108469]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:19:15 np0005464891 amazing_pascal[108469]:        "osd_id": 0,
Oct  1 12:19:15 np0005464891 amazing_pascal[108469]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:19:15 np0005464891 amazing_pascal[108469]:        "type": "bluestore"
Oct  1 12:19:15 np0005464891 amazing_pascal[108469]:    },
Oct  1 12:19:15 np0005464891 amazing_pascal[108469]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:19:15 np0005464891 amazing_pascal[108469]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:19:15 np0005464891 amazing_pascal[108469]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:19:15 np0005464891 amazing_pascal[108469]:        "osd_id": 1,
Oct  1 12:19:15 np0005464891 amazing_pascal[108469]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:19:15 np0005464891 amazing_pascal[108469]:        "type": "bluestore"
Oct  1 12:19:15 np0005464891 amazing_pascal[108469]:    }
Oct  1 12:19:15 np0005464891 amazing_pascal[108469]: }
Oct  1 12:19:15 np0005464891 systemd[1]: libpod-9257e048d19709ab94a76dd67c576702d9aa1ccfeecf2e8d9993011c0372a749.scope: Deactivated successfully.
Oct  1 12:19:15 np0005464891 podman[108453]: 2025-10-01 16:19:15.680778079 +0000 UTC m=+1.266638738 container died 9257e048d19709ab94a76dd67c576702d9aa1ccfeecf2e8d9993011c0372a749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:19:15 np0005464891 systemd[1]: libpod-9257e048d19709ab94a76dd67c576702d9aa1ccfeecf2e8d9993011c0372a749.scope: Consumed 1.096s CPU time.
Oct  1 12:19:15 np0005464891 systemd[1]: var-lib-containers-storage-overlay-63746ba7eb50ae0ec3b3d22baf47f41343e5c962279f5e9d7920dc94669c5d88-merged.mount: Deactivated successfully.
Oct  1 12:19:15 np0005464891 podman[108453]: 2025-10-01 16:19:15.767358432 +0000 UTC m=+1.353219091 container remove 9257e048d19709ab94a76dd67c576702d9aa1ccfeecf2e8d9993011c0372a749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_pascal, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:19:15 np0005464891 systemd[1]: libpod-conmon-9257e048d19709ab94a76dd67c576702d9aa1ccfeecf2e8d9993011c0372a749.scope: Deactivated successfully.
Oct  1 12:19:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:19:15 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:19:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:19:15 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:19:15 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 5b0d647c-806f-4e7e-a7ff-7ed01f1d5c4b does not exist
Oct  1 12:19:15 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 75098e13-fdd1-40f3-bf2f-817c6339e73f does not exist
Oct  1 12:19:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v268: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Oct  1 12:19:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:19:16 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:19:16 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:19:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v269: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:19:18 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct  1 12:19:18 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct  1 12:19:18 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Oct  1 12:19:18 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Oct  1 12:19:19 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Oct  1 12:19:19 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Oct  1 12:19:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v270: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:19:20 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Oct  1 12:19:20 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Oct  1 12:19:20 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct  1 12:19:20 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct  1 12:19:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:19:21 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Oct  1 12:19:21 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Oct  1 12:19:21 np0005464891 python3.9[108717]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:19:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:19:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:19:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:19:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:19:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:19:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:19:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:19:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:19:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:19:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:19:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:19:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:19:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:19:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:19:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:19:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:19:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:19:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:19:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:19:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:19:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:19:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:19:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:19:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v271: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:19:22 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 8.15 deep-scrub starts
Oct  1 12:19:22 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 8.15 deep-scrub ok
Oct  1 12:19:23 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Oct  1 12:19:23 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Oct  1 12:19:23 np0005464891 python3.9[109004]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  1 12:19:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v272: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:19:24 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Oct  1 12:19:24 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Oct  1 12:19:24 np0005464891 python3.9[109156]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  1 12:19:25 np0005464891 python3.9[109309]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:19:25 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 8.f deep-scrub starts
Oct  1 12:19:25 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 8.f deep-scrub ok
Oct  1 12:19:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v273: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:19:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:19:26 np0005464891 python3.9[109461]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct  1 12:19:26 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Oct  1 12:19:26 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Oct  1 12:19:26 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 11.e scrub starts
Oct  1 12:19:26 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 11.e scrub ok
Oct  1 12:19:27 np0005464891 python3.9[109613]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:19:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v274: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:19:28 np0005464891 python3.9[109765]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:19:28 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 11.f scrub starts
Oct  1 12:19:28 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 11.f scrub ok
Oct  1 12:19:28 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Oct  1 12:19:28 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Oct  1 12:19:28 np0005464891 python3.9[109843]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:19:29 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 11.d scrub starts
Oct  1 12:19:29 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 10.1e deep-scrub starts
Oct  1 12:19:29 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 11.d scrub ok
Oct  1 12:19:29 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 10.1e deep-scrub ok
Oct  1 12:19:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v275: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:19:30 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 8.c scrub starts
Oct  1 12:19:30 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 8.c scrub ok
Oct  1 12:19:30 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Oct  1 12:19:30 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Oct  1 12:19:30 np0005464891 python3.9[109995]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  1 12:19:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:19:31 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Oct  1 12:19:31 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Oct  1 12:19:31 np0005464891 python3.9[110148]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  1 12:19:31 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Oct  1 12:19:31 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Oct  1 12:19:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v276: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:19:32 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Oct  1 12:19:32 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Oct  1 12:19:32 np0005464891 python3.9[110301]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  1 12:19:33 np0005464891 python3.9[110453]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  1 12:19:33 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Oct  1 12:19:33 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Oct  1 12:19:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v277: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:19:34 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Oct  1 12:19:34 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Oct  1 12:19:34 np0005464891 python3.9[110605]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 12:19:35 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 8.d scrub starts
Oct  1 12:19:35 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 8.d scrub ok
Oct  1 12:19:35 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Oct  1 12:19:35 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Oct  1 12:19:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v278: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:19:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:19:36 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Oct  1 12:19:36 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Oct  1 12:19:36 np0005464891 python3.9[110758]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:19:37 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Oct  1 12:19:37 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Oct  1 12:19:37 np0005464891 python3.9[110910]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:19:37 np0005464891 python3.9[110988]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:19:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v279: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:19:38 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Oct  1 12:19:38 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Oct  1 12:19:38 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Oct  1 12:19:38 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Oct  1 12:19:38 np0005464891 python3.9[111140]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:19:39 np0005464891 python3.9[111218]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:19:39 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Oct  1 12:19:39 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Oct  1 12:19:39 np0005464891 python3.9[111370]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 12:19:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v280: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:19:40 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 11.3 deep-scrub starts
Oct  1 12:19:40 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 11.3 deep-scrub ok
Oct  1 12:19:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:19:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:19:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:19:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v281: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:19:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:19:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:19:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:19:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:19:42 np0005464891 python3.9[111521]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:19:42 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Oct  1 12:19:42 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Oct  1 12:19:42 np0005464891 python3.9[111673]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct  1 12:19:43 np0005464891 python3.9[111823]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:19:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v282: 321 pgs: 321 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:19:44 np0005464891 python3.9[111975]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:19:44 np0005464891 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct  1 12:19:45 np0005464891 systemd[1]: tuned.service: Deactivated successfully.
Oct  1 12:19:45 np0005464891 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct  1 12:19:45 np0005464891 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  1 12:19:45 np0005464891 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  1 12:19:45 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Oct  1 12:19:45 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Oct  1 12:19:46 np0005464891 python3.9[112137]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct  1 12:19:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v283: 321 pgs: 321 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:19:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:19:47 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Oct  1 12:19:47 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Oct  1 12:19:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v284: 321 pgs: 321 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:19:48 np0005464891 python3.9[112289]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:19:48 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 10.d scrub starts
Oct  1 12:19:48 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 10.d scrub ok
Oct  1 12:19:48 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Oct  1 12:19:48 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Oct  1 12:19:48 np0005464891 python3.9[112443]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:19:49 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Oct  1 12:19:49 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Oct  1 12:19:49 np0005464891 systemd-logind[801]: Session 36 logged out. Waiting for processes to exit.
Oct  1 12:19:49 np0005464891 systemd[1]: session-36.scope: Deactivated successfully.
Oct  1 12:19:49 np0005464891 systemd[1]: session-36.scope: Consumed 1min 6.012s CPU time.
Oct  1 12:19:49 np0005464891 systemd-logind[801]: Removed session 36.
Oct  1 12:19:49 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Oct  1 12:19:49 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Oct  1 12:19:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v285: 321 pgs: 321 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:19:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:19:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v286: 321 pgs: 321 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:19:52 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 8.a scrub starts
Oct  1 12:19:52 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 8.a scrub ok
Oct  1 12:19:53 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Oct  1 12:19:53 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Oct  1 12:19:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v287: 321 pgs: 321 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:19:54 np0005464891 systemd-logind[801]: New session 37 of user zuul.
Oct  1 12:19:54 np0005464891 systemd[1]: Started Session 37 of User zuul.
Oct  1 12:19:55 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Oct  1 12:19:55 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Oct  1 12:19:55 np0005464891 python3.9[112623]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:19:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v288: 321 pgs: 321 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:19:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:19:56 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Oct  1 12:19:56 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Oct  1 12:19:56 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Oct  1 12:19:56 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Oct  1 12:19:57 np0005464891 python3.9[112779]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct  1 12:19:57 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Oct  1 12:19:57 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Oct  1 12:19:57 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Oct  1 12:19:57 np0005464891 ceph-osd[89750]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Oct  1 12:19:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v289: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:19:58 np0005464891 python3.9[112932]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 12:19:58 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Oct  1 12:19:58 np0005464891 ceph-osd[88747]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Oct  1 12:21:04 np0005464891 python3.9[120092]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct  1 12:21:04 np0005464891 rsyslogd[1011]: imjournal: 894 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct  1 12:21:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v322: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:05 np0005464891 python3.9[120244]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:21:05 np0005464891 python3.9[120322]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:21:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:21:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v323: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:06 np0005464891 python3.9[120474]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:21:07 np0005464891 python3.9[120552]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:21:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v324: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:08 np0005464891 python3.9[120704]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:21:09 np0005464891 python3.9[120856]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 12:21:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v325: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:21:11 np0005464891 python3.9[120940]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:21:11 np0005464891 systemd[1]: session-39.scope: Deactivated successfully.
Oct  1 12:21:11 np0005464891 systemd[1]: session-39.scope: Consumed 25.563s CPU time.
Oct  1 12:21:11 np0005464891 systemd-logind[801]: Session 39 logged out. Waiting for processes to exit.
Oct  1 12:21:11 np0005464891 systemd-logind[801]: Removed session 39.
Oct  1 12:21:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:21:11
Oct  1 12:21:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:21:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:21:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'backups', '.rgw.root', '.mgr', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta']
Oct  1 12:21:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:21:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:21:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:21:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:21:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:21:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:21:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:21:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:21:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:21:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:21:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:21:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:21:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:21:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:21:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:21:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:21:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:21:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v326: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:12 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Oct  1 12:21:12 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Oct  1 12:21:13 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Oct  1 12:21:13 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Oct  1 12:21:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v327: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:15 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Oct  1 12:21:15 np0005464891 ceph-osd[87649]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Oct  1 12:21:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:21:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v328: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v329: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:19 np0005464891 systemd-logind[801]: New session 40 of user zuul.
Oct  1 12:21:19 np0005464891 systemd[1]: Started Session 40 of User zuul.
Oct  1 12:21:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v330: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:20 np0005464891 python3.9[121123]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:21:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:21:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:21:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:21:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:21:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:21:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:21:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:21:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:21:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:21:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:21:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:21:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:21:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:21:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:21:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:21:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:21:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:21:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:21:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:21:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:21:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:21:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:21:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:21:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:21:21 np0005464891 python3.9[121275]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:21:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v331: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:22 np0005464891 python3.9[121353]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:21:22 np0005464891 systemd[1]: session-40.scope: Deactivated successfully.
Oct  1 12:21:22 np0005464891 systemd[1]: session-40.scope: Consumed 1.747s CPU time.
Oct  1 12:21:22 np0005464891 systemd-logind[801]: Session 40 logged out. Waiting for processes to exit.
Oct  1 12:21:22 np0005464891 systemd-logind[801]: Removed session 40.
Oct  1 12:21:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v332: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:21:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:21:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:21:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:21:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:21:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:21:25 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 90b2adc5-bb73-43ed-b69f-308c64b86319 does not exist
Oct  1 12:21:25 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 7a105c02-085e-4a17-a24a-b0385a6d7c34 does not exist
Oct  1 12:21:25 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 0fcb45b6-2fb6-4f69-9cf1-902ca0a94b94 does not exist
Oct  1 12:21:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:21:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:21:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:21:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:21:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:21:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:21:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:21:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v333: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:26 np0005464891 podman[121649]: 2025-10-01 16:21:26.461394235 +0000 UTC m=+0.074813690 container create 27d2ced44fb3703ba679a0824182540815899858126fa24a37a7119493c8141f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kare, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 12:21:26 np0005464891 podman[121649]: 2025-10-01 16:21:26.411650266 +0000 UTC m=+0.025069731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:21:26 np0005464891 systemd[1]: Started libpod-conmon-27d2ced44fb3703ba679a0824182540815899858126fa24a37a7119493c8141f.scope.
Oct  1 12:21:26 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:21:26 np0005464891 podman[121649]: 2025-10-01 16:21:26.565274449 +0000 UTC m=+0.178693964 container init 27d2ced44fb3703ba679a0824182540815899858126fa24a37a7119493c8141f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 12:21:26 np0005464891 podman[121649]: 2025-10-01 16:21:26.572843527 +0000 UTC m=+0.186262982 container start 27d2ced44fb3703ba679a0824182540815899858126fa24a37a7119493c8141f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 12:21:26 np0005464891 podman[121649]: 2025-10-01 16:21:26.57936628 +0000 UTC m=+0.192785705 container attach 27d2ced44fb3703ba679a0824182540815899858126fa24a37a7119493c8141f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:21:26 np0005464891 adoring_kare[121666]: 167 167
Oct  1 12:21:26 np0005464891 systemd[1]: libpod-27d2ced44fb3703ba679a0824182540815899858126fa24a37a7119493c8141f.scope: Deactivated successfully.
Oct  1 12:21:26 np0005464891 podman[121649]: 2025-10-01 16:21:26.582164673 +0000 UTC m=+0.195584168 container died 27d2ced44fb3703ba679a0824182540815899858126fa24a37a7119493c8141f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kare, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  1 12:21:26 np0005464891 systemd[1]: var-lib-containers-storage-overlay-a5f5adc77171cd7f7c0dcf2ec85cef97f652cad8c0e09bd0ac243d6a002754da-merged.mount: Deactivated successfully.
Oct  1 12:21:26 np0005464891 systemd[1]: libpod-conmon-27d2ced44fb3703ba679a0824182540815899858126fa24a37a7119493c8141f.scope: Deactivated successfully.
Oct  1 12:21:26 np0005464891 podman[121649]: 2025-10-01 16:21:26.637683664 +0000 UTC m=+0.251103119 container remove 27d2ced44fb3703ba679a0824182540815899858126fa24a37a7119493c8141f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 12:21:26 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:21:26 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:21:26 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:21:26 np0005464891 podman[121693]: 2025-10-01 16:21:26.843758677 +0000 UTC m=+0.052306478 container create ef6fb87dd031d7ea308d0dd2259e9184e88122d6509e1b4e52487bec5e112100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jackson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:21:26 np0005464891 systemd[1]: Started libpod-conmon-ef6fb87dd031d7ea308d0dd2259e9184e88122d6509e1b4e52487bec5e112100.scope.
Oct  1 12:21:26 np0005464891 podman[121693]: 2025-10-01 16:21:26.819421826 +0000 UTC m=+0.027969717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:21:26 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:21:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef954fee37c51dfd2c99453d4f430dc8658a9506b8a3f3a79c5c113c6eab8d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:21:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef954fee37c51dfd2c99453d4f430dc8658a9506b8a3f3a79c5c113c6eab8d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:21:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef954fee37c51dfd2c99453d4f430dc8658a9506b8a3f3a79c5c113c6eab8d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:21:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef954fee37c51dfd2c99453d4f430dc8658a9506b8a3f3a79c5c113c6eab8d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:21:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef954fee37c51dfd2c99453d4f430dc8658a9506b8a3f3a79c5c113c6eab8d9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:21:26 np0005464891 podman[121693]: 2025-10-01 16:21:26.938343486 +0000 UTC m=+0.146891347 container init ef6fb87dd031d7ea308d0dd2259e9184e88122d6509e1b4e52487bec5e112100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jackson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:21:26 np0005464891 podman[121693]: 2025-10-01 16:21:26.949067038 +0000 UTC m=+0.157614859 container start ef6fb87dd031d7ea308d0dd2259e9184e88122d6509e1b4e52487bec5e112100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jackson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  1 12:21:26 np0005464891 podman[121693]: 2025-10-01 16:21:26.952585711 +0000 UTC m=+0.161133532 container attach ef6fb87dd031d7ea308d0dd2259e9184e88122d6509e1b4e52487bec5e112100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 12:21:27 np0005464891 stoic_jackson[121710]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:21:27 np0005464891 stoic_jackson[121710]: --> relative data size: 1.0
Oct  1 12:21:27 np0005464891 stoic_jackson[121710]: --> All data devices are unavailable
Oct  1 12:21:27 np0005464891 systemd[1]: libpod-ef6fb87dd031d7ea308d0dd2259e9184e88122d6509e1b4e52487bec5e112100.scope: Deactivated successfully.
Oct  1 12:21:27 np0005464891 podman[121693]: 2025-10-01 16:21:27.970357772 +0000 UTC m=+1.178905563 container died ef6fb87dd031d7ea308d0dd2259e9184e88122d6509e1b4e52487bec5e112100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 12:21:28 np0005464891 systemd[1]: var-lib-containers-storage-overlay-9ef954fee37c51dfd2c99453d4f430dc8658a9506b8a3f3a79c5c113c6eab8d9-merged.mount: Deactivated successfully.
Oct  1 12:21:28 np0005464891 podman[121693]: 2025-10-01 16:21:28.06337752 +0000 UTC m=+1.271925361 container remove ef6fb87dd031d7ea308d0dd2259e9184e88122d6509e1b4e52487bec5e112100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 12:21:28 np0005464891 systemd[1]: libpod-conmon-ef6fb87dd031d7ea308d0dd2259e9184e88122d6509e1b4e52487bec5e112100.scope: Deactivated successfully.
Oct  1 12:21:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v334: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:28 np0005464891 systemd-logind[801]: New session 41 of user zuul.
Oct  1 12:21:28 np0005464891 systemd[1]: Started Session 41 of User zuul.
Oct  1 12:21:28 np0005464891 podman[121949]: 2025-10-01 16:21:28.780326216 +0000 UTC m=+0.084409822 container create 723e664cbeada89c80104b3b82e5481eb1449ba50c1ac3186495ded31f7cc890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mccarthy, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:21:28 np0005464891 podman[121949]: 2025-10-01 16:21:28.719182077 +0000 UTC m=+0.023265663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:21:28 np0005464891 systemd[1]: Started libpod-conmon-723e664cbeada89c80104b3b82e5481eb1449ba50c1ac3186495ded31f7cc890.scope.
Oct  1 12:21:28 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:21:28 np0005464891 podman[121949]: 2025-10-01 16:21:28.877391871 +0000 UTC m=+0.181475437 container init 723e664cbeada89c80104b3b82e5481eb1449ba50c1ac3186495ded31f7cc890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mccarthy, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:21:28 np0005464891 podman[121949]: 2025-10-01 16:21:28.885939526 +0000 UTC m=+0.190023092 container start 723e664cbeada89c80104b3b82e5481eb1449ba50c1ac3186495ded31f7cc890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mccarthy, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  1 12:21:28 np0005464891 elastic_mccarthy[121966]: 167 167
Oct  1 12:21:28 np0005464891 systemd[1]: libpod-723e664cbeada89c80104b3b82e5481eb1449ba50c1ac3186495ded31f7cc890.scope: Deactivated successfully.
Oct  1 12:21:28 np0005464891 podman[121949]: 2025-10-01 16:21:28.903876107 +0000 UTC m=+0.207959673 container attach 723e664cbeada89c80104b3b82e5481eb1449ba50c1ac3186495ded31f7cc890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mccarthy, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:21:28 np0005464891 podman[121949]: 2025-10-01 16:21:28.904236988 +0000 UTC m=+0.208320554 container died 723e664cbeada89c80104b3b82e5481eb1449ba50c1ac3186495ded31f7cc890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mccarthy, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  1 12:21:28 np0005464891 systemd[1]: var-lib-containers-storage-overlay-337ae6becaabadee6bf71d335d9adad10299ee3cdd7df0cf68c8e995723274c4-merged.mount: Deactivated successfully.
Oct  1 12:21:29 np0005464891 podman[121949]: 2025-10-01 16:21:29.048886133 +0000 UTC m=+0.352969699 container remove 723e664cbeada89c80104b3b82e5481eb1449ba50c1ac3186495ded31f7cc890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mccarthy, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:21:29 np0005464891 systemd[1]: libpod-conmon-723e664cbeada89c80104b3b82e5481eb1449ba50c1ac3186495ded31f7cc890.scope: Deactivated successfully.
Oct  1 12:21:29 np0005464891 podman[122088]: 2025-10-01 16:21:29.219941655 +0000 UTC m=+0.039337276 container create f6327183b6efe2c910acd1aaf8b35b3312602be88b0dd299d317f055e1612074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tharp, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:21:29 np0005464891 systemd[1]: Started libpod-conmon-f6327183b6efe2c910acd1aaf8b35b3312602be88b0dd299d317f055e1612074.scope.
Oct  1 12:21:29 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:21:29 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2a0738e947f230d68f7d90afe919cd1bc45d762880a54838ccc4b4fb211273a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:21:29 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2a0738e947f230d68f7d90afe919cd1bc45d762880a54838ccc4b4fb211273a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:21:29 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2a0738e947f230d68f7d90afe919cd1bc45d762880a54838ccc4b4fb211273a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:21:29 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2a0738e947f230d68f7d90afe919cd1bc45d762880a54838ccc4b4fb211273a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:21:29 np0005464891 podman[122088]: 2025-10-01 16:21:29.296212921 +0000 UTC m=+0.115608562 container init f6327183b6efe2c910acd1aaf8b35b3312602be88b0dd299d317f055e1612074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 12:21:29 np0005464891 podman[122088]: 2025-10-01 16:21:29.206598953 +0000 UTC m=+0.025994594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:21:29 np0005464891 podman[122088]: 2025-10-01 16:21:29.30411496 +0000 UTC m=+0.123510581 container start f6327183b6efe2c910acd1aaf8b35b3312602be88b0dd299d317f055e1612074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tharp, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 12:21:29 np0005464891 podman[122088]: 2025-10-01 16:21:29.307639303 +0000 UTC m=+0.127034954 container attach f6327183b6efe2c910acd1aaf8b35b3312602be88b0dd299d317f055e1612074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tharp, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:21:29 np0005464891 python3.9[122082]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:21:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v335: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]: {
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:    "0": [
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:        {
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "devices": [
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "/dev/loop3"
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            ],
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "lv_name": "ceph_lv0",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "lv_size": "21470642176",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "name": "ceph_lv0",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "tags": {
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.cluster_name": "ceph",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.crush_device_class": "",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.encrypted": "0",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.osd_id": "0",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.type": "block",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.vdo": "0"
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            },
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "type": "block",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "vg_name": "ceph_vg0"
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:        }
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:    ],
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:    "1": [
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:        {
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "devices": [
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "/dev/loop4"
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            ],
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "lv_name": "ceph_lv1",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "lv_size": "21470642176",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "name": "ceph_lv1",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "tags": {
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.cluster_name": "ceph",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.crush_device_class": "",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.encrypted": "0",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.osd_id": "1",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.type": "block",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.vdo": "0"
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            },
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "type": "block",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "vg_name": "ceph_vg1"
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:        }
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:    ],
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:    "2": [
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:        {
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "devices": [
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "/dev/loop5"
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            ],
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "lv_name": "ceph_lv2",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "lv_size": "21470642176",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "name": "ceph_lv2",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "tags": {
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.cluster_name": "ceph",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.crush_device_class": "",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.encrypted": "0",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.osd_id": "2",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.type": "block",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:                "ceph.vdo": "0"
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            },
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "type": "block",
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:            "vg_name": "ceph_vg2"
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:        }
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]:    ]
Oct  1 12:21:30 np0005464891 intelligent_tharp[122105]: }
Oct  1 12:21:30 np0005464891 systemd[1]: libpod-f6327183b6efe2c910acd1aaf8b35b3312602be88b0dd299d317f055e1612074.scope: Deactivated successfully.
Oct  1 12:21:30 np0005464891 podman[122217]: 2025-10-01 16:21:30.202529523 +0000 UTC m=+0.038462275 container died f6327183b6efe2c910acd1aaf8b35b3312602be88b0dd299d317f055e1612074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:21:30 np0005464891 systemd[1]: var-lib-containers-storage-overlay-f2a0738e947f230d68f7d90afe919cd1bc45d762880a54838ccc4b4fb211273a-merged.mount: Deactivated successfully.
Oct  1 12:21:30 np0005464891 podman[122217]: 2025-10-01 16:21:30.256091418 +0000 UTC m=+0.092024150 container remove f6327183b6efe2c910acd1aaf8b35b3312602be88b0dd299d317f055e1612074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  1 12:21:30 np0005464891 systemd[1]: libpod-conmon-f6327183b6efe2c910acd1aaf8b35b3312602be88b0dd299d317f055e1612074.scope: Deactivated successfully.
Oct  1 12:21:30 np0005464891 python3.9[122283]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:21:30 np0005464891 podman[122522]: 2025-10-01 16:21:30.883693946 +0000 UTC m=+0.042136812 container create 7cf83637b05301d3a9ffe84e4a2b3dc27cc714dd86483b16b621ae1f94043e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:21:30 np0005464891 systemd[1]: Started libpod-conmon-7cf83637b05301d3a9ffe84e4a2b3dc27cc714dd86483b16b621ae1f94043e75.scope.
Oct  1 12:21:30 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:21:30 np0005464891 podman[122522]: 2025-10-01 16:21:30.954994043 +0000 UTC m=+0.113436939 container init 7cf83637b05301d3a9ffe84e4a2b3dc27cc714dd86483b16b621ae1f94043e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_leakey, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:21:30 np0005464891 podman[122522]: 2025-10-01 16:21:30.866069547 +0000 UTC m=+0.024512403 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:21:30 np0005464891 podman[122522]: 2025-10-01 16:21:30.963565892 +0000 UTC m=+0.122008768 container start 7cf83637b05301d3a9ffe84e4a2b3dc27cc714dd86483b16b621ae1f94043e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_leakey, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  1 12:21:30 np0005464891 podman[122522]: 2025-10-01 16:21:30.96688717 +0000 UTC m=+0.125330076 container attach 7cf83637b05301d3a9ffe84e4a2b3dc27cc714dd86483b16b621ae1f94043e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_leakey, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:21:30 np0005464891 epic_leakey[122539]: 167 167
Oct  1 12:21:30 np0005464891 systemd[1]: libpod-7cf83637b05301d3a9ffe84e4a2b3dc27cc714dd86483b16b621ae1f94043e75.scope: Deactivated successfully.
Oct  1 12:21:30 np0005464891 podman[122522]: 2025-10-01 16:21:30.971222444 +0000 UTC m=+0.129665300 container died 7cf83637b05301d3a9ffe84e4a2b3dc27cc714dd86483b16b621ae1f94043e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  1 12:21:30 np0005464891 systemd[1]: var-lib-containers-storage-overlay-3bebf91a5dfa1880f282750bb39fb2348788d3039fdf9f41f8822294f1ed2e4f-merged.mount: Deactivated successfully.
Oct  1 12:21:31 np0005464891 podman[122522]: 2025-10-01 16:21:31.017542797 +0000 UTC m=+0.175985653 container remove 7cf83637b05301d3a9ffe84e4a2b3dc27cc714dd86483b16b621ae1f94043e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:21:31 np0005464891 systemd[1]: libpod-conmon-7cf83637b05301d3a9ffe84e4a2b3dc27cc714dd86483b16b621ae1f94043e75.scope: Deactivated successfully.
Oct  1 12:21:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:21:31 np0005464891 podman[122610]: 2025-10-01 16:21:31.184734616 +0000 UTC m=+0.050138995 container create e44edf84d84899f178db1846a9abffb801c93fd506d308a242ef4cca7354e546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:21:31 np0005464891 systemd[1]: Started libpod-conmon-e44edf84d84899f178db1846a9abffb801c93fd506d308a242ef4cca7354e546.scope.
Oct  1 12:21:31 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:21:31 np0005464891 podman[122610]: 2025-10-01 16:21:31.163762067 +0000 UTC m=+0.029166476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:21:31 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f017ce7e208614430666c326b25bbdce3616567c13dbefb7c46f281651e9203/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:21:31 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f017ce7e208614430666c326b25bbdce3616567c13dbefb7c46f281651e9203/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:21:31 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f017ce7e208614430666c326b25bbdce3616567c13dbefb7c46f281651e9203/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:21:31 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f017ce7e208614430666c326b25bbdce3616567c13dbefb7c46f281651e9203/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:21:31 np0005464891 podman[122610]: 2025-10-01 16:21:31.294577618 +0000 UTC m=+0.159982047 container init e44edf84d84899f178db1846a9abffb801c93fd506d308a242ef4cca7354e546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chebyshev, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 12:21:31 np0005464891 podman[122610]: 2025-10-01 16:21:31.303485175 +0000 UTC m=+0.168889554 container start e44edf84d84899f178db1846a9abffb801c93fd506d308a242ef4cca7354e546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 12:21:31 np0005464891 podman[122610]: 2025-10-01 16:21:31.316835341 +0000 UTC m=+0.182239740 container attach e44edf84d84899f178db1846a9abffb801c93fd506d308a242ef4cca7354e546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:21:31 np0005464891 python3.9[122652]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:21:31 np0005464891 python3.9[122737]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.u6c4e0_u recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:21:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v336: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:32 np0005464891 ecstatic_chebyshev[122655]: {
Oct  1 12:21:32 np0005464891 ecstatic_chebyshev[122655]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:21:32 np0005464891 ecstatic_chebyshev[122655]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:21:32 np0005464891 ecstatic_chebyshev[122655]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:21:32 np0005464891 ecstatic_chebyshev[122655]:        "osd_id": 2,
Oct  1 12:21:32 np0005464891 ecstatic_chebyshev[122655]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:21:32 np0005464891 ecstatic_chebyshev[122655]:        "type": "bluestore"
Oct  1 12:21:32 np0005464891 ecstatic_chebyshev[122655]:    },
Oct  1 12:21:32 np0005464891 ecstatic_chebyshev[122655]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:21:32 np0005464891 ecstatic_chebyshev[122655]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:21:32 np0005464891 ecstatic_chebyshev[122655]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:21:32 np0005464891 ecstatic_chebyshev[122655]:        "osd_id": 0,
Oct  1 12:21:32 np0005464891 ecstatic_chebyshev[122655]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:21:32 np0005464891 ecstatic_chebyshev[122655]:        "type": "bluestore"
Oct  1 12:21:32 np0005464891 ecstatic_chebyshev[122655]:    },
Oct  1 12:21:32 np0005464891 ecstatic_chebyshev[122655]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:21:32 np0005464891 ecstatic_chebyshev[122655]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:21:32 np0005464891 ecstatic_chebyshev[122655]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:21:32 np0005464891 ecstatic_chebyshev[122655]:        "osd_id": 1,
Oct  1 12:21:32 np0005464891 ecstatic_chebyshev[122655]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:21:32 np0005464891 ecstatic_chebyshev[122655]:        "type": "bluestore"
Oct  1 12:21:32 np0005464891 ecstatic_chebyshev[122655]:    }
Oct  1 12:21:32 np0005464891 ecstatic_chebyshev[122655]: }
Oct  1 12:21:32 np0005464891 systemd[1]: libpod-e44edf84d84899f178db1846a9abffb801c93fd506d308a242ef4cca7354e546.scope: Deactivated successfully.
Oct  1 12:21:32 np0005464891 systemd[1]: libpod-e44edf84d84899f178db1846a9abffb801c93fd506d308a242ef4cca7354e546.scope: Consumed 1.118s CPU time.
Oct  1 12:21:32 np0005464891 podman[122610]: 2025-10-01 16:21:32.415290967 +0000 UTC m=+1.280695376 container died e44edf84d84899f178db1846a9abffb801c93fd506d308a242ef4cca7354e546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chebyshev, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Oct  1 12:21:32 np0005464891 systemd[1]: var-lib-containers-storage-overlay-7f017ce7e208614430666c326b25bbdce3616567c13dbefb7c46f281651e9203-merged.mount: Deactivated successfully.
Oct  1 12:21:32 np0005464891 podman[122610]: 2025-10-01 16:21:32.488741881 +0000 UTC m=+1.354146270 container remove e44edf84d84899f178db1846a9abffb801c93fd506d308a242ef4cca7354e546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chebyshev, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:21:32 np0005464891 systemd[1]: libpod-conmon-e44edf84d84899f178db1846a9abffb801c93fd506d308a242ef4cca7354e546.scope: Deactivated successfully.
Oct  1 12:21:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:21:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:21:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:21:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:21:32 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 3a4e178e-79a6-4a63-92f6-ac84742bf160 does not exist
Oct  1 12:21:32 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 8d58999a-5bd7-401e-be9d-68e7d86eb3fd does not exist
Oct  1 12:21:32 np0005464891 python3.9[122934]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:21:33 np0005464891 python3.9[123059]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.rfoh5x6h recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:21:33 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:21:33 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:21:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v337: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:34 np0005464891 python3.9[123211]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:21:34 np0005464891 python3.9[123363]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:21:35 np0005464891 python3.9[123441]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:21:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:21:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v338: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:36 np0005464891 python3.9[123593]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:21:36 np0005464891 python3.9[123671]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:21:37 np0005464891 python3.9[123823]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:21:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v339: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:38 np0005464891 python3.9[123975]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:21:38 np0005464891 python3.9[124053]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:21:39 np0005464891 python3.9[124205]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:21:39 np0005464891 python3.9[124283]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:21:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v340: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:40 np0005464891 python3.9[124435]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:21:40 np0005464891 systemd[1]: Reloading.
Oct  1 12:21:41 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:21:41 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:21:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:21:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:21:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:21:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:21:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:21:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:21:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:21:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v341: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:42 np0005464891 python3.9[124624]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:21:42 np0005464891 python3.9[124702]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:21:43 np0005464891 python3.9[124854]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:21:43 np0005464891 python3.9[124932]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:21:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v342: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:44 np0005464891 python3.9[125084]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:21:44 np0005464891 systemd[1]: Reloading.
Oct  1 12:21:44 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:21:44 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:21:45 np0005464891 systemd[1]: Starting Create netns directory...
Oct  1 12:21:45 np0005464891 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  1 12:21:45 np0005464891 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  1 12:21:45 np0005464891 systemd[1]: Finished Create netns directory.
Oct  1 12:21:45 np0005464891 python3.9[125274]: ansible-ansible.builtin.service_facts Invoked
Oct  1 12:21:45 np0005464891 network[125291]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  1 12:21:45 np0005464891 network[125292]: 'network-scripts' will be removed from distribution in near future.
Oct  1 12:21:45 np0005464891 network[125293]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  1 12:21:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:21:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v343: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v344: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v345: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:50 np0005464891 python3.9[125558]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:21:50 np0005464891 python3.9[125636]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:21:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:21:51 np0005464891 python3.9[125788]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:21:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v346: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:52 np0005464891 python3.9[125940]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:21:52 np0005464891 python3.9[126018]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:21:53 np0005464891 python3.9[126170]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  1 12:21:53 np0005464891 systemd[1]: Starting Time & Date Service...
Oct  1 12:21:54 np0005464891 systemd[1]: Started Time & Date Service.
Oct  1 12:21:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v347: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:54 np0005464891 python3.9[126326]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:21:55 np0005464891 python3.9[126478]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:21:56 np0005464891 python3.9[126556]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:21:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:21:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v348: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:56 np0005464891 python3.9[126708]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:21:57 np0005464891 python3.9[126786]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.mm9tupjy recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:21:57 np0005464891 python3.9[126938]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:21:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v349: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:21:58 np0005464891 python3.9[127016]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:21:59 np0005464891 python3.9[127168]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:22:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v350: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:00 np0005464891 python3[127321]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  1 12:22:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:22:01 np0005464891 python3.9[127473]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:22:01 np0005464891 python3.9[127551]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:22:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v351: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:02 np0005464891 python3.9[127703]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:22:03 np0005464891 python3.9[127781]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:22:04 np0005464891 python3.9[127933]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:22:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:04 np0005464891 python3.9[128011]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:22:05 np0005464891 python3.9[128163]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:22:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:22:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:06 np0005464891 python3.9[128241]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:22:07 np0005464891 python3.9[128393]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:22:07 np0005464891 python3.9[128471]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:22:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:08 np0005464891 python3.9[128623]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:22:09 np0005464891 python3.9[128778]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:22:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:10 np0005464891 python3.9[128930]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:22:10 np0005464891 python3.9[129082]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:22:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:22:11 np0005464891 python3.9[129234]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  1 12:22:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:22:11
Oct  1 12:22:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:22:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:22:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'volumes']
Oct  1 12:22:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:22:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:22:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:22:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:22:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:22:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:22:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:22:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:22:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:22:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:22:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:22:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:22:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:22:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:22:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:22:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:22:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:22:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:12 np0005464891 python3.9[129386]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  1 12:22:13 np0005464891 systemd[1]: session-41.scope: Deactivated successfully.
Oct  1 12:22:13 np0005464891 systemd[1]: session-41.scope: Consumed 33.604s CPU time.
Oct  1 12:22:13 np0005464891 systemd-logind[801]: Session 41 logged out. Waiting for processes to exit.
Oct  1 12:22:13 np0005464891 systemd-logind[801]: Removed session 41.
Oct  1 12:22:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:22:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:18 np0005464891 systemd-logind[801]: New session 42 of user zuul.
Oct  1 12:22:18 np0005464891 systemd[1]: Started Session 42 of User zuul.
Oct  1 12:22:19 np0005464891 python3.9[129566]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct  1 12:22:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:20 np0005464891 python3.9[129718]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:22:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:22:21 np0005464891 python3.9[129872]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct  1 12:22:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:22:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:22:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:22:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:22:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:22:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:22:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:22:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:22:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:22:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:22:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:22:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:22:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:22:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:22:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:22:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:22:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:22:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:22:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:22:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:22:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:22:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:22:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:22:22 np0005464891 python3.9[130024]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.1s8syaqu follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:22:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:22 np0005464891 python3.9[130149]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.1s8syaqu mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759335741.5472198-44-42482716173301/.source.1s8syaqu _original_basename=.5a4a62yq follow=False checksum=10da071e2f36530ace7b7e94b9e99694e89dffda backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:22:24 np0005464891 python3.9[130301]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:22:24 np0005464891 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  1 12:22:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:24 np0005464891 python3.9[130455]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4hOP3QCxrOdsa7WBbefy0n2KvT8H5MFb7vhedousiQtIDtfQG88361GnDSbYiNsMctn9YWcyB3bvj3SNuQyq26F6oD3WCIGA6G85exG/LQ3aqQfASJCXnbGmmUDjSIfPcahJjp/RQegPuXZRNCzYOw1Ov4k+Q+ajDcYnoKOKhL5/I/NFUChQ4623v9YjiyGyFVw+obms9D+Xmu84VwfjkiIiM1KHkxz4cmZT3CEkEwjJEPTaRuoR5Ne2LLDZJ3sRpYiUX915IlN02zycveY1kLbbKRcbf5UMD4PhezWic783KHvTFq2n7f/coSTiu+yObWXdBZxwFfU7Eefos02eSRkpix/lO+8vRSqcp+A98+JAM/Xwdxkp+OFX8E3VSqjh67zKCygLiOhHUkkSbRCXDhsQxuR1LcOHQUaA+lTFzDPWA0/jH9gZDZ+lGQoXnLw4nruJhWKvVTMTm07/Tppp5bVuQsfnpTsCA5mYgxdEsUZMICn1sV+ZVgaXQ8XfTkLc=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINX8HwxLVwxENs9tCFtflAI5hi67Do7RqwmxtF2aVjMJ#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE1K9wYJJZgF+UvKp1gousr20Dexp/t9lquorq16XUwZo+6SmIYlX4LQwKuPQaD8nV6Hg+7ZlPBdy2aLkm4OOZc=#012 create=True mode=0644 path=/tmp/ansible.1s8syaqu state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:22:25 np0005464891 python3.9[130607]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.1s8syaqu' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:22:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:22:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:26 np0005464891 python3.9[130761]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.1s8syaqu state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:22:27 np0005464891 systemd-logind[801]: Session 42 logged out. Waiting for processes to exit.
Oct  1 12:22:27 np0005464891 systemd[1]: session-42.scope: Deactivated successfully.
Oct  1 12:22:27 np0005464891 systemd[1]: session-42.scope: Consumed 5.718s CPU time.
Oct  1 12:22:27 np0005464891 systemd-logind[801]: Removed session 42.
Oct  1 12:22:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:22:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:32 np0005464891 systemd[1]: session-19.scope: Deactivated successfully.
Oct  1 12:22:32 np0005464891 systemd[1]: session-19.scope: Consumed 1min 30.691s CPU time.
Oct  1 12:22:32 np0005464891 systemd-logind[801]: Session 19 logged out. Waiting for processes to exit.
Oct  1 12:22:32 np0005464891 systemd-logind[801]: Removed session 19.
Oct  1 12:22:32 np0005464891 systemd-logind[801]: New session 43 of user zuul.
Oct  1 12:22:32 np0005464891 systemd[1]: Started Session 43 of User zuul.
Oct  1 12:22:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:22:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:22:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:22:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:22:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:22:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:22:33 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev ac47f4df-8361-45eb-9100-4a149d6158ef does not exist
Oct  1 12:22:33 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 212df471-bb0b-4efd-b583-6d9287d64f2c does not exist
Oct  1 12:22:33 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev b44faaa7-2810-4ffa-9a87-599dc5c826f5 does not exist
Oct  1 12:22:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:22:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:22:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:22:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:22:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:22:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:22:33 np0005464891 python3.9[131068]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:22:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:34 np0005464891 podman[131244]: 2025-10-01 16:22:34.256118926 +0000 UTC m=+0.065118444 container create dffba594e7194a6d481835fb2a02e1ffb396e0b1afa4946fe1978414e241e627 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_jang, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:22:34 np0005464891 systemd[1]: Started libpod-conmon-dffba594e7194a6d481835fb2a02e1ffb396e0b1afa4946fe1978414e241e627.scope.
Oct  1 12:22:34 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:22:34 np0005464891 podman[131244]: 2025-10-01 16:22:34.232038764 +0000 UTC m=+0.041038302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:22:34 np0005464891 podman[131244]: 2025-10-01 16:22:34.338370334 +0000 UTC m=+0.147369912 container init dffba594e7194a6d481835fb2a02e1ffb396e0b1afa4946fe1978414e241e627 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_jang, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 12:22:34 np0005464891 podman[131244]: 2025-10-01 16:22:34.347488847 +0000 UTC m=+0.156488345 container start dffba594e7194a6d481835fb2a02e1ffb396e0b1afa4946fe1978414e241e627 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:22:34 np0005464891 podman[131244]: 2025-10-01 16:22:34.351663698 +0000 UTC m=+0.160663246 container attach dffba594e7194a6d481835fb2a02e1ffb396e0b1afa4946fe1978414e241e627 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_jang, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 12:22:34 np0005464891 thirsty_jang[131306]: 167 167
Oct  1 12:22:34 np0005464891 systemd[1]: libpod-dffba594e7194a6d481835fb2a02e1ffb396e0b1afa4946fe1978414e241e627.scope: Deactivated successfully.
Oct  1 12:22:34 np0005464891 podman[131244]: 2025-10-01 16:22:34.354251846 +0000 UTC m=+0.163251344 container died dffba594e7194a6d481835fb2a02e1ffb396e0b1afa4946fe1978414e241e627 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 12:22:34 np0005464891 systemd[1]: var-lib-containers-storage-overlay-e2c16564ce1fa5b6f236dfa570ee9d8e11804cf64654942be4c4b8f66a8692c7-merged.mount: Deactivated successfully.
Oct  1 12:22:34 np0005464891 podman[131244]: 2025-10-01 16:22:34.389646109 +0000 UTC m=+0.198645607 container remove dffba594e7194a6d481835fb2a02e1ffb396e0b1afa4946fe1978414e241e627 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_jang, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:22:34 np0005464891 systemd[1]: libpod-conmon-dffba594e7194a6d481835fb2a02e1ffb396e0b1afa4946fe1978414e241e627.scope: Deactivated successfully.
Oct  1 12:22:34 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:22:34 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:22:34 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:22:34 np0005464891 podman[131330]: 2025-10-01 16:22:34.598791723 +0000 UTC m=+0.057924102 container create 8c21617532a5d0c009df65b410a973bcade32c3c1137c287ac5eead22a5f2234 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:22:34 np0005464891 systemd[1]: Started libpod-conmon-8c21617532a5d0c009df65b410a973bcade32c3c1137c287ac5eead22a5f2234.scope.
Oct  1 12:22:34 np0005464891 podman[131330]: 2025-10-01 16:22:34.573545571 +0000 UTC m=+0.032678040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:22:34 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:22:34 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f992bf2bbfb28326bc0f13538f6b97d7e04d0f334a1b0bfe8a725ee0158e107a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:22:34 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f992bf2bbfb28326bc0f13538f6b97d7e04d0f334a1b0bfe8a725ee0158e107a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:22:34 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f992bf2bbfb28326bc0f13538f6b97d7e04d0f334a1b0bfe8a725ee0158e107a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:22:34 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f992bf2bbfb28326bc0f13538f6b97d7e04d0f334a1b0bfe8a725ee0158e107a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:22:34 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f992bf2bbfb28326bc0f13538f6b97d7e04d0f334a1b0bfe8a725ee0158e107a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:22:34 np0005464891 podman[131330]: 2025-10-01 16:22:34.720283708 +0000 UTC m=+0.179416157 container init 8c21617532a5d0c009df65b410a973bcade32c3c1137c287ac5eead22a5f2234 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_herschel, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:22:34 np0005464891 podman[131330]: 2025-10-01 16:22:34.738626369 +0000 UTC m=+0.197758728 container start 8c21617532a5d0c009df65b410a973bcade32c3c1137c287ac5eead22a5f2234 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Oct  1 12:22:34 np0005464891 podman[131330]: 2025-10-01 16:22:34.748932091 +0000 UTC m=+0.208064490 container attach 8c21617532a5d0c009df65b410a973bcade32c3c1137c287ac5eead22a5f2234 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 12:22:35 np0005464891 python3.9[131427]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  1 12:22:35 np0005464891 kind_herschel[131347]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:22:35 np0005464891 kind_herschel[131347]: --> relative data size: 1.0
Oct  1 12:22:35 np0005464891 kind_herschel[131347]: --> All data devices are unavailable
Oct  1 12:22:35 np0005464891 systemd[1]: libpod-8c21617532a5d0c009df65b410a973bcade32c3c1137c287ac5eead22a5f2234.scope: Deactivated successfully.
Oct  1 12:22:35 np0005464891 podman[131330]: 2025-10-01 16:22:35.756912705 +0000 UTC m=+1.216045394 container died 8c21617532a5d0c009df65b410a973bcade32c3c1137c287ac5eead22a5f2234 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_herschel, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  1 12:22:35 np0005464891 systemd[1]: var-lib-containers-storage-overlay-f992bf2bbfb28326bc0f13538f6b97d7e04d0f334a1b0bfe8a725ee0158e107a-merged.mount: Deactivated successfully.
Oct  1 12:22:35 np0005464891 podman[131330]: 2025-10-01 16:22:35.860529406 +0000 UTC m=+1.319661775 container remove 8c21617532a5d0c009df65b410a973bcade32c3c1137c287ac5eead22a5f2234 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_herschel, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:22:35 np0005464891 systemd[1]: libpod-conmon-8c21617532a5d0c009df65b410a973bcade32c3c1137c287ac5eead22a5f2234.scope: Deactivated successfully.
Oct  1 12:22:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:22:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:36 np0005464891 python3.9[131616]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 12:22:36 np0005464891 podman[131834]: 2025-10-01 16:22:36.61188663 +0000 UTC m=+0.066013985 container create cf21c11aa6138595dafb588172bfcd305e24e26ecaefc9f83729f17b35b236d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:22:36 np0005464891 systemd[1]: Started libpod-conmon-cf21c11aa6138595dafb588172bfcd305e24e26ecaefc9f83729f17b35b236d7.scope.
Oct  1 12:22:36 np0005464891 podman[131834]: 2025-10-01 16:22:36.58370366 +0000 UTC m=+0.037831055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:22:36 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:22:36 np0005464891 podman[131834]: 2025-10-01 16:22:36.713267169 +0000 UTC m=+0.167394564 container init cf21c11aa6138595dafb588172bfcd305e24e26ecaefc9f83729f17b35b236d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_rubin, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:22:36 np0005464891 podman[131834]: 2025-10-01 16:22:36.72427236 +0000 UTC m=+0.178399675 container start cf21c11aa6138595dafb588172bfcd305e24e26ecaefc9f83729f17b35b236d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_rubin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 12:22:36 np0005464891 podman[131834]: 2025-10-01 16:22:36.72721394 +0000 UTC m=+0.181341335 container attach cf21c11aa6138595dafb588172bfcd305e24e26ecaefc9f83729f17b35b236d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_rubin, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 12:22:36 np0005464891 xenodochial_rubin[131850]: 167 167
Oct  1 12:22:36 np0005464891 systemd[1]: libpod-cf21c11aa6138595dafb588172bfcd305e24e26ecaefc9f83729f17b35b236d7.scope: Deactivated successfully.
Oct  1 12:22:36 np0005464891 podman[131834]: 2025-10-01 16:22:36.731978381 +0000 UTC m=+0.186105726 container died cf21c11aa6138595dafb588172bfcd305e24e26ecaefc9f83729f17b35b236d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_rubin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:22:36 np0005464891 systemd[1]: var-lib-containers-storage-overlay-b562eb99a503818083c6435b4a713ca7a0221a31edd17553a512e004897a7296-merged.mount: Deactivated successfully.
Oct  1 12:22:36 np0005464891 podman[131834]: 2025-10-01 16:22:36.802344563 +0000 UTC m=+0.256471888 container remove cf21c11aa6138595dafb588172bfcd305e24e26ecaefc9f83729f17b35b236d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_rubin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:22:36 np0005464891 systemd[1]: libpod-conmon-cf21c11aa6138595dafb588172bfcd305e24e26ecaefc9f83729f17b35b236d7.scope: Deactivated successfully.
Oct  1 12:22:37 np0005464891 podman[131950]: 2025-10-01 16:22:37.003720154 +0000 UTC m=+0.060215747 container create 2080dd9f8815dfbcaa44f0ae5986fa5a09c83a69d1c34c2c27a32b5f1f12b43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_torvalds, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:22:37 np0005464891 systemd[1]: Started libpod-conmon-2080dd9f8815dfbcaa44f0ae5986fa5a09c83a69d1c34c2c27a32b5f1f12b43b.scope.
Oct  1 12:22:37 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:22:37 np0005464891 podman[131950]: 2025-10-01 16:22:36.981441975 +0000 UTC m=+0.037937588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:22:37 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb831205fb474ead260663a01c0cba2d91b25ab39ee27454da2b9e0938442fe7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:22:37 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb831205fb474ead260663a01c0cba2d91b25ab39ee27454da2b9e0938442fe7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:22:37 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb831205fb474ead260663a01c0cba2d91b25ab39ee27454da2b9e0938442fe7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:22:37 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb831205fb474ead260663a01c0cba2d91b25ab39ee27454da2b9e0938442fe7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:22:37 np0005464891 python3.9[131945]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:22:37 np0005464891 podman[131950]: 2025-10-01 16:22:37.106999575 +0000 UTC m=+0.163495178 container init 2080dd9f8815dfbcaa44f0ae5986fa5a09c83a69d1c34c2c27a32b5f1f12b43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 12:22:37 np0005464891 podman[131950]: 2025-10-01 16:22:37.114587302 +0000 UTC m=+0.171082895 container start 2080dd9f8815dfbcaa44f0ae5986fa5a09c83a69d1c34c2c27a32b5f1f12b43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:22:37 np0005464891 podman[131950]: 2025-10-01 16:22:37.117532392 +0000 UTC m=+0.174028035 container attach 2080dd9f8815dfbcaa44f0ae5986fa5a09c83a69d1c34c2c27a32b5f1f12b43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_torvalds, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]: {
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:    "0": [
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:        {
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "devices": [
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "/dev/loop3"
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            ],
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "lv_name": "ceph_lv0",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "lv_size": "21470642176",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "name": "ceph_lv0",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "tags": {
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.cluster_name": "ceph",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.crush_device_class": "",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.encrypted": "0",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.osd_id": "0",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.type": "block",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.vdo": "0"
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            },
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "type": "block",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "vg_name": "ceph_vg0"
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:        }
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:    ],
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:    "1": [
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:        {
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "devices": [
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "/dev/loop4"
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            ],
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "lv_name": "ceph_lv1",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "lv_size": "21470642176",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "name": "ceph_lv1",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "tags": {
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.cluster_name": "ceph",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.crush_device_class": "",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.encrypted": "0",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.osd_id": "1",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.type": "block",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.vdo": "0"
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            },
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "type": "block",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "vg_name": "ceph_vg1"
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:        }
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:    ],
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:    "2": [
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:        {
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "devices": [
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "/dev/loop5"
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            ],
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "lv_name": "ceph_lv2",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "lv_size": "21470642176",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "name": "ceph_lv2",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "tags": {
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.cluster_name": "ceph",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.crush_device_class": "",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.encrypted": "0",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.osd_id": "2",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.type": "block",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:                "ceph.vdo": "0"
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            },
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "type": "block",
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:            "vg_name": "ceph_vg2"
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:        }
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]:    ]
Oct  1 12:22:37 np0005464891 crazy_torvalds[131966]: }
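The JSON block emitted by the `crazy_torvalds` container above is shaped like `ceph-volume lvm list --format json` output: top-level keys are OSD ids, each mapping to a list of logical volumes with their tags. A minimal sketch (assuming that shape; `osd_devices` and the trimmed sample dict are illustrative, not part of any Ceph API) of extracting an OSD-id-to-device mapping from it:

```python
import json

# Sample trimmed to the fields used below, shaped like the
# "lvm list" JSON logged above (assumption: top-level keys are
# OSD ids, each value a list of LV dicts with an "lv_path").
lvm_list_json = """
{
  "0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
         "tags": {"ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c"}}],
  "1": [{"lv_path": "/dev/ceph_vg1/ceph_lv1",
         "tags": {"ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61"}}]
}
"""

def osd_devices(listing: dict) -> dict:
    """Map each OSD id to its first LV's block device path."""
    return {osd_id: lvs[0]["lv_path"]
            for osd_id, lvs in listing.items() if lvs}

devices = osd_devices(json.loads(lvm_list_json))
print(devices)
```

Run against the full payload in the log, this would yield `"0"`, `"1"`, and `"2"` mapped to `/dev/ceph_vgN/ceph_lvN` respectively; the `ceph.osd_fsid` tag is what ties each LV back to the OSDs reported a few seconds later.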
Oct  1 12:22:37 np0005464891 systemd[1]: libpod-2080dd9f8815dfbcaa44f0ae5986fa5a09c83a69d1c34c2c27a32b5f1f12b43b.scope: Deactivated successfully.
Oct  1 12:22:38 np0005464891 podman[132128]: 2025-10-01 16:22:38.006237519 +0000 UTC m=+0.040997061 container died 2080dd9f8815dfbcaa44f0ae5986fa5a09c83a69d1c34c2c27a32b5f1f12b43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:22:38 np0005464891 python3.9[132125]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:22:38 np0005464891 systemd[1]: var-lib-containers-storage-overlay-fb831205fb474ead260663a01c0cba2d91b25ab39ee27454da2b9e0938442fe7-merged.mount: Deactivated successfully.
Oct  1 12:22:38 np0005464891 podman[132128]: 2025-10-01 16:22:38.069425524 +0000 UTC m=+0.104185086 container remove 2080dd9f8815dfbcaa44f0ae5986fa5a09c83a69d1c34c2c27a32b5f1f12b43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 12:22:38 np0005464891 systemd[1]: libpod-conmon-2080dd9f8815dfbcaa44f0ae5986fa5a09c83a69d1c34c2c27a32b5f1f12b43b.scope: Deactivated successfully.
Oct  1 12:22:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:38 np0005464891 podman[132438]: 2025-10-01 16:22:38.781334671 +0000 UTC m=+0.047479338 container create e3ae5b3ecb269f1d9d539ff841eb72f7ae672825355aa897e00d4b246f023599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct  1 12:22:38 np0005464891 systemd[1]: Started libpod-conmon-e3ae5b3ecb269f1d9d539ff841eb72f7ae672825355aa897e00d4b246f023599.scope.
Oct  1 12:22:38 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:22:38 np0005464891 podman[132438]: 2025-10-01 16:22:38.854219472 +0000 UTC m=+0.120364159 container init e3ae5b3ecb269f1d9d539ff841eb72f7ae672825355aa897e00d4b246f023599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_meitner, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:22:38 np0005464891 podman[132438]: 2025-10-01 16:22:38.761687774 +0000 UTC m=+0.027832461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:22:38 np0005464891 podman[132438]: 2025-10-01 16:22:38.866787696 +0000 UTC m=+0.132932353 container start e3ae5b3ecb269f1d9d539ff841eb72f7ae672825355aa897e00d4b246f023599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_meitner, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:22:38 np0005464891 podman[132438]: 2025-10-01 16:22:38.871051832 +0000 UTC m=+0.137196489 container attach e3ae5b3ecb269f1d9d539ff841eb72f7ae672825355aa897e00d4b246f023599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_meitner, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:22:38 np0005464891 focused_meitner[132454]: 167 167
Oct  1 12:22:38 np0005464891 systemd[1]: libpod-e3ae5b3ecb269f1d9d539ff841eb72f7ae672825355aa897e00d4b246f023599.scope: Deactivated successfully.
Oct  1 12:22:38 np0005464891 podman[132438]: 2025-10-01 16:22:38.876383988 +0000 UTC m=+0.142528635 container died e3ae5b3ecb269f1d9d539ff841eb72f7ae672825355aa897e00d4b246f023599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:22:38 np0005464891 systemd[1]: var-lib-containers-storage-overlay-2b8d48b3dadd8f9ae4d24b17a099d91866f727a7d4ae09fda4b867f98a8d0829-merged.mount: Deactivated successfully.
Oct  1 12:22:38 np0005464891 podman[132438]: 2025-10-01 16:22:38.91310011 +0000 UTC m=+0.179244767 container remove e3ae5b3ecb269f1d9d539ff841eb72f7ae672825355aa897e00d4b246f023599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_meitner, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct  1 12:22:38 np0005464891 python3.9[132437]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:22:38 np0005464891 systemd[1]: libpod-conmon-e3ae5b3ecb269f1d9d539ff841eb72f7ae672825355aa897e00d4b246f023599.scope: Deactivated successfully.
Oct  1 12:22:39 np0005464891 podman[132500]: 2025-10-01 16:22:39.076743961 +0000 UTC m=+0.054362556 container create bbe3f5fecc79f99b7149ea88fba632c63007acfe7df923b16cdcbdd5b6d3e6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lovelace, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 12:22:39 np0005464891 systemd[1]: Started libpod-conmon-bbe3f5fecc79f99b7149ea88fba632c63007acfe7df923b16cdcbdd5b6d3e6a2.scope.
Oct  1 12:22:39 np0005464891 podman[132500]: 2025-10-01 16:22:39.046218197 +0000 UTC m=+0.023836842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:22:39 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:22:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a48f7f489401e54e37c0bd60479c9cc5dc915e1a8e8f825df14e77a6109e21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:22:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a48f7f489401e54e37c0bd60479c9cc5dc915e1a8e8f825df14e77a6109e21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:22:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a48f7f489401e54e37c0bd60479c9cc5dc915e1a8e8f825df14e77a6109e21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:22:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a48f7f489401e54e37c0bd60479c9cc5dc915e1a8e8f825df14e77a6109e21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:22:39 np0005464891 podman[132500]: 2025-10-01 16:22:39.18799552 +0000 UTC m=+0.165614145 container init bbe3f5fecc79f99b7149ea88fba632c63007acfe7df923b16cdcbdd5b6d3e6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lovelace, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:22:39 np0005464891 podman[132500]: 2025-10-01 16:22:39.195778752 +0000 UTC m=+0.173397357 container start bbe3f5fecc79f99b7149ea88fba632c63007acfe7df923b16cdcbdd5b6d3e6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lovelace, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:22:39 np0005464891 podman[132500]: 2025-10-01 16:22:39.203048561 +0000 UTC m=+0.180667166 container attach bbe3f5fecc79f99b7149ea88fba632c63007acfe7df923b16cdcbdd5b6d3e6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lovelace, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  1 12:22:39 np0005464891 systemd[1]: session-43.scope: Deactivated successfully.
Oct  1 12:22:39 np0005464891 systemd[1]: session-43.scope: Consumed 4.378s CPU time.
Oct  1 12:22:39 np0005464891 systemd-logind[801]: Session 43 logged out. Waiting for processes to exit.
Oct  1 12:22:39 np0005464891 systemd-logind[801]: Removed session 43.
Oct  1 12:22:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:40 np0005464891 nervous_lovelace[132516]: {
Oct  1 12:22:40 np0005464891 nervous_lovelace[132516]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:22:40 np0005464891 nervous_lovelace[132516]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:22:40 np0005464891 nervous_lovelace[132516]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:22:40 np0005464891 nervous_lovelace[132516]:        "osd_id": 2,
Oct  1 12:22:40 np0005464891 nervous_lovelace[132516]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:22:40 np0005464891 nervous_lovelace[132516]:        "type": "bluestore"
Oct  1 12:22:40 np0005464891 nervous_lovelace[132516]:    },
Oct  1 12:22:40 np0005464891 nervous_lovelace[132516]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:22:40 np0005464891 nervous_lovelace[132516]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:22:40 np0005464891 nervous_lovelace[132516]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:22:40 np0005464891 nervous_lovelace[132516]:        "osd_id": 0,
Oct  1 12:22:40 np0005464891 nervous_lovelace[132516]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:22:40 np0005464891 nervous_lovelace[132516]:        "type": "bluestore"
Oct  1 12:22:40 np0005464891 nervous_lovelace[132516]:    },
Oct  1 12:22:40 np0005464891 nervous_lovelace[132516]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:22:40 np0005464891 nervous_lovelace[132516]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:22:40 np0005464891 nervous_lovelace[132516]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:22:40 np0005464891 nervous_lovelace[132516]:        "osd_id": 1,
Oct  1 12:22:40 np0005464891 nervous_lovelace[132516]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:22:40 np0005464891 nervous_lovelace[132516]:        "type": "bluestore"
Oct  1 12:22:40 np0005464891 nervous_lovelace[132516]:    }
Oct  1 12:22:40 np0005464891 nervous_lovelace[132516]: }
Oct  1 12:22:40 np0005464891 systemd[1]: libpod-bbe3f5fecc79f99b7149ea88fba632c63007acfe7df923b16cdcbdd5b6d3e6a2.scope: Deactivated successfully.
Oct  1 12:22:40 np0005464891 podman[132500]: 2025-10-01 16:22:40.307270315 +0000 UTC m=+1.284888950 container died bbe3f5fecc79f99b7149ea88fba632c63007acfe7df923b16cdcbdd5b6d3e6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lovelace, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  1 12:22:40 np0005464891 systemd[1]: libpod-bbe3f5fecc79f99b7149ea88fba632c63007acfe7df923b16cdcbdd5b6d3e6a2.scope: Consumed 1.115s CPU time.
Oct  1 12:22:40 np0005464891 systemd[1]: var-lib-containers-storage-overlay-14a48f7f489401e54e37c0bd60479c9cc5dc915e1a8e8f825df14e77a6109e21-merged.mount: Deactivated successfully.
Oct  1 12:22:40 np0005464891 podman[132500]: 2025-10-01 16:22:40.375475078 +0000 UTC m=+1.353093673 container remove bbe3f5fecc79f99b7149ea88fba632c63007acfe7df923b16cdcbdd5b6d3e6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 12:22:40 np0005464891 systemd[1]: libpod-conmon-bbe3f5fecc79f99b7149ea88fba632c63007acfe7df923b16cdcbdd5b6d3e6a2.scope: Deactivated successfully.
Oct  1 12:22:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:22:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:22:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:22:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:22:40 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 0a41bf60-80a3-4bae-9731-c250f4ed34ec does not exist
Oct  1 12:22:40 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev b9af85e1-d01a-4b5f-96ac-db3579570d51 does not exist
Oct  1 12:22:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:22:41 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:22:41 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:22:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:22:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:22:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:22:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:22:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:22:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:22:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:44 np0005464891 systemd-logind[801]: New session 44 of user zuul.
Oct  1 12:22:44 np0005464891 systemd[1]: Started Session 44 of User zuul.
Oct  1 12:22:45 np0005464891 python3.9[132766]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:22:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:22:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:46 np0005464891 python3.9[132922]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 12:22:47 np0005464891 python3.9[133006]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  1 12:22:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:49 np0005464891 python3.9[133157]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:22:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:51 np0005464891 python3.9[133308]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  1 12:22:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:22:51 np0005464891 python3.9[133458]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:22:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:52 np0005464891 python3.9[133608]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:22:52 np0005464891 systemd-logind[801]: Session 44 logged out. Waiting for processes to exit.
Oct  1 12:22:52 np0005464891 systemd[1]: session-44.scope: Deactivated successfully.
Oct  1 12:22:52 np0005464891 systemd[1]: session-44.scope: Consumed 6.045s CPU time.
Oct  1 12:22:52 np0005464891 systemd-logind[801]: Removed session 44.
Oct  1 12:22:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:22:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:57 np0005464891 systemd-logind[801]: New session 45 of user zuul.
Oct  1 12:22:57 np0005464891 systemd[1]: Started Session 45 of User zuul.
Oct  1 12:22:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:22:58 np0005464891 python3.9[133786]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:23:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:00 np0005464891 python3.9[133942]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:23:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:23:01 np0005464891 python3.9[134094]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:23:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:02 np0005464891 python3.9[134246]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:23:03 np0005464891 python3.9[134369]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335781.741765-65-229795595895996/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=92ce01107e0768498f4dc0de5bfe0100296963db backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:23:03 np0005464891 python3.9[134521]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:23:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:04 np0005464891 python3.9[134644]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335783.428221-65-75540428594464/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=2e3dc6cec57fe7855fd309c688409b7bb3ce62c9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:23:05 np0005464891 python3.9[134796]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:23:05 np0005464891 python3.9[134919]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335784.742442-65-79242507821957/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=4fc63d97f89f60cfa2ba93d71c8c34f029b3f3b8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:23:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:23:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:06 np0005464891 python3.9[135071]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:23:07 np0005464891 python3.9[135223]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:23:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:08 np0005464891 python3.9[135375]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:23:08 np0005464891 python3.9[135498]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335787.5896642-124-42394310159314/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b7757568c3df9a2e89afdaef4cd905a77a98d703 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:23:09 np0005464891 python3.9[135650]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:23:10 np0005464891 python3.9[135773]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335789.0147748-124-84378302400249/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=16989653d20dd26f972f8efa1ff4a07be907c407 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:23:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:10 np0005464891 python3.9[135925]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:23:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:23:11 np0005464891 python3.9[136048]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335790.3415291-124-115467266019587/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=1904dc08acf60b2cef18ecc32d483c9b5bdf5030 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:23:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:23:11
Oct  1 12:23:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:23:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:23:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'volumes', 'images', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'vms']
Oct  1 12:23:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:23:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:23:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:23:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:23:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:23:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:23:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:23:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:23:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:23:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:23:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:23:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:23:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:23:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:23:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:23:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:23:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:23:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:12 np0005464891 python3.9[136200]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:23:13 np0005464891 python3.9[136352]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:23:13 np0005464891 python3.9[136504]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:23:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:14 np0005464891 python3.9[136627]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335793.2794724-183-97479967462350/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b45581a8b9bf279ca2fac12ff0f36351a39cf6a8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:23:15 np0005464891 python3.9[136779]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:23:15 np0005464891 python3.9[136902]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335794.6722848-183-236047835896615/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=16989653d20dd26f972f8efa1ff4a07be907c407 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:23:16.112274) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335796112303, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1596, "num_deletes": 251, "total_data_size": 2414982, "memory_usage": 2452136, "flush_reason": "Manual Compaction"}
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335796127747, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1395044, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7404, "largest_seqno": 8999, "table_properties": {"data_size": 1389786, "index_size": 2398, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14587, "raw_average_key_size": 20, "raw_value_size": 1377607, "raw_average_value_size": 1924, "num_data_blocks": 114, "num_entries": 716, "num_filter_entries": 716, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335634, "oldest_key_time": 1759335634, "file_creation_time": 1759335796, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 15538 microseconds, and 7286 cpu microseconds.
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:23:16.127806) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1395044 bytes OK
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:23:16.127830) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:23:16.129414) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:23:16.129437) EVENT_LOG_v1 {"time_micros": 1759335796129429, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:23:16.129488) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2407927, prev total WAL file size 2407927, number of live WAL files 2.
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:23:16.130758) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1362KB)], [20(7091KB)]
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335796130810, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8656410, "oldest_snapshot_seqno": -1}
Oct  1 12:23:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3390 keys, 6893631 bytes, temperature: kUnknown
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335796189666, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6893631, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6867520, "index_size": 16529, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 81296, "raw_average_key_size": 23, "raw_value_size": 6802795, "raw_average_value_size": 2006, "num_data_blocks": 733, "num_entries": 3390, "num_filter_entries": 3390, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759335796, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:23:16.189952) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6893631 bytes
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:23:16.191783) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.9 rd, 117.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 6.9 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(11.1) write-amplify(4.9) OK, records in: 3829, records dropped: 439 output_compression: NoCompression
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:23:16.191817) EVENT_LOG_v1 {"time_micros": 1759335796191800, "job": 6, "event": "compaction_finished", "compaction_time_micros": 58934, "compaction_time_cpu_micros": 30225, "output_level": 6, "num_output_files": 1, "total_output_size": 6893631, "num_input_records": 3829, "num_output_records": 3390, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335796192369, "job": 6, "event": "table_file_deletion", "file_number": 22}
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335796194764, "job": 6, "event": "table_file_deletion", "file_number": 20}
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:23:16.130603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:23:16.194810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:23:16.194815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:23:16.194816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:23:16.194818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:23:16 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:23:16.194820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:23:16 np0005464891 python3.9[137054]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:23:17 np0005464891 python3.9[137177]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335795.9601686-183-83216986927398/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ea1e310230a719aa5da818c28e7879666468af34 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:23:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:18 np0005464891 python3.9[137329]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:23:19 np0005464891 python3.9[137481]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:23:19 np0005464891 python3.9[137604]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335798.6676536-251-169390448294685/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5550135599e6eebc154c2000aa6ebbcac01cf5a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:23:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:20 np0005464891 python3.9[137756]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:23:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:23:21 np0005464891 python3.9[137908]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:23:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:23:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:23:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:23:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:23:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:23:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:23:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:23:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:23:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:23:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:23:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:23:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:23:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:23:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:23:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:23:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:23:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:23:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:23:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:23:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:23:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:23:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:23:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:23:22 np0005464891 python3.9[138033]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335800.983079-275-178914951282045/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5550135599e6eebc154c2000aa6ebbcac01cf5a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:23:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:22 np0005464891 python3.9[138185]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:23:23 np0005464891 python3.9[138337]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:23:24 np0005464891 python3.9[138460]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335803.011081-299-53486092556793/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5550135599e6eebc154c2000aa6ebbcac01cf5a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:23:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:24 np0005464891 python3.9[138612]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:23:25 np0005464891 python3.9[138764]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:23:25 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 12:23:25 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2000 writes, 8931 keys, 2000 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2000 writes, 2000 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2000 writes, 8931 keys, 2000 commit groups, 1.0 writes per commit group, ingest: 10.96 MB, 0.02 MB/s#012Interval WAL: 2000 writes, 2000 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    130.8      0.06              0.02         3    0.021       0      0       0.0       0.0#012  L6      1/0    6.57 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    120.0    106.4      0.13              0.07         2    0.063    7232    729       0.0       0.0#012 Sum      1/0    6.57 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     80.0    114.5      0.19              0.09         5    0.038    7232    729       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     81.7    116.6      0.19              0.09         4    0.047    7232    729       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    120.0    106.4      0.13              0.07         2    0.063    7232    729       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    138.4      0.06              0.02         2    0.030       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.6      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.008, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.03 MB/s read, 0.2 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.03 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55bddc5951f0#2 capacity: 308.00 MB usage: 584.75 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(35,493.03 KB,0.156323%) FilterBlock(6,28.55 KB,0.00905124%) IndexBlock(6,63.17 KB,0.0200296%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  1 12:23:26 np0005464891 python3.9[138887]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335805.0138767-323-164848504530639/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5550135599e6eebc154c2000aa6ebbcac01cf5a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:23:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:23:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:26 np0005464891 python3.9[139039]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:23:27 np0005464891 python3.9[139191]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:23:27 np0005464891 python3.9[139314]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335806.7735364-347-85907728540372/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5550135599e6eebc154c2000aa6ebbcac01cf5a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:23:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:28 np0005464891 python3.9[139466]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:23:29 np0005464891 python3.9[139618]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:23:29 np0005464891 python3.9[139741]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335808.6205354-371-217012783215071/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5550135599e6eebc154c2000aa6ebbcac01cf5a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:23:29 np0005464891 systemd[1]: session-45.scope: Deactivated successfully.
Oct  1 12:23:29 np0005464891 systemd[1]: session-45.scope: Consumed 25.235s CPU time.
Oct  1 12:23:29 np0005464891 systemd-logind[801]: Session 45 logged out. Waiting for processes to exit.
Oct  1 12:23:29 np0005464891 systemd-logind[801]: Removed session 45.
Oct  1 12:23:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:23:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:35 np0005464891 systemd-logind[801]: New session 46 of user zuul.
Oct  1 12:23:35 np0005464891 systemd[1]: Started Session 46 of User zuul.
Oct  1 12:23:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:23:36 np0005464891 python3.9[139921]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:23:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:36 np0005464891 python3.9[140073]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:23:37 np0005464891 python3.9[140196]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759335816.3266723-34-271808751564905/.source.conf _original_basename=ceph.conf follow=False checksum=d73f2d651d66d624d24fe92d4d628cf95ea79f40 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:23:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:38 np0005464891 python3.9[140348]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:23:38 np0005464891 python3.9[140471]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759335817.803296-34-155665749704859/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=481d67d46ef630aeafdb22315b77310ef59269d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:23:39 np0005464891 systemd[1]: session-46.scope: Deactivated successfully.
Oct  1 12:23:39 np0005464891 systemd[1]: session-46.scope: Consumed 2.772s CPU time.
Oct  1 12:23:39 np0005464891 systemd-logind[801]: Session 46 logged out. Waiting for processes to exit.
Oct  1 12:23:39 np0005464891 systemd-logind[801]: Removed session 46.
Oct  1 12:23:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:23:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:23:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:23:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:23:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:23:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:23:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:23:41 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev a9f0f9cb-f425-494d-be84-43da38a07c70 does not exist
Oct  1 12:23:41 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 46aba193-66bd-4891-8f4d-f2aae69f5b65 does not exist
Oct  1 12:23:41 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 9a9f4b22-11bf-4553-ae5f-8e8eadc4de58 does not exist
Oct  1 12:23:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:23:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:23:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:23:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:23:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:23:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:23:41 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:23:41 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:23:41 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:23:41 np0005464891 podman[140764]: 2025-10-01 16:23:41.919421966 +0000 UTC m=+0.038460939 container create 44da33985027a6a10900a6b83cd4158aefc15564714f266e861d573e743b2c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 12:23:41 np0005464891 systemd[1]: Started libpod-conmon-44da33985027a6a10900a6b83cd4158aefc15564714f266e861d573e743b2c55.scope.
Oct  1 12:23:41 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:23:41 np0005464891 podman[140764]: 2025-10-01 16:23:41.902466647 +0000 UTC m=+0.021505650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:23:42 np0005464891 podman[140764]: 2025-10-01 16:23:42.004679224 +0000 UTC m=+0.123718267 container init 44da33985027a6a10900a6b83cd4158aefc15564714f266e861d573e743b2c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 12:23:42 np0005464891 podman[140764]: 2025-10-01 16:23:42.014323669 +0000 UTC m=+0.133362672 container start 44da33985027a6a10900a6b83cd4158aefc15564714f266e861d573e743b2c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:23:42 np0005464891 podman[140764]: 2025-10-01 16:23:42.0177148 +0000 UTC m=+0.136753773 container attach 44da33985027a6a10900a6b83cd4158aefc15564714f266e861d573e743b2c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:23:42 np0005464891 clever_keldysh[140780]: 167 167
Oct  1 12:23:42 np0005464891 systemd[1]: libpod-44da33985027a6a10900a6b83cd4158aefc15564714f266e861d573e743b2c55.scope: Deactivated successfully.
Oct  1 12:23:42 np0005464891 podman[140764]: 2025-10-01 16:23:42.022033694 +0000 UTC m=+0.141072707 container died 44da33985027a6a10900a6b83cd4158aefc15564714f266e861d573e743b2c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  1 12:23:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:23:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:23:42 np0005464891 systemd[1]: var-lib-containers-storage-overlay-78079b5547bd7c2d73970f2b5c3b1471e735c67ff41c4f869eb4301ddcc7f33d-merged.mount: Deactivated successfully.
Oct  1 12:23:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:23:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:23:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:23:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:23:42 np0005464891 podman[140764]: 2025-10-01 16:23:42.057942905 +0000 UTC m=+0.176981878 container remove 44da33985027a6a10900a6b83cd4158aefc15564714f266e861d573e743b2c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  1 12:23:42 np0005464891 systemd[1]: libpod-conmon-44da33985027a6a10900a6b83cd4158aefc15564714f266e861d573e743b2c55.scope: Deactivated successfully.
Oct  1 12:23:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:42 np0005464891 podman[140804]: 2025-10-01 16:23:42.268788567 +0000 UTC m=+0.065976567 container create a1a66c229d0d9f425ae27a2af475413afb222ceb2ef89b5df841c43005e07bad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bartik, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 12:23:42 np0005464891 systemd[1]: Started libpod-conmon-a1a66c229d0d9f425ae27a2af475413afb222ceb2ef89b5df841c43005e07bad.scope.
Oct  1 12:23:42 np0005464891 podman[140804]: 2025-10-01 16:23:42.243331913 +0000 UTC m=+0.040519963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:23:42 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:23:42 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe3e9c9d4f483d5cec7f329da1bc88c959e896d200afa2c6ef543696c2668307/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:23:42 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe3e9c9d4f483d5cec7f329da1bc88c959e896d200afa2c6ef543696c2668307/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:23:42 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe3e9c9d4f483d5cec7f329da1bc88c959e896d200afa2c6ef543696c2668307/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:23:42 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe3e9c9d4f483d5cec7f329da1bc88c959e896d200afa2c6ef543696c2668307/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:23:42 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe3e9c9d4f483d5cec7f329da1bc88c959e896d200afa2c6ef543696c2668307/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:23:42 np0005464891 podman[140804]: 2025-10-01 16:23:42.364312306 +0000 UTC m=+0.161500306 container init a1a66c229d0d9f425ae27a2af475413afb222ceb2ef89b5df841c43005e07bad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 12:23:42 np0005464891 podman[140804]: 2025-10-01 16:23:42.378190294 +0000 UTC m=+0.175378284 container start a1a66c229d0d9f425ae27a2af475413afb222ceb2ef89b5df841c43005e07bad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:23:42 np0005464891 podman[140804]: 2025-10-01 16:23:42.382509579 +0000 UTC m=+0.179697579 container attach a1a66c229d0d9f425ae27a2af475413afb222ceb2ef89b5df841c43005e07bad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bartik, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Oct  1 12:23:43 np0005464891 confident_bartik[140821]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:23:43 np0005464891 confident_bartik[140821]: --> relative data size: 1.0
Oct  1 12:23:43 np0005464891 confident_bartik[140821]: --> All data devices are unavailable
Oct  1 12:23:43 np0005464891 systemd[1]: libpod-a1a66c229d0d9f425ae27a2af475413afb222ceb2ef89b5df841c43005e07bad.scope: Deactivated successfully.
Oct  1 12:23:43 np0005464891 systemd[1]: libpod-a1a66c229d0d9f425ae27a2af475413afb222ceb2ef89b5df841c43005e07bad.scope: Consumed 1.057s CPU time.
Oct  1 12:23:43 np0005464891 podman[140804]: 2025-10-01 16:23:43.493127976 +0000 UTC m=+1.290315946 container died a1a66c229d0d9f425ae27a2af475413afb222ceb2ef89b5df841c43005e07bad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bartik, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 12:23:43 np0005464891 systemd[1]: var-lib-containers-storage-overlay-fe3e9c9d4f483d5cec7f329da1bc88c959e896d200afa2c6ef543696c2668307-merged.mount: Deactivated successfully.
Oct  1 12:23:43 np0005464891 podman[140804]: 2025-10-01 16:23:43.572419836 +0000 UTC m=+1.369607796 container remove a1a66c229d0d9f425ae27a2af475413afb222ceb2ef89b5df841c43005e07bad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bartik, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:23:43 np0005464891 systemd[1]: libpod-conmon-a1a66c229d0d9f425ae27a2af475413afb222ceb2ef89b5df841c43005e07bad.scope: Deactivated successfully.
Oct  1 12:23:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:44 np0005464891 podman[141004]: 2025-10-01 16:23:44.285344184 +0000 UTC m=+0.056698643 container create 9e81f028f272a81e583adbea90b1e2aec4525797fe9d24bb9c4b0e3b4f7a633d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 12:23:44 np0005464891 systemd[1]: Started libpod-conmon-9e81f028f272a81e583adbea90b1e2aec4525797fe9d24bb9c4b0e3b4f7a633d.scope.
Oct  1 12:23:44 np0005464891 podman[141004]: 2025-10-01 16:23:44.266275618 +0000 UTC m=+0.037630067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:23:44 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:23:44 np0005464891 podman[141004]: 2025-10-01 16:23:44.376217059 +0000 UTC m=+0.147571528 container init 9e81f028f272a81e583adbea90b1e2aec4525797fe9d24bb9c4b0e3b4f7a633d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_grothendieck, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  1 12:23:44 np0005464891 podman[141004]: 2025-10-01 16:23:44.38262784 +0000 UTC m=+0.153982259 container start 9e81f028f272a81e583adbea90b1e2aec4525797fe9d24bb9c4b0e3b4f7a633d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_grothendieck, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Oct  1 12:23:44 np0005464891 podman[141004]: 2025-10-01 16:23:44.385612229 +0000 UTC m=+0.156966668 container attach 9e81f028f272a81e583adbea90b1e2aec4525797fe9d24bb9c4b0e3b4f7a633d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_grothendieck, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 12:23:44 np0005464891 systemd[1]: libpod-9e81f028f272a81e583adbea90b1e2aec4525797fe9d24bb9c4b0e3b4f7a633d.scope: Deactivated successfully.
Oct  1 12:23:44 np0005464891 exciting_grothendieck[141020]: 167 167
Oct  1 12:23:44 np0005464891 podman[141004]: 2025-10-01 16:23:44.388617168 +0000 UTC m=+0.159971617 container died 9e81f028f272a81e583adbea90b1e2aec4525797fe9d24bb9c4b0e3b4f7a633d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Oct  1 12:23:44 np0005464891 conmon[141020]: conmon 9e81f028f272a81e583a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e81f028f272a81e583adbea90b1e2aec4525797fe9d24bb9c4b0e3b4f7a633d.scope/container/memory.events
Oct  1 12:23:44 np0005464891 systemd[1]: var-lib-containers-storage-overlay-78e63bb061a8e4881bc432c27313c61046d2deb19595898122e72e77d2ff2f1a-merged.mount: Deactivated successfully.
Oct  1 12:23:44 np0005464891 systemd-logind[801]: New session 47 of user zuul.
Oct  1 12:23:44 np0005464891 podman[141004]: 2025-10-01 16:23:44.43363844 +0000 UTC m=+0.204992899 container remove 9e81f028f272a81e583adbea90b1e2aec4525797fe9d24bb9c4b0e3b4f7a633d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_grothendieck, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 12:23:44 np0005464891 systemd[1]: Started Session 47 of User zuul.
Oct  1 12:23:44 np0005464891 systemd[1]: libpod-conmon-9e81f028f272a81e583adbea90b1e2aec4525797fe9d24bb9c4b0e3b4f7a633d.scope: Deactivated successfully.
Oct  1 12:23:44 np0005464891 podman[141077]: 2025-10-01 16:23:44.618081434 +0000 UTC m=+0.044713595 container create 3676dc89d384cde28584f9bf9f05b10ac178684ce4ac6fa72339eafcf89b2d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:23:44 np0005464891 systemd[1]: Started libpod-conmon-3676dc89d384cde28584f9bf9f05b10ac178684ce4ac6fa72339eafcf89b2d6e.scope.
Oct  1 12:23:44 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:23:44 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b3d19ab72394199fdcb9f267fd3755fe8549ddc9a4232ec57111f193400b415/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:23:44 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b3d19ab72394199fdcb9f267fd3755fe8549ddc9a4232ec57111f193400b415/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:23:44 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b3d19ab72394199fdcb9f267fd3755fe8549ddc9a4232ec57111f193400b415/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:23:44 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b3d19ab72394199fdcb9f267fd3755fe8549ddc9a4232ec57111f193400b415/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:23:44 np0005464891 podman[141077]: 2025-10-01 16:23:44.597687395 +0000 UTC m=+0.024319616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:23:44 np0005464891 podman[141077]: 2025-10-01 16:23:44.697832466 +0000 UTC m=+0.124464727 container init 3676dc89d384cde28584f9bf9f05b10ac178684ce4ac6fa72339eafcf89b2d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:23:44 np0005464891 podman[141077]: 2025-10-01 16:23:44.710503142 +0000 UTC m=+0.137135333 container start 3676dc89d384cde28584f9bf9f05b10ac178684ce4ac6fa72339eafcf89b2d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 12:23:44 np0005464891 podman[141077]: 2025-10-01 16:23:44.71386146 +0000 UTC m=+0.140493711 container attach 3676dc89d384cde28584f9bf9f05b10ac178684ce4ac6fa72339eafcf89b2d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]: {
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:    "0": [
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:        {
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "devices": [
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "/dev/loop3"
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            ],
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "lv_name": "ceph_lv0",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "lv_size": "21470642176",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "name": "ceph_lv0",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "tags": {
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.cluster_name": "ceph",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.crush_device_class": "",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.encrypted": "0",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.osd_id": "0",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.type": "block",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.vdo": "0"
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            },
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "type": "block",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "vg_name": "ceph_vg0"
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:        }
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:    ],
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:    "1": [
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:        {
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "devices": [
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "/dev/loop4"
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            ],
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "lv_name": "ceph_lv1",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "lv_size": "21470642176",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "name": "ceph_lv1",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "tags": {
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.cluster_name": "ceph",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.crush_device_class": "",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.encrypted": "0",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.osd_id": "1",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.type": "block",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.vdo": "0"
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            },
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "type": "block",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "vg_name": "ceph_vg1"
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:        }
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:    ],
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:    "2": [
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:        {
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "devices": [
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "/dev/loop5"
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            ],
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "lv_name": "ceph_lv2",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "lv_size": "21470642176",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "name": "ceph_lv2",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "tags": {
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.cluster_name": "ceph",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.crush_device_class": "",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.encrypted": "0",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.osd_id": "2",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.type": "block",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:                "ceph.vdo": "0"
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            },
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "type": "block",
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:            "vg_name": "ceph_vg2"
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:        }
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]:    ]
Oct  1 12:23:45 np0005464891 elegant_thompson[141117]: }
Oct  1 12:23:45 np0005464891 python3.9[141219]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:23:45 np0005464891 systemd[1]: libpod-3676dc89d384cde28584f9bf9f05b10ac178684ce4ac6fa72339eafcf89b2d6e.scope: Deactivated successfully.
Oct  1 12:23:45 np0005464891 podman[141225]: 2025-10-01 16:23:45.529000014 +0000 UTC m=+0.022437455 container died 3676dc89d384cde28584f9bf9f05b10ac178684ce4ac6fa72339eafcf89b2d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:23:45 np0005464891 systemd[1]: var-lib-containers-storage-overlay-7b3d19ab72394199fdcb9f267fd3755fe8549ddc9a4232ec57111f193400b415-merged.mount: Deactivated successfully.
Oct  1 12:23:45 np0005464891 podman[141225]: 2025-10-01 16:23:45.607969505 +0000 UTC m=+0.101406946 container remove 3676dc89d384cde28584f9bf9f05b10ac178684ce4ac6fa72339eafcf89b2d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:23:45 np0005464891 systemd[1]: libpod-conmon-3676dc89d384cde28584f9bf9f05b10ac178684ce4ac6fa72339eafcf89b2d6e.scope: Deactivated successfully.
Oct  1 12:23:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:23:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:46 np0005464891 podman[141459]: 2025-10-01 16:23:46.258546241 +0000 UTC m=+0.041300663 container create 461de8d6062c076d4f64c587f520f54db04d19b58faf255264d9a3c9d75eff6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 12:23:46 np0005464891 systemd[1]: Started libpod-conmon-461de8d6062c076d4f64c587f520f54db04d19b58faf255264d9a3c9d75eff6a.scope.
Oct  1 12:23:46 np0005464891 podman[141459]: 2025-10-01 16:23:46.238697716 +0000 UTC m=+0.021452108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:23:46 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:23:46 np0005464891 podman[141459]: 2025-10-01 16:23:46.360857301 +0000 UTC m=+0.143611703 container init 461de8d6062c076d4f64c587f520f54db04d19b58faf255264d9a3c9d75eff6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_payne, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:23:46 np0005464891 podman[141459]: 2025-10-01 16:23:46.372280203 +0000 UTC m=+0.155034625 container start 461de8d6062c076d4f64c587f520f54db04d19b58faf255264d9a3c9d75eff6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  1 12:23:46 np0005464891 nifty_payne[141498]: 167 167
Oct  1 12:23:46 np0005464891 systemd[1]: libpod-461de8d6062c076d4f64c587f520f54db04d19b58faf255264d9a3c9d75eff6a.scope: Deactivated successfully.
Oct  1 12:23:46 np0005464891 conmon[141498]: conmon 461de8d6062c076d4f64 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-461de8d6062c076d4f64c587f520f54db04d19b58faf255264d9a3c9d75eff6a.scope/container/memory.events
Oct  1 12:23:46 np0005464891 podman[141459]: 2025-10-01 16:23:46.381863567 +0000 UTC m=+0.164617999 container attach 461de8d6062c076d4f64c587f520f54db04d19b58faf255264d9a3c9d75eff6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_payne, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:23:46 np0005464891 podman[141459]: 2025-10-01 16:23:46.384889777 +0000 UTC m=+0.167644209 container died 461de8d6062c076d4f64c587f520f54db04d19b58faf255264d9a3c9d75eff6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_payne, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 12:23:46 np0005464891 systemd[1]: var-lib-containers-storage-overlay-70b8aa27548f99f989f7f9d908bd1418d72582b873db270bb5dcdac6976cb84b-merged.mount: Deactivated successfully.
Oct  1 12:23:46 np0005464891 podman[141459]: 2025-10-01 16:23:46.483298483 +0000 UTC m=+0.266052865 container remove 461de8d6062c076d4f64c587f520f54db04d19b58faf255264d9a3c9d75eff6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:23:46 np0005464891 systemd[1]: libpod-conmon-461de8d6062c076d4f64c587f520f54db04d19b58faf255264d9a3c9d75eff6a.scope: Deactivated successfully.
Oct  1 12:23:46 np0005464891 podman[141575]: 2025-10-01 16:23:46.672873992 +0000 UTC m=+0.050103437 container create 939219b0426e1c1c7413eb592088fedc7ead8fd267dd963498c45100c355e69b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 12:23:46 np0005464891 python3.9[141567]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:23:46 np0005464891 systemd[1]: Started libpod-conmon-939219b0426e1c1c7413eb592088fedc7ead8fd267dd963498c45100c355e69b.scope.
Oct  1 12:23:46 np0005464891 podman[141575]: 2025-10-01 16:23:46.653411047 +0000 UTC m=+0.030640532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:23:46 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:23:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74570a161c85902c85c516d80a4bbba417a4caa7e75f9a254ce4f139a7b2a5d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:23:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74570a161c85902c85c516d80a4bbba417a4caa7e75f9a254ce4f139a7b2a5d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:23:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74570a161c85902c85c516d80a4bbba417a4caa7e75f9a254ce4f139a7b2a5d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:23:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74570a161c85902c85c516d80a4bbba417a4caa7e75f9a254ce4f139a7b2a5d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:23:46 np0005464891 podman[141575]: 2025-10-01 16:23:46.769864481 +0000 UTC m=+0.147093936 container init 939219b0426e1c1c7413eb592088fedc7ead8fd267dd963498c45100c355e69b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:23:46 np0005464891 podman[141575]: 2025-10-01 16:23:46.780945044 +0000 UTC m=+0.158174479 container start 939219b0426e1c1c7413eb592088fedc7ead8fd267dd963498c45100c355e69b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_zhukovsky, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:23:46 np0005464891 podman[141575]: 2025-10-01 16:23:46.784294183 +0000 UTC m=+0.161523648 container attach 939219b0426e1c1c7413eb592088fedc7ead8fd267dd963498c45100c355e69b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:23:47 np0005464891 python3.9[141747]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:23:47 np0005464891 nice_zhukovsky[141591]: {
Oct  1 12:23:47 np0005464891 nice_zhukovsky[141591]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:23:47 np0005464891 nice_zhukovsky[141591]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:23:47 np0005464891 nice_zhukovsky[141591]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:23:47 np0005464891 nice_zhukovsky[141591]:        "osd_id": 2,
Oct  1 12:23:47 np0005464891 nice_zhukovsky[141591]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:23:47 np0005464891 nice_zhukovsky[141591]:        "type": "bluestore"
Oct  1 12:23:47 np0005464891 nice_zhukovsky[141591]:    },
Oct  1 12:23:47 np0005464891 nice_zhukovsky[141591]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:23:47 np0005464891 nice_zhukovsky[141591]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:23:47 np0005464891 nice_zhukovsky[141591]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:23:47 np0005464891 nice_zhukovsky[141591]:        "osd_id": 0,
Oct  1 12:23:47 np0005464891 nice_zhukovsky[141591]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:23:47 np0005464891 nice_zhukovsky[141591]:        "type": "bluestore"
Oct  1 12:23:47 np0005464891 nice_zhukovsky[141591]:    },
Oct  1 12:23:47 np0005464891 nice_zhukovsky[141591]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:23:47 np0005464891 nice_zhukovsky[141591]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:23:47 np0005464891 nice_zhukovsky[141591]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:23:47 np0005464891 nice_zhukovsky[141591]:        "osd_id": 1,
Oct  1 12:23:47 np0005464891 nice_zhukovsky[141591]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:23:47 np0005464891 nice_zhukovsky[141591]:        "type": "bluestore"
Oct  1 12:23:47 np0005464891 nice_zhukovsky[141591]:    }
Oct  1 12:23:47 np0005464891 nice_zhukovsky[141591]: }
Oct  1 12:23:47 np0005464891 systemd[1]: libpod-939219b0426e1c1c7413eb592088fedc7ead8fd267dd963498c45100c355e69b.scope: Deactivated successfully.
Oct  1 12:23:47 np0005464891 podman[141575]: 2025-10-01 16:23:47.779261278 +0000 UTC m=+1.156490723 container died 939219b0426e1c1c7413eb592088fedc7ead8fd267dd963498c45100c355e69b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:23:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:48 np0005464891 python3.9[141932]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:23:48 np0005464891 systemd[1]: var-lib-containers-storage-overlay-74570a161c85902c85c516d80a4bbba417a4caa7e75f9a254ce4f139a7b2a5d2-merged.mount: Deactivated successfully.
Oct  1 12:23:48 np0005464891 podman[141575]: 2025-10-01 16:23:48.338524717 +0000 UTC m=+1.715754192 container remove 939219b0426e1c1c7413eb592088fedc7ead8fd267dd963498c45100c355e69b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct  1 12:23:48 np0005464891 systemd[1]: libpod-conmon-939219b0426e1c1c7413eb592088fedc7ead8fd267dd963498c45100c355e69b.scope: Deactivated successfully.
Oct  1 12:23:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:23:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:23:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:23:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:23:48 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 146bc011-fe6b-4dfd-a2f4-1eabdd2705b6 does not exist
Oct  1 12:23:48 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 92ad7a41-cb50-4403-8a9d-655e3ddf1c4a does not exist
Oct  1 12:23:49 np0005464891 python3.9[142139]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct  1 12:23:49 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:23:49 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:23:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:50 np0005464891 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct  1 12:23:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:23:51 np0005464891 python3.9[142295]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 12:23:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:52 np0005464891 python3.9[142379]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 12:23:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:54 np0005464891 python3.9[142532]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  1 12:23:55 np0005464891 python3[142687]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct  1 12:23:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:23:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:56 np0005464891 python3.9[142839]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:23:57 np0005464891 python3.9[142991]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:23:57 np0005464891 python3.9[143069]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:23:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:23:58 np0005464891 python3.9[143221]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:23:59 np0005464891 python3.9[143299]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.lsnil3uo recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:23:59 np0005464891 python3.9[143451]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:24:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:00 np0005464891 python3.9[143529]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:24:01 np0005464891 python3.9[143681]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:24:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:24:01 np0005464891 python3[143834]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  1 12:24:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:02 np0005464891 python3.9[143986]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:24:03 np0005464891 python3.9[144111]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335842.1686215-157-124530543632804/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:24:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:04 np0005464891 python3.9[144263]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:24:04 np0005464891 python3.9[144388]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335843.678657-172-75016864727580/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:24:05 np0005464891 python3.9[144540]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:24:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:24:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:06 np0005464891 python3.9[144665]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335845.079041-187-107849555081862/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:24:06 np0005464891 python3.9[144817]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:24:07 np0005464891 python3.9[144942]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335846.4106355-202-15255449473745/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:24:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:08 np0005464891 python3.9[145094]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:24:08 np0005464891 python3.9[145219]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759335847.7408009-217-135177136234504/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:24:09 np0005464891 python3.9[145371]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:24:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:10 np0005464891 python3.9[145523]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:24:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:24:11 np0005464891 python3.9[145678]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:24:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:24:11
Oct  1 12:24:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:24:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:24:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'volumes', '.rgw.root', 'default.rgw.control', 'backups', 'vms', 'default.rgw.log', '.mgr']
Oct  1 12:24:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:24:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:24:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:24:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:24:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:24:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:24:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:24:12 np0005464891 python3.9[145830]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:24:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:24:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:24:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:24:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:24:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:24:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:24:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:24:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:24:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:24:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:24:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:12 np0005464891 python3.9[145983]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:24:13 np0005464891 python3.9[146137]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:24:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:14 np0005464891 python3.9[146292]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:24:15 np0005464891 python3.9[146442]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:24:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:24:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:16 np0005464891 python3.9[146595]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:74:f6:ca:ec" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:24:16 np0005464891 ovs-vsctl[146596]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:74:f6:ca:ec external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct  1 12:24:17 np0005464891 python3.9[146748]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:24:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:18 np0005464891 python3.9[146903]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:24:18 np0005464891 ovs-vsctl[146904]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct  1 12:24:19 np0005464891 python3.9[147054]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:24:19 np0005464891 python3.9[147208]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:24:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:20 np0005464891 python3.9[147360]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:24:20.767798) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335860767894, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 742, "num_deletes": 251, "total_data_size": 980709, "memory_usage": 995144, "flush_reason": "Manual Compaction"}
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335860776528, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 972235, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9000, "largest_seqno": 9741, "table_properties": {"data_size": 968376, "index_size": 1639, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8182, "raw_average_key_size": 18, "raw_value_size": 960740, "raw_average_value_size": 2178, "num_data_blocks": 76, "num_entries": 441, "num_filter_entries": 441, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335797, "oldest_key_time": 1759335797, "file_creation_time": 1759335860, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 8756 microseconds, and 5465 cpu microseconds.
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:24:20.776567) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 972235 bytes OK
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:24:20.776584) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:24:20.777767) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:24:20.777782) EVENT_LOG_v1 {"time_micros": 1759335860777777, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:24:20.777799) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 976920, prev total WAL file size 976920, number of live WAL files 2.
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:24:20.778517) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(949KB)], [23(6732KB)]
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335860778612, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7865866, "oldest_snapshot_seqno": -1}
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3317 keys, 6316215 bytes, temperature: kUnknown
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335860821810, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6316215, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6291623, "index_size": 15203, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 80553, "raw_average_key_size": 24, "raw_value_size": 6229203, "raw_average_value_size": 1877, "num_data_blocks": 663, "num_entries": 3317, "num_filter_entries": 3317, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759335860, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:24:20.822001) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6316215 bytes
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:24:20.822939) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.0 rd, 147.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 6.6 +0.0 blob) out(6.0 +0.0 blob), read-write-amplify(14.6) write-amplify(6.5) OK, records in: 3831, records dropped: 514 output_compression: NoCompression
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:24:20.822954) EVENT_LOG_v1 {"time_micros": 1759335860822947, "job": 8, "event": "compaction_finished", "compaction_time_micros": 42975, "compaction_time_cpu_micros": 29880, "output_level": 6, "num_output_files": 1, "total_output_size": 6316215, "num_input_records": 3831, "num_output_records": 3317, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335860823168, "job": 8, "event": "table_file_deletion", "file_number": 25}
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759335860824470, "job": 8, "event": "table_file_deletion", "file_number": 23}
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:24:20.778241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:24:20.824496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:24:20.824506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:24:20.824508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:24:20.824509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:24:20 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:24:20.824511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:24:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:24:21 np0005464891 python3.9[147438]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:24:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:24:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:24:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:24:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:24:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:24:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:24:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:24:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:24:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:24:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:24:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:24:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:24:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:24:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:24:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:24:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:24:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:24:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:24:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:24:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:24:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:24:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:24:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:24:21 np0005464891 python3.9[147590]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:24:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:22 np0005464891 python3.9[147668]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:24:22 np0005464891 python3.9[147820]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:24:23 np0005464891 python3.9[147972]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:24:24 np0005464891 python3.9[148050]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:24:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:24 np0005464891 python3.9[148202]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:24:25 np0005464891 python3.9[148280]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:24:26 np0005464891 python3.9[148432]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:24:26 np0005464891 systemd[1]: Reloading.
Oct  1 12:24:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:24:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:26 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:24:26 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:24:27 np0005464891 python3.9[148622]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:24:27 np0005464891 python3.9[148700]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:24:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:28 np0005464891 python3.9[148852]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:24:28 np0005464891 python3.9[148930]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:24:29 np0005464891 python3.9[149082]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:24:29 np0005464891 systemd[1]: Reloading.
Oct  1 12:24:29 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:24:29 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:24:30 np0005464891 systemd[1]: Starting Create netns directory...
Oct  1 12:24:30 np0005464891 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  1 12:24:30 np0005464891 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  1 12:24:30 np0005464891 systemd[1]: Finished Create netns directory.
Oct  1 12:24:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:31 np0005464891 python3.9[149276]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:24:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:24:31 np0005464891 python3.9[149428]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:24:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:32 np0005464891 python3.9[149551]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759335871.2300956-468-125614563754149/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:24:33 np0005464891 python3.9[149703]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:24:34 np0005464891 python3.9[149855]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:24:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:34 np0005464891 python3.9[149978]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759335873.5203073-493-236359468340857/.source.json _original_basename=.x7gfjmkw follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:24:35 np0005464891 python3.9[150130]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:24:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:24:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:37 np0005464891 python3.9[150557]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct  1 12:24:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:38 np0005464891 python3.9[150709]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  1 12:24:39 np0005464891 python3.9[150861]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  1 12:24:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:24:41 np0005464891 python3[151039]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  1 12:24:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:24:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:24:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:24:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:24:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:24:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:24:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:45 np0005464891 podman[151052]: 2025-10-01 16:24:45.933909833 +0000 UTC m=+4.742849478 image pull ceb6fcca0131acbc0ff37d5322c126e14f8045fca848e7440fedac2d6444d8c2 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct  1 12:24:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:24:46 np0005464891 podman[151168]: 2025-10-01 16:24:46.172946856 +0000 UTC m=+0.068163088 container create 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:24:46 np0005464891 podman[151168]: 2025-10-01 16:24:46.142688516 +0000 UTC m=+0.037904738 image pull ceb6fcca0131acbc0ff37d5322c126e14f8045fca848e7440fedac2d6444d8c2 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct  1 12:24:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:46 np0005464891 python3[151039]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct  1 12:24:47 np0005464891 python3.9[151358]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:24:47 np0005464891 python3.9[151512]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:24:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:48 np0005464891 python3.9[151588]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:24:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:24:49 np0005464891 python3.9[151840]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759335888.5500648-581-124907153417500/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:24:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:24:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:24:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:24:49 np0005464891 python3.9[152032]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  1 12:24:49 np0005464891 systemd[1]: Reloading.
Oct  1 12:24:49 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:24:49 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:24:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:24:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:24:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:24:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:24:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:24:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:24:50 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 23884e6f-b94a-4328-990a-5d43474cda5a does not exist
Oct  1 12:24:50 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 2985c5d2-3774-44df-a2b2-dd1a7330f044 does not exist
Oct  1 12:24:50 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev bb38d301-db6a-488f-9b97-699124b38fd6 does not exist
Oct  1 12:24:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:24:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:24:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:24:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:24:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:24:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:24:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:50 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:24:50 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:24:50 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:24:50 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:24:50 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:24:50 np0005464891 python3.9[152277]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:24:50 np0005464891 podman[152322]: 2025-10-01 16:24:50.812064407 +0000 UTC m=+0.055956337 container create c622bc339b0502adf1ac5fdf72526076560f265a93449960fe2c470f85a300cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:24:50 np0005464891 systemd[1]: Started libpod-conmon-c622bc339b0502adf1ac5fdf72526076560f265a93449960fe2c470f85a300cf.scope.
Oct  1 12:24:50 np0005464891 systemd[1]: Reloading.
Oct  1 12:24:50 np0005464891 podman[152322]: 2025-10-01 16:24:50.78781915 +0000 UTC m=+0.031711080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:24:50 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:24:50 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:24:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:24:51 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:24:51 np0005464891 systemd[1]: Starting ovn_controller container...
Oct  1 12:24:51 np0005464891 podman[152322]: 2025-10-01 16:24:51.214198855 +0000 UTC m=+0.458090775 container init c622bc339b0502adf1ac5fdf72526076560f265a93449960fe2c470f85a300cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chebyshev, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:24:51 np0005464891 podman[152322]: 2025-10-01 16:24:51.222551752 +0000 UTC m=+0.466443692 container start c622bc339b0502adf1ac5fdf72526076560f265a93449960fe2c470f85a300cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 12:24:51 np0005464891 podman[152322]: 2025-10-01 16:24:51.226427127 +0000 UTC m=+0.470319477 container attach c622bc339b0502adf1ac5fdf72526076560f265a93449960fe2c470f85a300cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  1 12:24:51 np0005464891 flamboyant_chebyshev[152341]: 167 167
Oct  1 12:24:51 np0005464891 systemd[1]: libpod-c622bc339b0502adf1ac5fdf72526076560f265a93449960fe2c470f85a300cf.scope: Deactivated successfully.
Oct  1 12:24:51 np0005464891 podman[152322]: 2025-10-01 16:24:51.229560792 +0000 UTC m=+0.473452722 container died c622bc339b0502adf1ac5fdf72526076560f265a93449960fe2c470f85a300cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 12:24:51 np0005464891 systemd[1]: var-lib-containers-storage-overlay-9fd1163460ed65158a47fc282e21203fe820f850fec3790dd398257d13d08c68-merged.mount: Deactivated successfully.
Oct  1 12:24:51 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:24:51 np0005464891 podman[152322]: 2025-10-01 16:24:51.30261795 +0000 UTC m=+0.546509850 container remove c622bc339b0502adf1ac5fdf72526076560f265a93449960fe2c470f85a300cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:24:51 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/191568c488e8c9204dce9737b10556f12795fed1b559d9d1b5155e8c12ae6fcd/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct  1 12:24:51 np0005464891 systemd[1]: libpod-conmon-c622bc339b0502adf1ac5fdf72526076560f265a93449960fe2c470f85a300cf.scope: Deactivated successfully.
Oct  1 12:24:51 np0005464891 systemd[1]: Started /usr/bin/podman healthcheck run 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe.
Oct  1 12:24:51 np0005464891 podman[152380]: 2025-10-01 16:24:51.341827232 +0000 UTC m=+0.114708387 container init 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: + sudo -E kolla_set_configs
Oct  1 12:24:51 np0005464891 podman[152380]: 2025-10-01 16:24:51.374793835 +0000 UTC m=+0.147674960 container start 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  1 12:24:51 np0005464891 edpm-start-podman-container[152380]: ovn_controller
Oct  1 12:24:51 np0005464891 systemd[1]: Created slice User Slice of UID 0.
Oct  1 12:24:51 np0005464891 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct  1 12:24:51 np0005464891 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct  1 12:24:51 np0005464891 systemd[1]: Starting User Manager for UID 0...
Oct  1 12:24:51 np0005464891 edpm-start-podman-container[152379]: Creating additional drop-in dependency for "ovn_controller" (03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe)
Oct  1 12:24:51 np0005464891 systemd[1]: Reloading.
Oct  1 12:24:51 np0005464891 podman[152452]: 2025-10-01 16:24:51.48651025 +0000 UTC m=+0.044676351 container create 70e72015af848f84f44ffcea3278bcd6660f642590e4c38f9455f7896b1b00de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_poincare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:24:51 np0005464891 podman[152418]: 2025-10-01 16:24:51.491650328 +0000 UTC m=+0.107527522 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  1 12:24:51 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:24:51 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:24:51 np0005464891 podman[152452]: 2025-10-01 16:24:51.470340082 +0000 UTC m=+0.028506203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:24:51 np0005464891 systemd[152451]: Queued start job for default target Main User Target.
Oct  1 12:24:51 np0005464891 systemd[152451]: Created slice User Application Slice.
Oct  1 12:24:51 np0005464891 systemd[152451]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct  1 12:24:51 np0005464891 systemd[152451]: Started Daily Cleanup of User's Temporary Directories.
Oct  1 12:24:51 np0005464891 systemd[152451]: Reached target Paths.
Oct  1 12:24:51 np0005464891 systemd[152451]: Reached target Timers.
Oct  1 12:24:51 np0005464891 systemd[152451]: Starting D-Bus User Message Bus Socket...
Oct  1 12:24:51 np0005464891 systemd[152451]: Starting Create User's Volatile Files and Directories...
Oct  1 12:24:51 np0005464891 systemd[152451]: Finished Create User's Volatile Files and Directories.
Oct  1 12:24:51 np0005464891 systemd[152451]: Listening on D-Bus User Message Bus Socket.
Oct  1 12:24:51 np0005464891 systemd[152451]: Reached target Sockets.
Oct  1 12:24:51 np0005464891 systemd[152451]: Reached target Basic System.
Oct  1 12:24:51 np0005464891 systemd[152451]: Reached target Main User Target.
Oct  1 12:24:51 np0005464891 systemd[152451]: Startup finished in 170ms.
Oct  1 12:24:51 np0005464891 systemd[1]: Started User Manager for UID 0.
Oct  1 12:24:51 np0005464891 systemd[1]: Started ovn_controller container.
Oct  1 12:24:51 np0005464891 systemd[1]: 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe-5bbafec0c1fb1865.service: Main process exited, code=exited, status=1/FAILURE
Oct  1 12:24:51 np0005464891 systemd[1]: 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe-5bbafec0c1fb1865.service: Failed with result 'exit-code'.
Oct  1 12:24:51 np0005464891 systemd[1]: Started libpod-conmon-70e72015af848f84f44ffcea3278bcd6660f642590e4c38f9455f7896b1b00de.scope.
Oct  1 12:24:51 np0005464891 systemd[1]: Started Session c1 of User root.
Oct  1 12:24:51 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:24:51 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0482f0a9bea217a62a2c1f6f8f804e5fd778e0d51a7d5680b9c6a5a3e351f1bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:24:51 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0482f0a9bea217a62a2c1f6f8f804e5fd778e0d51a7d5680b9c6a5a3e351f1bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:24:51 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0482f0a9bea217a62a2c1f6f8f804e5fd778e0d51a7d5680b9c6a5a3e351f1bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:24:51 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0482f0a9bea217a62a2c1f6f8f804e5fd778e0d51a7d5680b9c6a5a3e351f1bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:24:51 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0482f0a9bea217a62a2c1f6f8f804e5fd778e0d51a7d5680b9c6a5a3e351f1bf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:24:51 np0005464891 podman[152452]: 2025-10-01 16:24:51.853732224 +0000 UTC m=+0.411898355 container init 70e72015af848f84f44ffcea3278bcd6660f642590e4c38f9455f7896b1b00de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_poincare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: INFO:__main__:Validating config file
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: INFO:__main__:Writing out command to execute
Oct  1 12:24:51 np0005464891 podman[152452]: 2025-10-01 16:24:51.863348024 +0000 UTC m=+0.421514125 container start 70e72015af848f84f44ffcea3278bcd6660f642590e4c38f9455f7896b1b00de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_poincare, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:24:51 np0005464891 systemd[1]: session-c1.scope: Deactivated successfully.
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: ++ cat /run_command
Oct  1 12:24:51 np0005464891 podman[152452]: 2025-10-01 16:24:51.868708819 +0000 UTC m=+0.426874940 container attach 70e72015af848f84f44ffcea3278bcd6660f642590e4c38f9455f7896b1b00de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: + ARGS=
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: + sudo kolla_copy_cacerts
Oct  1 12:24:51 np0005464891 systemd[1]: Started Session c2 of User root.
Oct  1 12:24:51 np0005464891 systemd[1]: session-c2.scope: Deactivated successfully.
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: + [[ ! -n '' ]]
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: + . kolla_extend_start
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: + umask 0022
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct  1 12:24:51 np0005464891 NetworkManager[44940]: <info>  [1759335891.9575] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct  1 12:24:51 np0005464891 NetworkManager[44940]: <info>  [1759335891.9582] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 12:24:51 np0005464891 NetworkManager[44940]: <info>  [1759335891.9594] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct  1 12:24:51 np0005464891 NetworkManager[44940]: <info>  [1759335891.9599] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct  1 12:24:51 np0005464891 NetworkManager[44940]: <info>  [1759335891.9603] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  1 12:24:51 np0005464891 kernel: br-int: entered promiscuous mode
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00018|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00019|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00021|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00022|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00023|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00024|main|INFO|OVS feature set changed, force recompute.
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  1 12:24:51 np0005464891 NetworkManager[44940]: <info>  [1759335891.9796] manager: (ovn-a98057-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct  1 12:24:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:24:51Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  1 12:24:51 np0005464891 kernel: genev_sys_6081: entered promiscuous mode
Oct  1 12:24:51 np0005464891 systemd-udevd[152615]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:24:51 np0005464891 systemd-udevd[152614]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:24:51 np0005464891 NetworkManager[44940]: <info>  [1759335891.9984] device (genev_sys_6081): carrier: link connected
Oct  1 12:24:51 np0005464891 NetworkManager[44940]: <info>  [1759335891.9987] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Oct  1 12:24:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:52 np0005464891 python3.9[152706]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:24:52 np0005464891 ovs-vsctl[152710]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct  1 12:24:53 np0005464891 strange_poincare[152531]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:24:53 np0005464891 strange_poincare[152531]: --> relative data size: 1.0
Oct  1 12:24:53 np0005464891 strange_poincare[152531]: --> All data devices are unavailable
Oct  1 12:24:53 np0005464891 systemd[1]: libpod-70e72015af848f84f44ffcea3278bcd6660f642590e4c38f9455f7896b1b00de.scope: Deactivated successfully.
Oct  1 12:24:53 np0005464891 systemd[1]: libpod-70e72015af848f84f44ffcea3278bcd6660f642590e4c38f9455f7896b1b00de.scope: Consumed 1.105s CPU time.
Oct  1 12:24:53 np0005464891 podman[152452]: 2025-10-01 16:24:53.042788571 +0000 UTC m=+1.600954682 container died 70e72015af848f84f44ffcea3278bcd6660f642590e4c38f9455f7896b1b00de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 12:24:53 np0005464891 systemd[1]: var-lib-containers-storage-overlay-0482f0a9bea217a62a2c1f6f8f804e5fd778e0d51a7d5680b9c6a5a3e351f1bf-merged.mount: Deactivated successfully.
Oct  1 12:24:53 np0005464891 podman[152452]: 2025-10-01 16:24:53.111904143 +0000 UTC m=+1.670070254 container remove 70e72015af848f84f44ffcea3278bcd6660f642590e4c38f9455f7896b1b00de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:24:53 np0005464891 systemd[1]: libpod-conmon-70e72015af848f84f44ffcea3278bcd6660f642590e4c38f9455f7896b1b00de.scope: Deactivated successfully.
Oct  1 12:24:53 np0005464891 python3.9[152897]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:24:53 np0005464891 ovs-vsctl[152973]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct  1 12:24:53 np0005464891 podman[153066]: 2025-10-01 16:24:53.833898504 +0000 UTC m=+0.057472068 container create 081e1fac45f3b003eac3e96ccbd2d61ce6cc861d292adb6b11ddc65650627a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_fermat, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  1 12:24:53 np0005464891 systemd[1]: Started libpod-conmon-081e1fac45f3b003eac3e96ccbd2d61ce6cc861d292adb6b11ddc65650627a2a.scope.
Oct  1 12:24:53 np0005464891 podman[153066]: 2025-10-01 16:24:53.812814063 +0000 UTC m=+0.036387637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:24:53 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:24:53 np0005464891 podman[153066]: 2025-10-01 16:24:53.929256626 +0000 UTC m=+0.152830260 container init 081e1fac45f3b003eac3e96ccbd2d61ce6cc861d292adb6b11ddc65650627a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_fermat, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 12:24:53 np0005464891 podman[153066]: 2025-10-01 16:24:53.940931872 +0000 UTC m=+0.164505466 container start 081e1fac45f3b003eac3e96ccbd2d61ce6cc861d292adb6b11ddc65650627a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  1 12:24:53 np0005464891 podman[153066]: 2025-10-01 16:24:53.944391846 +0000 UTC m=+0.167965420 container attach 081e1fac45f3b003eac3e96ccbd2d61ce6cc861d292adb6b11ddc65650627a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_fermat, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:24:53 np0005464891 determined_fermat[153127]: 167 167
Oct  1 12:24:53 np0005464891 systemd[1]: libpod-081e1fac45f3b003eac3e96ccbd2d61ce6cc861d292adb6b11ddc65650627a2a.scope: Deactivated successfully.
Oct  1 12:24:53 np0005464891 podman[153066]: 2025-10-01 16:24:53.950928742 +0000 UTC m=+0.174502336 container died 081e1fac45f3b003eac3e96ccbd2d61ce6cc861d292adb6b11ddc65650627a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:24:53 np0005464891 systemd[1]: var-lib-containers-storage-overlay-84f120aa9d0d377a52defa4e694a0220719b4e14b10f02f07c96b268732a2194-merged.mount: Deactivated successfully.
Oct  1 12:24:53 np0005464891 podman[153066]: 2025-10-01 16:24:53.997827173 +0000 UTC m=+0.221400727 container remove 081e1fac45f3b003eac3e96ccbd2d61ce6cc861d292adb6b11ddc65650627a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:24:54 np0005464891 systemd[1]: libpod-conmon-081e1fac45f3b003eac3e96ccbd2d61ce6cc861d292adb6b11ddc65650627a2a.scope: Deactivated successfully.
Oct  1 12:24:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:54 np0005464891 podman[153229]: 2025-10-01 16:24:54.190572582 +0000 UTC m=+0.046064559 container create 95d1568e610cd0eb5d527f3a6e0984dc86af498c9f213cdc356b175a88dc82cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_easley, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:24:54 np0005464891 systemd[1]: Started libpod-conmon-95d1568e610cd0eb5d527f3a6e0984dc86af498c9f213cdc356b175a88dc82cb.scope.
Oct  1 12:24:54 np0005464891 podman[153229]: 2025-10-01 16:24:54.170027966 +0000 UTC m=+0.025519953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:24:54 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:24:54 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a409dbc8a08b3b37274cfed07ba8d8d57f47c7e1ff305a1cad55e4b30a1ef70a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:24:54 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a409dbc8a08b3b37274cfed07ba8d8d57f47c7e1ff305a1cad55e4b30a1ef70a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:24:54 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a409dbc8a08b3b37274cfed07ba8d8d57f47c7e1ff305a1cad55e4b30a1ef70a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:24:54 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a409dbc8a08b3b37274cfed07ba8d8d57f47c7e1ff305a1cad55e4b30a1ef70a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:24:54 np0005464891 podman[153229]: 2025-10-01 16:24:54.30351054 +0000 UTC m=+0.159002537 container init 95d1568e610cd0eb5d527f3a6e0984dc86af498c9f213cdc356b175a88dc82cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_easley, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 12:24:54 np0005464891 podman[153229]: 2025-10-01 16:24:54.311369133 +0000 UTC m=+0.166861120 container start 95d1568e610cd0eb5d527f3a6e0984dc86af498c9f213cdc356b175a88dc82cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_easley, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:24:54 np0005464891 podman[153229]: 2025-10-01 16:24:54.315814394 +0000 UTC m=+0.171306391 container attach 95d1568e610cd0eb5d527f3a6e0984dc86af498c9f213cdc356b175a88dc82cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_easley, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:24:54 np0005464891 python3.9[153237]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:24:54 np0005464891 ovs-vsctl[153253]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct  1 12:24:54 np0005464891 systemd[1]: session-47.scope: Deactivated successfully.
Oct  1 12:24:54 np0005464891 systemd[1]: session-47.scope: Consumed 1min 674ms CPU time.
Oct  1 12:24:54 np0005464891 systemd-logind[801]: Session 47 logged out. Waiting for processes to exit.
Oct  1 12:24:54 np0005464891 systemd-logind[801]: Removed session 47.
Oct  1 12:24:55 np0005464891 amazing_easley[153248]: {
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:    "0": [
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:        {
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "devices": [
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "/dev/loop3"
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            ],
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "lv_name": "ceph_lv0",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "lv_size": "21470642176",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "name": "ceph_lv0",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "tags": {
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.cluster_name": "ceph",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.crush_device_class": "",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.encrypted": "0",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.osd_id": "0",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.type": "block",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.vdo": "0"
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            },
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "type": "block",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "vg_name": "ceph_vg0"
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:        }
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:    ],
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:    "1": [
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:        {
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "devices": [
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "/dev/loop4"
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            ],
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "lv_name": "ceph_lv1",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "lv_size": "21470642176",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "name": "ceph_lv1",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "tags": {
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.cluster_name": "ceph",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.crush_device_class": "",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.encrypted": "0",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.osd_id": "1",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.type": "block",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.vdo": "0"
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            },
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "type": "block",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "vg_name": "ceph_vg1"
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:        }
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:    ],
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:    "2": [
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:        {
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "devices": [
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "/dev/loop5"
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            ],
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "lv_name": "ceph_lv2",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "lv_size": "21470642176",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "name": "ceph_lv2",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "tags": {
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.cluster_name": "ceph",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.crush_device_class": "",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.encrypted": "0",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.osd_id": "2",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.type": "block",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:                "ceph.vdo": "0"
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            },
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "type": "block",
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:            "vg_name": "ceph_vg2"
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:        }
Oct  1 12:24:55 np0005464891 amazing_easley[153248]:    ]
Oct  1 12:24:55 np0005464891 amazing_easley[153248]: }
Oct  1 12:24:55 np0005464891 systemd[1]: libpod-95d1568e610cd0eb5d527f3a6e0984dc86af498c9f213cdc356b175a88dc82cb.scope: Deactivated successfully.
Oct  1 12:24:55 np0005464891 podman[153229]: 2025-10-01 16:24:55.112220319 +0000 UTC m=+0.967712326 container died 95d1568e610cd0eb5d527f3a6e0984dc86af498c9f213cdc356b175a88dc82cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:24:55 np0005464891 systemd[1]: var-lib-containers-storage-overlay-a409dbc8a08b3b37274cfed07ba8d8d57f47c7e1ff305a1cad55e4b30a1ef70a-merged.mount: Deactivated successfully.
Oct  1 12:24:55 np0005464891 podman[153229]: 2025-10-01 16:24:55.186826479 +0000 UTC m=+1.042318456 container remove 95d1568e610cd0eb5d527f3a6e0984dc86af498c9f213cdc356b175a88dc82cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_easley, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 12:24:55 np0005464891 systemd[1]: libpod-conmon-95d1568e610cd0eb5d527f3a6e0984dc86af498c9f213cdc356b175a88dc82cb.scope: Deactivated successfully.
Oct  1 12:24:55 np0005464891 podman[153437]: 2025-10-01 16:24:55.806363016 +0000 UTC m=+0.040835057 container create e7457216642bf8d0d11c8d5be09329d5990acc1a5aa36e43773b31f5dc184900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 12:24:55 np0005464891 systemd[1]: Started libpod-conmon-e7457216642bf8d0d11c8d5be09329d5990acc1a5aa36e43773b31f5dc184900.scope.
Oct  1 12:24:55 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:24:55 np0005464891 podman[153437]: 2025-10-01 16:24:55.883048442 +0000 UTC m=+0.117520503 container init e7457216642bf8d0d11c8d5be09329d5990acc1a5aa36e43773b31f5dc184900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 12:24:55 np0005464891 podman[153437]: 2025-10-01 16:24:55.791311598 +0000 UTC m=+0.025783659 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:24:55 np0005464891 podman[153437]: 2025-10-01 16:24:55.893595498 +0000 UTC m=+0.128067559 container start e7457216642bf8d0d11c8d5be09329d5990acc1a5aa36e43773b31f5dc184900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:24:55 np0005464891 podman[153437]: 2025-10-01 16:24:55.897356529 +0000 UTC m=+0.131828590 container attach e7457216642bf8d0d11c8d5be09329d5990acc1a5aa36e43773b31f5dc184900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:24:55 np0005464891 gracious_perlman[153454]: 167 167
Oct  1 12:24:55 np0005464891 systemd[1]: libpod-e7457216642bf8d0d11c8d5be09329d5990acc1a5aa36e43773b31f5dc184900.scope: Deactivated successfully.
Oct  1 12:24:55 np0005464891 podman[153437]: 2025-10-01 16:24:55.89957527 +0000 UTC m=+0.134047401 container died e7457216642bf8d0d11c8d5be09329d5990acc1a5aa36e43773b31f5dc184900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 12:24:55 np0005464891 systemd[1]: var-lib-containers-storage-overlay-4d1fbce8f706d891c2d2f5d9841d1200f88d81b8eabad7e9b6e62f10e76ca234-merged.mount: Deactivated successfully.
Oct  1 12:24:55 np0005464891 podman[153437]: 2025-10-01 16:24:55.936788757 +0000 UTC m=+0.171260798 container remove e7457216642bf8d0d11c8d5be09329d5990acc1a5aa36e43773b31f5dc184900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:24:55 np0005464891 systemd[1]: libpod-conmon-e7457216642bf8d0d11c8d5be09329d5990acc1a5aa36e43773b31f5dc184900.scope: Deactivated successfully.
Oct  1 12:24:56 np0005464891 podman[153478]: 2025-10-01 16:24:56.102956527 +0000 UTC m=+0.045979476 container create 9df177bc78f0d2f646046354c9476801b04b7fad5592fe84bb29eb69f3b6fb65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_shockley, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:24:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:24:56 np0005464891 systemd[1]: Started libpod-conmon-9df177bc78f0d2f646046354c9476801b04b7fad5592fe84bb29eb69f3b6fb65.scope.
Oct  1 12:24:56 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:24:56 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395643463c9dbccf7bb2ed5e11890ecca8a4acbb43d9598929c6fbd556140989/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:24:56 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395643463c9dbccf7bb2ed5e11890ecca8a4acbb43d9598929c6fbd556140989/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:24:56 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395643463c9dbccf7bb2ed5e11890ecca8a4acbb43d9598929c6fbd556140989/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:24:56 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395643463c9dbccf7bb2ed5e11890ecca8a4acbb43d9598929c6fbd556140989/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:24:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:24:56 np0005464891 podman[153478]: 2025-10-01 16:24:56.088859315 +0000 UTC m=+0.031882284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:24:56 np0005464891 podman[153478]: 2025-10-01 16:24:56.188509473 +0000 UTC m=+0.131532442 container init 9df177bc78f0d2f646046354c9476801b04b7fad5592fe84bb29eb69f3b6fb65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_shockley, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:24:56 np0005464891 podman[153478]: 2025-10-01 16:24:56.20019168 +0000 UTC m=+0.143214629 container start 9df177bc78f0d2f646046354c9476801b04b7fad5592fe84bb29eb69f3b6fb65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_shockley, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 12:24:56 np0005464891 podman[153478]: 2025-10-01 16:24:56.202917533 +0000 UTC m=+0.145940472 container attach 9df177bc78f0d2f646046354c9476801b04b7fad5592fe84bb29eb69f3b6fb65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_shockley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:24:57 np0005464891 zen_shockley[153494]: {
Oct  1 12:24:57 np0005464891 zen_shockley[153494]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:24:57 np0005464891 zen_shockley[153494]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:24:57 np0005464891 zen_shockley[153494]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:24:57 np0005464891 zen_shockley[153494]:        "osd_id": 2,
Oct  1 12:24:57 np0005464891 zen_shockley[153494]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:24:57 np0005464891 zen_shockley[153494]:        "type": "bluestore"
Oct  1 12:24:57 np0005464891 zen_shockley[153494]:    },
Oct  1 12:24:57 np0005464891 zen_shockley[153494]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:24:57 np0005464891 zen_shockley[153494]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:24:57 np0005464891 zen_shockley[153494]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:24:57 np0005464891 zen_shockley[153494]:        "osd_id": 0,
Oct  1 12:24:57 np0005464891 zen_shockley[153494]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:24:57 np0005464891 zen_shockley[153494]:        "type": "bluestore"
Oct  1 12:24:57 np0005464891 zen_shockley[153494]:    },
Oct  1 12:24:57 np0005464891 zen_shockley[153494]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:24:57 np0005464891 zen_shockley[153494]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:24:57 np0005464891 zen_shockley[153494]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:24:57 np0005464891 zen_shockley[153494]:        "osd_id": 1,
Oct  1 12:24:57 np0005464891 zen_shockley[153494]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:24:57 np0005464891 zen_shockley[153494]:        "type": "bluestore"
Oct  1 12:24:57 np0005464891 zen_shockley[153494]:    }
Oct  1 12:24:57 np0005464891 zen_shockley[153494]: }
Oct  1 12:24:57 np0005464891 systemd[1]: libpod-9df177bc78f0d2f646046354c9476801b04b7fad5592fe84bb29eb69f3b6fb65.scope: Deactivated successfully.
Oct  1 12:24:57 np0005464891 podman[153478]: 2025-10-01 16:24:57.150483712 +0000 UTC m=+1.093506671 container died 9df177bc78f0d2f646046354c9476801b04b7fad5592fe84bb29eb69f3b6fb65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_shockley, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 12:24:57 np0005464891 systemd[1]: var-lib-containers-storage-overlay-395643463c9dbccf7bb2ed5e11890ecca8a4acbb43d9598929c6fbd556140989-merged.mount: Deactivated successfully.
Oct  1 12:24:57 np0005464891 podman[153478]: 2025-10-01 16:24:57.211294649 +0000 UTC m=+1.154317598 container remove 9df177bc78f0d2f646046354c9476801b04b7fad5592fe84bb29eb69f3b6fb65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 12:24:57 np0005464891 systemd[1]: libpod-conmon-9df177bc78f0d2f646046354c9476801b04b7fad5592fe84bb29eb69f3b6fb65.scope: Deactivated successfully.
Oct  1 12:24:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:24:57 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:24:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:24:57 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:24:57 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev a88fec99-2219-4ed5-9d85-2ef4c3915633 does not exist
Oct  1 12:24:57 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev cfbf3d61-6595-4849-a225-437c388b4884 does not exist
Oct  1 12:24:57 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:24:57 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:24:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:00 np0005464891 systemd-logind[801]: New session 49 of user zuul.
Oct  1 12:25:00 np0005464891 systemd[1]: Started Session 49 of User zuul.
Oct  1 12:25:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:25:01 np0005464891 python3.9[153742]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:25:02 np0005464891 systemd[1]: Stopping User Manager for UID 0...
Oct  1 12:25:02 np0005464891 systemd[152451]: Activating special unit Exit the Session...
Oct  1 12:25:02 np0005464891 systemd[152451]: Stopped target Main User Target.
Oct  1 12:25:02 np0005464891 systemd[152451]: Stopped target Basic System.
Oct  1 12:25:02 np0005464891 systemd[152451]: Stopped target Paths.
Oct  1 12:25:02 np0005464891 systemd[152451]: Stopped target Sockets.
Oct  1 12:25:02 np0005464891 systemd[152451]: Stopped target Timers.
Oct  1 12:25:02 np0005464891 systemd[152451]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  1 12:25:02 np0005464891 systemd[152451]: Closed D-Bus User Message Bus Socket.
Oct  1 12:25:02 np0005464891 systemd[152451]: Stopped Create User's Volatile Files and Directories.
Oct  1 12:25:02 np0005464891 systemd[152451]: Removed slice User Application Slice.
Oct  1 12:25:02 np0005464891 systemd[152451]: Reached target Shutdown.
Oct  1 12:25:02 np0005464891 systemd[152451]: Finished Exit the Session.
Oct  1 12:25:02 np0005464891 systemd[152451]: Reached target Exit the Session.
Oct  1 12:25:02 np0005464891 systemd[1]: user@0.service: Deactivated successfully.
Oct  1 12:25:02 np0005464891 systemd[1]: Stopped User Manager for UID 0.
Oct  1 12:25:02 np0005464891 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct  1 12:25:02 np0005464891 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct  1 12:25:02 np0005464891 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct  1 12:25:02 np0005464891 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct  1 12:25:02 np0005464891 systemd[1]: Removed slice User Slice of UID 0.
Oct  1 12:25:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:02 np0005464891 python3.9[153900]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:25:03 np0005464891 python3.9[154052]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:25:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:04 np0005464891 python3.9[154206]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:25:05 np0005464891 python3.9[154358]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:25:06 np0005464891 python3.9[154510]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:25:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:25:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:06 np0005464891 python3.9[154660]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:25:07 np0005464891 python3.9[154812]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct  1 12:25:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:09 np0005464891 python3.9[154962]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:25:09 np0005464891 python3.9[155083]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759335908.4594185-86-260608904670760/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:25:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:10 np0005464891 python3.9[155234]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:25:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:25:11 np0005464891 python3.9[155355]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759335910.1462224-101-41293896420119/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:25:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:25:11
Oct  1 12:25:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:25:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:25:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'vms', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'volumes', 'default.rgw.control']
Oct  1 12:25:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:25:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:25:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:25:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:25:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:25:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:25:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:25:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:25:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:25:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:25:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:25:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:25:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:25:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:25:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:25:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:25:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:25:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:12 np0005464891 python3.9[155507]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 12:25:13 np0005464891 python3.9[155591]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 12:25:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:15 np0005464891 python3.9[155744]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  1 12:25:16 np0005464891 python3.9[155897]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:25:16 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 12:25:16 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5445 writes, 23K keys, 5445 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5445 writes, 773 syncs, 7.04 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5445 writes, 23K keys, 5445 commit groups, 1.0 writes per commit group, ingest: 18.44 MB, 0.03 MB/s#012Interval WAL: 5445 writes, 773 syncs, 7.04 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5605ab5131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5605ab5131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct  1 12:25:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:25:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:16 np0005464891 python3.9[156018]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759335915.5691462-138-92811025590135/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:25:17 np0005464891 python3.9[156168]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:25:17 np0005464891 python3.9[156289]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759335916.7552102-138-149112459756891/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:25:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:19 np0005464891 python3.9[156439]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:25:19 np0005464891 python3.9[156560]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759335918.531511-182-231581290798636/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:25:20 np0005464891 python3.9[156710]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:25:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:20 np0005464891 python3.9[156831]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759335919.6698864-182-276906470116634/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:25:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:25:21 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 12:25:21 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 6525 writes, 27K keys, 6525 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6525 writes, 1136 syncs, 5.74 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6525 writes, 27K keys, 6525 commit groups, 1.0 writes per commit group, ingest: 19.35 MB, 0.03 MB/s#012Interval WAL: 6525 writes, 1136 syncs, 5.74 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a66b2091f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a66b2091f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct  1 12:25:21 np0005464891 python3.9[156981]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:25:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:25:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:25:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:25:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:25:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:25:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:25:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:25:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:25:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:25:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:25:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:25:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:25:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:25:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:25:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:25:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:25:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:25:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:25:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:25:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:25:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:25:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:25:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:25:21 np0005464891 ovn_controller[152409]: 2025-10-01T16:25:21Z|00025|memory|INFO|16384 kB peak resident set size after 30.0 seconds
Oct  1 12:25:21 np0005464891 ovn_controller[152409]: 2025-10-01T16:25:21Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Oct  1 12:25:21 np0005464891 podman[157107]: 2025-10-01 16:25:21.997423966 +0000 UTC m=+0.103138644 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller)
Oct  1 12:25:22 np0005464891 python3.9[157154]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:25:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:22 np0005464891 python3.9[157313]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:25:23 np0005464891 python3.9[157391]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:25:24 np0005464891 python3.9[157543]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:25:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:24 np0005464891 python3.9[157621]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:25:25 np0005464891 python3.9[157773]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:25:25 np0005464891 python3.9[157925]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:25:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:25:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:26 np0005464891 python3.9[158003]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:25:27 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 12:25:27 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.3 total, 600.0 interval#012Cumulative writes: 5480 writes, 23K keys, 5480 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5480 writes, 779 syncs, 7.03 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5480 writes, 23K keys, 5480 commit groups, 1.0 writes per commit group, ingest: 18.46 MB, 0.03 MB/s#012Interval WAL: 5480 writes, 779 syncs, 7.03 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56404cdaf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56404cdaf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct  1 12:25:27 np0005464891 python3.9[158155]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:25:27 np0005464891 python3.9[158233]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:25:27 np0005464891 ceph-mgr[74592]: [devicehealth INFO root] Check health
Oct  1 12:25:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:28 np0005464891 python3.9[158385]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:25:28 np0005464891 systemd[1]: Reloading.
Oct  1 12:25:28 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:25:28 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:25:29 np0005464891 python3.9[158574]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:25:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:30 np0005464891 python3.9[158652]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:25:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:25:31 np0005464891 python3.9[158804]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:25:31 np0005464891 python3.9[158882]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:25:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:32 np0005464891 python3.9[159034]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:25:32 np0005464891 systemd[1]: Reloading.
Oct  1 12:25:32 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:25:32 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:25:32 np0005464891 systemd[1]: Starting Create netns directory...
Oct  1 12:25:32 np0005464891 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  1 12:25:32 np0005464891 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  1 12:25:32 np0005464891 systemd[1]: Finished Create netns directory.
Oct  1 12:25:33 np0005464891 python3.9[159226]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:25:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:34 np0005464891 python3.9[159378]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:25:35 np0005464891 python3.9[159501]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759335933.967028-333-101393587141491/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:25:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:25:36 np0005464891 python3.9[159653]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:25:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:36 np0005464891 python3.9[159805]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:25:37 np0005464891 python3.9[159928]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759335936.3947983-358-148082574660898/.source.json _original_basename=.a297euk2 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:25:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:38 np0005464891 python3.9[160080]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:25:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:40 np0005464891 python3.9[160507]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct  1 12:25:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:25:41 np0005464891 python3.9[160659]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  1 12:25:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:25:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:25:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:25:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:25:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:25:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:25:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:42 np0005464891 python3.9[160811]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  1 12:25:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:44 np0005464891 python3[160989]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  1 12:25:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:25:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:25:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:54 np0005464891 podman[161069]: 2025-10-01 16:25:54.905305532 +0000 UTC m=+2.005566721 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  1 12:25:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:25:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:25:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  1 12:25:58 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 12:25:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:25:58 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:25:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:25:58 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:25:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:26:00 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:26:00 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 21c94c99-9c6d-4392-8ed0-24ab0e6ba48c does not exist
Oct  1 12:26:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:26:00 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev ea41f8a2-5ac1-41d5-8b12-2e8d5992bf85 does not exist
Oct  1 12:26:00 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 0dbc0acc-8946-40a3-93ce-d435120c1544 does not exist
Oct  1 12:26:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:26:00 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:26:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:26:00 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:26:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:26:00 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:26:01 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 12:26:01 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:26:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:26:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:26:03 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:26:03 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:26:03 np0005464891 podman[161004]: 2025-10-01 16:26:03.172607618 +0000 UTC m=+18.593297234 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 12:26:03 np0005464891 podman[161408]: 2025-10-01 16:26:03.274095917 +0000 UTC m=+0.034413710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:26:03 np0005464891 podman[161408]: 2025-10-01 16:26:03.572067799 +0000 UTC m=+0.332385562 container create caf39e355a39c8c0f581bed41ba802dd78ed4ddb7214b68a34f41f014bb51ad3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chandrasekhar, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  1 12:26:03 np0005464891 systemd[1]: Started libpod-conmon-caf39e355a39c8c0f581bed41ba802dd78ed4ddb7214b68a34f41f014bb51ad3.scope.
Oct  1 12:26:03 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:26:03 np0005464891 podman[161408]: 2025-10-01 16:26:03.810086035 +0000 UTC m=+0.570403848 container init caf39e355a39c8c0f581bed41ba802dd78ed4ddb7214b68a34f41f014bb51ad3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chandrasekhar, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:26:03 np0005464891 podman[161408]: 2025-10-01 16:26:03.822941855 +0000 UTC m=+0.583259618 container start caf39e355a39c8c0f581bed41ba802dd78ed4ddb7214b68a34f41f014bb51ad3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chandrasekhar, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:26:03 np0005464891 strange_chandrasekhar[161452]: 167 167
Oct  1 12:26:03 np0005464891 systemd[1]: libpod-caf39e355a39c8c0f581bed41ba802dd78ed4ddb7214b68a34f41f014bb51ad3.scope: Deactivated successfully.
Oct  1 12:26:03 np0005464891 conmon[161452]: conmon caf39e355a39c8c0f581 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-caf39e355a39c8c0f581bed41ba802dd78ed4ddb7214b68a34f41f014bb51ad3.scope/container/memory.events
Oct  1 12:26:03 np0005464891 podman[161438]: 2025-10-01 16:26:03.761355444 +0000 UTC m=+0.461297329 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 12:26:03 np0005464891 podman[161408]: 2025-10-01 16:26:03.888519115 +0000 UTC m=+0.648836918 container attach caf39e355a39c8c0f581bed41ba802dd78ed4ddb7214b68a34f41f014bb51ad3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chandrasekhar, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:26:03 np0005464891 podman[161408]: 2025-10-01 16:26:03.890487398 +0000 UTC m=+0.650805161 container died caf39e355a39c8c0f581bed41ba802dd78ed4ddb7214b68a34f41f014bb51ad3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chandrasekhar, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 12:26:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:26:04 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ddba53a6b5d34b0241b2d4da4a67c379961c6b31a2684c311d9289b865907632-merged.mount: Deactivated successfully.
Oct  1 12:26:04 np0005464891 podman[161408]: 2025-10-01 16:26:04.730789269 +0000 UTC m=+1.491107002 container remove caf39e355a39c8c0f581bed41ba802dd78ed4ddb7214b68a34f41f014bb51ad3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  1 12:26:04 np0005464891 podman[161438]: 2025-10-01 16:26:04.785910694 +0000 UTC m=+1.485852569 container create bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:26:04 np0005464891 python3[160989]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 12:26:04 np0005464891 systemd[1]: libpod-conmon-caf39e355a39c8c0f581bed41ba802dd78ed4ddb7214b68a34f41f014bb51ad3.scope: Deactivated successfully.
Oct  1 12:26:04 np0005464891 podman[161490]: 2025-10-01 16:26:04.933541223 +0000 UTC m=+0.052355670 container create d8509fd428f678b6162ca5a1d7f271ea2f197740066d282aa5709468895fccf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 12:26:04 np0005464891 systemd[1]: Started libpod-conmon-d8509fd428f678b6162ca5a1d7f271ea2f197740066d282aa5709468895fccf2.scope.
Oct  1 12:26:04 np0005464891 podman[161490]: 2025-10-01 16:26:04.904222962 +0000 UTC m=+0.023037429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:26:05 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:26:05 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac1d6027f01baf3cdd6fbf366bbf32a71c2dde6dea980078acee48e8ce7abeb1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:26:05 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac1d6027f01baf3cdd6fbf366bbf32a71c2dde6dea980078acee48e8ce7abeb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:26:05 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac1d6027f01baf3cdd6fbf366bbf32a71c2dde6dea980078acee48e8ce7abeb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:26:05 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac1d6027f01baf3cdd6fbf366bbf32a71c2dde6dea980078acee48e8ce7abeb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:26:05 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac1d6027f01baf3cdd6fbf366bbf32a71c2dde6dea980078acee48e8ce7abeb1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:26:05 np0005464891 podman[161490]: 2025-10-01 16:26:05.044428988 +0000 UTC m=+0.163243455 container init d8509fd428f678b6162ca5a1d7f271ea2f197740066d282aa5709468895fccf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_robinson, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:26:05 np0005464891 podman[161490]: 2025-10-01 16:26:05.056436006 +0000 UTC m=+0.175250453 container start d8509fd428f678b6162ca5a1d7f271ea2f197740066d282aa5709468895fccf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_robinson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:26:05 np0005464891 podman[161490]: 2025-10-01 16:26:05.137700634 +0000 UTC m=+0.256515101 container attach d8509fd428f678b6162ca5a1d7f271ea2f197740066d282aa5709468895fccf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_robinson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:26:05 np0005464891 python3.9[161675]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:26:06 np0005464891 festive_robinson[161540]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:26:06 np0005464891 festive_robinson[161540]: --> relative data size: 1.0
Oct  1 12:26:06 np0005464891 festive_robinson[161540]: --> All data devices are unavailable
Oct  1 12:26:06 np0005464891 systemd[1]: libpod-d8509fd428f678b6162ca5a1d7f271ea2f197740066d282aa5709468895fccf2.scope: Deactivated successfully.
Oct  1 12:26:06 np0005464891 systemd[1]: libpod-d8509fd428f678b6162ca5a1d7f271ea2f197740066d282aa5709468895fccf2.scope: Consumed 1.009s CPU time.
Oct  1 12:26:06 np0005464891 podman[161490]: 2025-10-01 16:26:06.137138727 +0000 UTC m=+1.255953184 container died d8509fd428f678b6162ca5a1d7f271ea2f197740066d282aa5709468895fccf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_robinson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:26:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:26:06 np0005464891 python3.9[161865]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:26:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:26:07 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ac1d6027f01baf3cdd6fbf366bbf32a71c2dde6dea980078acee48e8ce7abeb1-merged.mount: Deactivated successfully.
Oct  1 12:26:07 np0005464891 python3.9[161942]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:26:07 np0005464891 python3.9[162094]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759335967.2420495-446-117882534840538/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:26:08 np0005464891 podman[161490]: 2025-10-01 16:26:08.030242618 +0000 UTC m=+3.149057105 container remove d8509fd428f678b6162ca5a1d7f271ea2f197740066d282aa5709468895fccf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_robinson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 12:26:08 np0005464891 systemd[1]: libpod-conmon-d8509fd428f678b6162ca5a1d7f271ea2f197740066d282aa5709468895fccf2.scope: Deactivated successfully.
Oct  1 12:26:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:26:08 np0005464891 python3.9[162232]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  1 12:26:08 np0005464891 systemd[1]: Reloading.
Oct  1 12:26:08 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:26:08 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:26:08 np0005464891 podman[162313]: 2025-10-01 16:26:08.690740793 +0000 UTC m=+0.037463183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:26:08 np0005464891 podman[162313]: 2025-10-01 16:26:08.842849423 +0000 UTC m=+0.189571793 container create 8f7652207de179bc09804e0b37184da3884398cf18b4ee3d7718df96f27af12f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:26:09 np0005464891 systemd[1]: Started libpod-conmon-8f7652207de179bc09804e0b37184da3884398cf18b4ee3d7718df96f27af12f.scope.
Oct  1 12:26:09 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:26:09 np0005464891 podman[162313]: 2025-10-01 16:26:09.094885991 +0000 UTC m=+0.441608401 container init 8f7652207de179bc09804e0b37184da3884398cf18b4ee3d7718df96f27af12f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:26:09 np0005464891 podman[162313]: 2025-10-01 16:26:09.103183188 +0000 UTC m=+0.449905558 container start 8f7652207de179bc09804e0b37184da3884398cf18b4ee3d7718df96f27af12f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct  1 12:26:09 np0005464891 gallant_liskov[162363]: 167 167
Oct  1 12:26:09 np0005464891 systemd[1]: libpod-8f7652207de179bc09804e0b37184da3884398cf18b4ee3d7718df96f27af12f.scope: Deactivated successfully.
Oct  1 12:26:09 np0005464891 podman[162313]: 2025-10-01 16:26:09.127423829 +0000 UTC m=+0.474146269 container attach 8f7652207de179bc09804e0b37184da3884398cf18b4ee3d7718df96f27af12f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 12:26:09 np0005464891 podman[162313]: 2025-10-01 16:26:09.12894352 +0000 UTC m=+0.475665910 container died 8f7652207de179bc09804e0b37184da3884398cf18b4ee3d7718df96f27af12f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 12:26:09 np0005464891 systemd[1]: var-lib-containers-storage-overlay-69cb7494f0c0722b35eb635facce7ca250c53150d0297d63ab883fd14e10a5f1-merged.mount: Deactivated successfully.
Oct  1 12:26:09 np0005464891 podman[162313]: 2025-10-01 16:26:09.233289829 +0000 UTC m=+0.580012199 container remove 8f7652207de179bc09804e0b37184da3884398cf18b4ee3d7718df96f27af12f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:26:09 np0005464891 systemd[1]: libpod-conmon-8f7652207de179bc09804e0b37184da3884398cf18b4ee3d7718df96f27af12f.scope: Deactivated successfully.
Oct  1 12:26:09 np0005464891 podman[162461]: 2025-10-01 16:26:09.409629691 +0000 UTC m=+0.055009593 container create c0bfc1a7ea9b0ae0f67f8b285080cd205336d1532b474586cfa1fcbd4016c07b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_murdock, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  1 12:26:09 np0005464891 podman[162461]: 2025-10-01 16:26:09.38325416 +0000 UTC m=+0.028634102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:26:09 np0005464891 systemd[1]: Started libpod-conmon-c0bfc1a7ea9b0ae0f67f8b285080cd205336d1532b474586cfa1fcbd4016c07b.scope.
Oct  1 12:26:09 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:26:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60aed3d33a03ca246ba6fd28a400575c099cc7e0c06d3880ac7014156005fe67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:26:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60aed3d33a03ca246ba6fd28a400575c099cc7e0c06d3880ac7014156005fe67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:26:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60aed3d33a03ca246ba6fd28a400575c099cc7e0c06d3880ac7014156005fe67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:26:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60aed3d33a03ca246ba6fd28a400575c099cc7e0c06d3880ac7014156005fe67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:26:09 np0005464891 podman[162461]: 2025-10-01 16:26:09.62875647 +0000 UTC m=+0.274136422 container init c0bfc1a7ea9b0ae0f67f8b285080cd205336d1532b474586cfa1fcbd4016c07b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:26:09 np0005464891 podman[162461]: 2025-10-01 16:26:09.635423552 +0000 UTC m=+0.280803494 container start c0bfc1a7ea9b0ae0f67f8b285080cd205336d1532b474586cfa1fcbd4016c07b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:26:09 np0005464891 python3.9[162464]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:26:09 np0005464891 podman[162461]: 2025-10-01 16:26:09.710989714 +0000 UTC m=+0.356369626 container attach c0bfc1a7ea9b0ae0f67f8b285080cd205336d1532b474586cfa1fcbd4016c07b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_murdock, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:26:09 np0005464891 systemd[1]: Reloading.
Oct  1 12:26:09 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:26:09 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:26:10 np0005464891 systemd[1]: Starting ovn_metadata_agent container...
Oct  1 12:26:10 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:26:10 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21fb8bfc36708b31a758d2fe7bfc8b43169ba8891c422aa82ac80cd8b0802bf9/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct  1 12:26:10 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21fb8bfc36708b31a758d2fe7bfc8b43169ba8891c422aa82ac80cd8b0802bf9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 12:26:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:26:10 np0005464891 systemd[1]: Started /usr/bin/podman healthcheck run bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d.
Oct  1 12:26:10 np0005464891 podman[162525]: 2025-10-01 16:26:10.268153489 +0000 UTC m=+0.203111635 container init bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: + sudo -E kolla_set_configs
Oct  1 12:26:10 np0005464891 podman[162525]: 2025-10-01 16:26:10.301617732 +0000 UTC m=+0.236575878 container start bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:26:10 np0005464891 edpm-start-podman-container[162525]: ovn_metadata_agent
Oct  1 12:26:10 np0005464891 podman[162547]: 2025-10-01 16:26:10.385178192 +0000 UTC m=+0.065924740 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Oct  1 12:26:10 np0005464891 edpm-start-podman-container[162524]: Creating additional drop-in dependency for "ovn_metadata_agent" (bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d)
Oct  1 12:26:10 np0005464891 systemd[1]: Reloading.
Oct  1 12:26:10 np0005464891 objective_murdock[162480]: {
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:    "0": [
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:        {
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "devices": [
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "/dev/loop3"
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            ],
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "lv_name": "ceph_lv0",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "lv_size": "21470642176",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "name": "ceph_lv0",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "tags": {
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.cluster_name": "ceph",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.crush_device_class": "",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.encrypted": "0",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.osd_id": "0",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.type": "block",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.vdo": "0"
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            },
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "type": "block",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "vg_name": "ceph_vg0"
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:        }
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:    ],
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:    "1": [
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:        {
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "devices": [
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "/dev/loop4"
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            ],
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "lv_name": "ceph_lv1",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "lv_size": "21470642176",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "name": "ceph_lv1",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "tags": {
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.cluster_name": "ceph",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.crush_device_class": "",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.encrypted": "0",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.osd_id": "1",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.type": "block",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.vdo": "0"
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            },
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "type": "block",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "vg_name": "ceph_vg1"
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:        }
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:    ],
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:    "2": [
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:        {
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "devices": [
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "/dev/loop5"
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            ],
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "lv_name": "ceph_lv2",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "lv_size": "21470642176",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "name": "ceph_lv2",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "tags": {
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.cluster_name": "ceph",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.crush_device_class": "",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.encrypted": "0",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.osd_id": "2",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.type": "block",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:                "ceph.vdo": "0"
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            },
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "type": "block",
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:            "vg_name": "ceph_vg2"
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:        }
Oct  1 12:26:10 np0005464891 objective_murdock[162480]:    ]
Oct  1 12:26:10 np0005464891 objective_murdock[162480]: }
Oct  1 12:26:10 np0005464891 podman[162461]: 2025-10-01 16:26:10.442667161 +0000 UTC m=+1.088047053 container died c0bfc1a7ea9b0ae0f67f8b285080cd205336d1532b474586cfa1fcbd4016c07b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: INFO:__main__:Validating config file
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: INFO:__main__:Copying service configuration files
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: INFO:__main__:Writing out command to execute
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: INFO:__main__:Setting permission for /var/lib/neutron
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct  1 12:26:10 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct  1 12:26:10 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: ++ cat /run_command
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: + CMD=neutron-ovn-metadata-agent
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: + ARGS=
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: + sudo kolla_copy_cacerts
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: + [[ ! -n '' ]]
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: + . kolla_extend_start
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: Running command: 'neutron-ovn-metadata-agent'
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: + umask 0022
Oct  1 12:26:10 np0005464891 ovn_metadata_agent[162541]: + exec neutron-ovn-metadata-agent
Oct  1 12:26:10 np0005464891 systemd[1]: Started ovn_metadata_agent container.
Oct  1 12:26:10 np0005464891 systemd[1]: libpod-c0bfc1a7ea9b0ae0f67f8b285080cd205336d1532b474586cfa1fcbd4016c07b.scope: Deactivated successfully.
Oct  1 12:26:10 np0005464891 systemd[1]: var-lib-containers-storage-overlay-60aed3d33a03ca246ba6fd28a400575c099cc7e0c06d3880ac7014156005fe67-merged.mount: Deactivated successfully.
Oct  1 12:26:10 np0005464891 podman[162461]: 2025-10-01 16:26:10.973390864 +0000 UTC m=+1.618770776 container remove c0bfc1a7ea9b0ae0f67f8b285080cd205336d1532b474586cfa1fcbd4016c07b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_murdock, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:26:10 np0005464891 systemd[1]: libpod-conmon-c0bfc1a7ea9b0ae0f67f8b285080cd205336d1532b474586cfa1fcbd4016c07b.scope: Deactivated successfully.
Oct  1 12:26:11 np0005464891 systemd[1]: session-49.scope: Deactivated successfully.
Oct  1 12:26:11 np0005464891 systemd[1]: session-49.scope: Consumed 58.350s CPU time.
Oct  1 12:26:11 np0005464891 systemd-logind[801]: Session 49 logged out. Waiting for processes to exit.
Oct  1 12:26:11 np0005464891 systemd-logind[801]: Removed session 49.
Oct  1 12:26:11 np0005464891 podman[162809]: 2025-10-01 16:26:11.600872587 +0000 UTC m=+0.059605448 container create 3c827c6eb08892970e665f888eb4791df9159535ef395bebc3f0c31fa3a358c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_snyder, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:26:11 np0005464891 systemd[1]: Started libpod-conmon-3c827c6eb08892970e665f888eb4791df9159535ef395bebc3f0c31fa3a358c4.scope.
Oct  1 12:26:11 np0005464891 podman[162809]: 2025-10-01 16:26:11.568346569 +0000 UTC m=+0.027079520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:26:11 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:26:11 np0005464891 podman[162809]: 2025-10-01 16:26:11.697562115 +0000 UTC m=+0.156295006 container init 3c827c6eb08892970e665f888eb4791df9159535ef395bebc3f0c31fa3a358c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 12:26:11 np0005464891 podman[162809]: 2025-10-01 16:26:11.704649769 +0000 UTC m=+0.163382670 container start 3c827c6eb08892970e665f888eb4791df9159535ef395bebc3f0c31fa3a358c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  1 12:26:11 np0005464891 podman[162809]: 2025-10-01 16:26:11.709078049 +0000 UTC m=+0.167810940 container attach 3c827c6eb08892970e665f888eb4791df9159535ef395bebc3f0c31fa3a358c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Oct  1 12:26:11 np0005464891 inspiring_snyder[162826]: 167 167
Oct  1 12:26:11 np0005464891 systemd[1]: libpod-3c827c6eb08892970e665f888eb4791df9159535ef395bebc3f0c31fa3a358c4.scope: Deactivated successfully.
Oct  1 12:26:11 np0005464891 podman[162809]: 2025-10-01 16:26:11.713880761 +0000 UTC m=+0.172613632 container died 3c827c6eb08892970e665f888eb4791df9159535ef395bebc3f0c31fa3a358c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:26:11 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ece987557b35fe4112cd3dcbe9f02ddd4f9e1f60192123544eafa8206986a88d-merged.mount: Deactivated successfully.
Oct  1 12:26:11 np0005464891 podman[162809]: 2025-10-01 16:26:11.745623387 +0000 UTC m=+0.204356258 container remove 3c827c6eb08892970e665f888eb4791df9159535ef395bebc3f0c31fa3a358c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_snyder, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 12:26:11 np0005464891 systemd[1]: libpod-conmon-3c827c6eb08892970e665f888eb4791df9159535ef395bebc3f0c31fa3a358c4.scope: Deactivated successfully.
Oct  1 12:26:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:26:11 np0005464891 podman[162849]: 2025-10-01 16:26:11.936837085 +0000 UTC m=+0.056739370 container create 87ba6261af2ba3ae610325fd2a68a6fc91feb7f02c14f54d70122ab89beb82fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:26:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:26:11
Oct  1 12:26:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:26:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:26:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'vms', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'images', 'backups']
Oct  1 12:26:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:26:11 np0005464891 systemd[1]: Started libpod-conmon-87ba6261af2ba3ae610325fd2a68a6fc91feb7f02c14f54d70122ab89beb82fa.scope.
Oct  1 12:26:12 np0005464891 podman[162849]: 2025-10-01 16:26:11.909424147 +0000 UTC m=+0.029326442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:26:12 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:26:12 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/025be02f4409250ecc88b68927d7be4d49227c4031f6416505d478d31252581e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:26:12 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/025be02f4409250ecc88b68927d7be4d49227c4031f6416505d478d31252581e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:26:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:26:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:26:12 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/025be02f4409250ecc88b68927d7be4d49227c4031f6416505d478d31252581e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:26:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:26:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:26:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:26:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:26:12 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/025be02f4409250ecc88b68927d7be4d49227c4031f6416505d478d31252581e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:26:12 np0005464891 podman[162849]: 2025-10-01 16:26:12.099307929 +0000 UTC m=+0.219210214 container init 87ba6261af2ba3ae610325fd2a68a6fc91feb7f02c14f54d70122ab89beb82fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 12:26:12 np0005464891 podman[162849]: 2025-10-01 16:26:12.106134345 +0000 UTC m=+0.226036630 container start 87ba6261af2ba3ae610325fd2a68a6fc91feb7f02c14f54d70122ab89beb82fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_albattani, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 12:26:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:26:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:26:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:26:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:26:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:26:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:26:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:26:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:26:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:26:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:26:12 np0005464891 podman[162849]: 2025-10-01 16:26:12.185607163 +0000 UTC m=+0.305509438 container attach 87ba6261af2ba3ae610325fd2a68a6fc91feb7f02c14f54d70122ab89beb82fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 12:26:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.375 162546 INFO neutron.common.config [-] Logging enabled!#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.376 162546 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.376 162546 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.376 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.376 162546 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.376 162546 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.377 162546 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.377 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.377 162546 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.377 162546 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.377 162546 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.377 162546 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.377 162546 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.378 162546 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.378 162546 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.378 162546 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.378 162546 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.378 162546 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.378 162546 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.378 162546 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.379 162546 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.379 162546 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.379 162546 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.379 162546 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.379 162546 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.379 162546 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.379 162546 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.380 162546 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.380 162546 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.380 162546 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.380 162546 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.380 162546 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.380 162546 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.380 162546 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.381 162546 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.381 162546 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.381 162546 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.381 162546 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.381 162546 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.381 162546 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.381 162546 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.382 162546 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.382 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.382 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.382 162546 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.382 162546 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.382 162546 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.382 162546 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.382 162546 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.383 162546 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.383 162546 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.383 162546 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.383 162546 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.383 162546 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.383 162546 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.383 162546 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.384 162546 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.384 162546 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.384 162546 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.384 162546 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.384 162546 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.384 162546 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.384 162546 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.384 162546 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.385 162546 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.385 162546 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.385 162546 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.385 162546 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.385 162546 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.385 162546 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.385 162546 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.386 162546 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.386 162546 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.386 162546 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.386 162546 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.386 162546 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.386 162546 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.387 162546 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.387 162546 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.387 162546 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.387 162546 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.387 162546 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.387 162546 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.387 162546 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.387 162546 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.388 162546 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.388 162546 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.388 162546 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.388 162546 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.388 162546 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.388 162546 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.388 162546 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.389 162546 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.389 162546 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.389 162546 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.389 162546 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.389 162546 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.389 162546 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.389 162546 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.389 162546 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.390 162546 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.390 162546 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.390 162546 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.390 162546 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.390 162546 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.390 162546 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.390 162546 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.391 162546 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.391 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.391 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.391 162546 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.391 162546 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.391 162546 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.391 162546 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.392 162546 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.392 162546 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.392 162546 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.392 162546 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.392 162546 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.392 162546 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.392 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.393 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.393 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.393 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.393 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.393 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.393 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.393 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.394 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.394 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.394 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.394 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.394 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.394 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.394 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.395 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.395 162546 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.395 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.395 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.395 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.395 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.395 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.396 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.396 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.396 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.396 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.396 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.396 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.396 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.397 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.397 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.397 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.397 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.397 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.397 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.397 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.397 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.398 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.398 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.398 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.398 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.398 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.398 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.398 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.399 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.399 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.399 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.399 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.399 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.399 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.399 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.400 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.400 162546 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.400 162546 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.400 162546 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.400 162546 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.400 162546 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.401 162546 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.401 162546 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.401 162546 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.401 162546 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.402 162546 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.402 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.402 162546 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.402 162546 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.403 162546 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.403 162546 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.403 162546 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.403 162546 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.404 162546 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.404 162546 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.404 162546 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.404 162546 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.405 162546 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.405 162546 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.405 162546 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.405 162546 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.405 162546 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.405 162546 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.406 162546 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.406 162546 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.406 162546 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.406 162546 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.406 162546 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.406 162546 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.406 162546 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.406 162546 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.406 162546 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.406 162546 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.407 162546 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.407 162546 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.407 162546 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.407 162546 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.407 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.407 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.407 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.407 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.407 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.408 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.408 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.408 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.408 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.408 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.408 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.408 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.408 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.408 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.409 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.409 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.409 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.409 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.409 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.409 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.409 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.409 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.410 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.410 162546 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.410 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.410 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.410 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.410 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.410 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.411 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.411 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.411 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.411 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.411 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.411 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.411 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.412 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.412 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.412 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.412 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.412 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.412 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.412 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.413 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.413 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.413 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.413 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.413 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.413 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.413 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.414 162546 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.414 162546 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.414 162546 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.414 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.414 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.414 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.414 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.415 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.415 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.415 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.415 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.415 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.415 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.415 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 rsyslogd[1011]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.416 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 rsyslogd[1011]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.416 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.416 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.416 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.416 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.416 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.416 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.417 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.417 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.417 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.417 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.417 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.417 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.417 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.418 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.418 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.418 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.418 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.418 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.418 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.418 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.419 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.419 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.419 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.419 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.419 162546 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.419 162546 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.428 162546 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.429 162546 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.429 162546 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.429 162546 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.429 162546 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.440 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 7f6af0d3-69fd-4a3a-8e45-081fa1f83992 (UUID: 7f6af0d3-69fd-4a3a-8e45-081fa1f83992) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Oct  1 12:26:12 np0005464891 rsyslogd[1011]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.463 162546 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.463 162546 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.464 162546 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.464 162546 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.466 162546 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.471 162546 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.476 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '7f6af0d3-69fd-4a3a-8e45-081fa1f83992'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], external_ids={}, name=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, nb_cfg_timestamp=1759335900032, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.477 162546 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fc11d5a3310>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.478 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.478 162546 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.478 162546 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.478 162546 INFO oslo_service.service [-] Starting 1 workers#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.482 162546 DEBUG oslo_service.service [-] Started child 162872 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.485 162546 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpw9he1jrj/privsep.sock']#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.489 162872 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-899623'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.531 162872 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.532 162872 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.532 162872 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.536 162872 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.544 162872 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct  1 12:26:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:12.553 162872 INFO eventlet.wsgi.server [-] (162872) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Oct  1 12:26:13 np0005464891 eager_albattani[162866]: {
Oct  1 12:26:13 np0005464891 eager_albattani[162866]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:26:13 np0005464891 eager_albattani[162866]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:26:13 np0005464891 eager_albattani[162866]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:26:13 np0005464891 eager_albattani[162866]:        "osd_id": 2,
Oct  1 12:26:13 np0005464891 eager_albattani[162866]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:26:13 np0005464891 eager_albattani[162866]:        "type": "bluestore"
Oct  1 12:26:13 np0005464891 eager_albattani[162866]:    },
Oct  1 12:26:13 np0005464891 eager_albattani[162866]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:26:13 np0005464891 eager_albattani[162866]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:26:13 np0005464891 eager_albattani[162866]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:26:13 np0005464891 eager_albattani[162866]:        "osd_id": 0,
Oct  1 12:26:13 np0005464891 eager_albattani[162866]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:26:13 np0005464891 eager_albattani[162866]:        "type": "bluestore"
Oct  1 12:26:13 np0005464891 eager_albattani[162866]:    },
Oct  1 12:26:13 np0005464891 eager_albattani[162866]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:26:13 np0005464891 eager_albattani[162866]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:26:13 np0005464891 eager_albattani[162866]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:26:13 np0005464891 eager_albattani[162866]:        "osd_id": 1,
Oct  1 12:26:13 np0005464891 eager_albattani[162866]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:26:13 np0005464891 eager_albattani[162866]:        "type": "bluestore"
Oct  1 12:26:13 np0005464891 eager_albattani[162866]:    }
Oct  1 12:26:13 np0005464891 eager_albattani[162866]: }
Oct  1 12:26:13 np0005464891 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct  1 12:26:13 np0005464891 systemd[1]: libpod-87ba6261af2ba3ae610325fd2a68a6fc91feb7f02c14f54d70122ab89beb82fa.scope: Deactivated successfully.
Oct  1 12:26:13 np0005464891 podman[162849]: 2025-10-01 16:26:13.073871454 +0000 UTC m=+1.193773709 container died 87ba6261af2ba3ae610325fd2a68a6fc91feb7f02c14f54d70122ab89beb82fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_albattani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:26:13 np0005464891 systemd[1]: var-lib-containers-storage-overlay-025be02f4409250ecc88b68927d7be4d49227c4031f6416505d478d31252581e-merged.mount: Deactivated successfully.
Oct  1 12:26:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:13.185 162546 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  1 12:26:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:13.186 162546 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpw9he1jrj/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  1 12:26:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:13.041 162906 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  1 12:26:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:13.045 162906 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  1 12:26:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:13.047 162906 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Oct  1 12:26:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:13.047 162906 INFO oslo.privsep.daemon [-] privsep daemon running as pid 162906#033[00m
Oct  1 12:26:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:13.189 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd7aa0d-9f60-4bc8-83cf-ef1feb45b4d4]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:26:13 np0005464891 podman[162849]: 2025-10-01 16:26:13.239701669 +0000 UTC m=+1.359603914 container remove 87ba6261af2ba3ae610325fd2a68a6fc91feb7f02c14f54d70122ab89beb82fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_albattani, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:26:13 np0005464891 systemd[1]: libpod-conmon-87ba6261af2ba3ae610325fd2a68a6fc91feb7f02c14f54d70122ab89beb82fa.scope: Deactivated successfully.
Oct  1 12:26:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:26:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:26:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:26:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:26:13 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev b1fa9439-f05c-46cb-9511-5d1b3495b33a does not exist
Oct  1 12:26:13 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 9453d504-0205-48a3-a5c2-cb14f1b90df7 does not exist
Oct  1 12:26:13 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:26:13 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:26:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:13.684 162906 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:26:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:13.684 162906 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:26:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:13.684 162906 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.192 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[2dc37e42-0214-4803-a0d5-6acb047ed970]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.194 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, column=external_ids, values=({'neutron:ovn-metadata-id': '46ff91ea-cd45-52ef-93d0-578abf19e329'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:26:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.235 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.241 162546 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.242 162546 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.242 162546 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.242 162546 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.242 162546 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.242 162546 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.242 162546 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.243 162546 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.243 162546 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.243 162546 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.243 162546 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.243 162546 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.243 162546 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.244 162546 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.244 162546 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.244 162546 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.244 162546 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.244 162546 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.244 162546 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.245 162546 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.245 162546 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.245 162546 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.245 162546 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.245 162546 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.245 162546 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.246 162546 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.246 162546 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.246 162546 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.246 162546 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.246 162546 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.247 162546 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.247 162546 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.247 162546 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.247 162546 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.247 162546 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.247 162546 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.248 162546 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.248 162546 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.248 162546 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.248 162546 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.248 162546 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.249 162546 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.249 162546 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.249 162546 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.249 162546 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.249 162546 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.249 162546 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.249 162546 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.250 162546 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.250 162546 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.250 162546 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.250 162546 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.250 162546 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.250 162546 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.250 162546 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.251 162546 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.251 162546 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.251 162546 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.251 162546 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.251 162546 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.251 162546 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.252 162546 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.252 162546 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.252 162546 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.252 162546 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.252 162546 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.252 162546 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.253 162546 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.253 162546 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.253 162546 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.253 162546 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.253 162546 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.253 162546 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.254 162546 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.254 162546 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.254 162546 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.254 162546 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.254 162546 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.254 162546 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.255 162546 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.255 162546 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.255 162546 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.255 162546 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.255 162546 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.256 162546 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.256 162546 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.256 162546 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.256 162546 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.256 162546 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.256 162546 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.256 162546 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.257 162546 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.257 162546 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.257 162546 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.257 162546 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.257 162546 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.257 162546 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.257 162546 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.258 162546 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.258 162546 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.258 162546 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.258 162546 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.258 162546 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.258 162546 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.258 162546 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.259 162546 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.259 162546 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.259 162546 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.259 162546 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.259 162546 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.259 162546 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.260 162546 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.260 162546 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.260 162546 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.260 162546 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.260 162546 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.260 162546 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.261 162546 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.261 162546 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.261 162546 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.261 162546 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.261 162546 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.261 162546 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.262 162546 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.262 162546 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.262 162546 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.262 162546 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.262 162546 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.262 162546 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.263 162546 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.263 162546 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.263 162546 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.263 162546 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.263 162546 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.263 162546 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.263 162546 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.264 162546 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.264 162546 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.264 162546 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.264 162546 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.264 162546 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.264 162546 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.265 162546 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.265 162546 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.265 162546 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.265 162546 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.265 162546 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.265 162546 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.266 162546 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.266 162546 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.266 162546 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.266 162546 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.266 162546 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.266 162546 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.267 162546 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.267 162546 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.267 162546 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.267 162546 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.267 162546 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.267 162546 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.267 162546 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.267 162546 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.268 162546 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.268 162546 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.268 162546 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.268 162546 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.268 162546 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.268 162546 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.268 162546 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.269 162546 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.269 162546 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.269 162546 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.269 162546 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.269 162546 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.269 162546 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.270 162546 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.270 162546 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.270 162546 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.270 162546 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.270 162546 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.270 162546 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.270 162546 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.271 162546 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.271 162546 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.271 162546 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.271 162546 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.271 162546 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.271 162546 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.272 162546 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.272 162546 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.272 162546 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.272 162546 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.272 162546 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.272 162546 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.272 162546 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.273 162546 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.273 162546 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.273 162546 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.273 162546 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.273 162546 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.273 162546 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.273 162546 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.274 162546 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.274 162546 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.274 162546 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.274 162546 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.274 162546 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.274 162546 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.274 162546 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.275 162546 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.275 162546 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.275 162546 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.275 162546 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.275 162546 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.275 162546 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.276 162546 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.276 162546 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.276 162546 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.276 162546 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.276 162546 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.276 162546 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.276 162546 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.276 162546 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.277 162546 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.277 162546 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.277 162546 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.277 162546 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.277 162546 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.277 162546 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.277 162546 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.277 162546 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.278 162546 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.278 162546 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.278 162546 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.278 162546 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.278 162546 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.278 162546 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.278 162546 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.279 162546 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.279 162546 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.279 162546 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.279 162546 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.279 162546 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.279 162546 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.280 162546 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.280 162546 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.280 162546 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.280 162546 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.280 162546 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.280 162546 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.281 162546 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.281 162546 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.281 162546 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.281 162546 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.281 162546 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.281 162546 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.281 162546 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.282 162546 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.282 162546 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.282 162546 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.282 162546 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.282 162546 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.282 162546 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.283 162546 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.283 162546 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.283 162546 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.283 162546 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.283 162546 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.283 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.283 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.284 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.284 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.284 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.284 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.284 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.284 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.285 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.285 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.285 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.285 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.285 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.285 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.285 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.285 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.286 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.286 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.286 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.286 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.286 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.286 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.286 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.286 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.287 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.287 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.287 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.287 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.287 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.287 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.287 162546 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.288 162546 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.288 162546 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.288 162546 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.288 162546 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:26:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:26:14.288 162546 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  1 12:26:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:26:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:26:17 np0005464891 systemd-logind[801]: New session 50 of user zuul.
Oct  1 12:26:17 np0005464891 systemd[1]: Started Session 50 of User zuul.
Oct  1 12:26:18 np0005464891 python3.9[163128]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:26:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:26:19 np0005464891 python3.9[163284]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:26:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:26:21 np0005464891 python3.9[163450]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  1 12:26:21 np0005464891 systemd[1]: Reloading.
Oct  1 12:26:21 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:26:21 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:26:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:26:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:26:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:26:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:26:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:26:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:26:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:26:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:26:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:26:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:26:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:26:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:26:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:26:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:26:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:26:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:26:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:26:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:26:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:26:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:26:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:26:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:26:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:26:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:26:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:26:22 np0005464891 python3.9[163635]: ansible-ansible.builtin.service_facts Invoked
Oct  1 12:26:22 np0005464891 network[163652]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  1 12:26:22 np0005464891 network[163653]: 'network-scripts' will be removed from distribution in near future.
Oct  1 12:26:22 np0005464891 network[163654]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  1 12:26:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:26:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:26:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:26:27 np0005464891 podman[163759]: 2025-10-01 16:26:27.034923456 +0000 UTC m=+0.133663899 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:26:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:26:28 np0005464891 python3.9[163945]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:26:29 np0005464891 python3.9[164098]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:26:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:26:30 np0005464891 python3.9[164251]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:26:31 np0005464891 python3.9[164404]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:26:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:26:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:26:32 np0005464891 python3.9[164557]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:26:33 np0005464891 python3.9[164710]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:26:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:26:34 np0005464891 python3.9[164863]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:26:35 np0005464891 python3.9[165016]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:26:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:26:36 np0005464891 python3.9[165168]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:26:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:26:37 np0005464891 python3.9[165320]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:26:37 np0005464891 python3.9[165472]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:26:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:26:38 np0005464891 python3.9[165624]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:26:39 np0005464891 python3.9[165776]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:26:39 np0005464891 python3.9[165928]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:26:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:26:40 np0005464891 python3.9[166080]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:26:40 np0005464891 podman[166157]: 2025-10-01 16:26:40.965100616 +0000 UTC m=+0.070730890 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:26:41 np0005464891 python3.9[166252]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:26:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:26:42 np0005464891 python3.9[166404]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:26:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:26:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:26:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:26:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:26:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:26:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:26:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:26:42 np0005464891 python3.9[166556]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:26:43 np0005464891 python3.9[166708]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:26:44 np0005464891 python3.9[166860]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:26:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:26:44 np0005464891 python3.9[167012]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:26:45 np0005464891 python3.9[167164]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:26:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:26:46 np0005464891 python3.9[167316]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  1 12:26:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:26:47 np0005464891 python3.9[167468]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  1 12:26:47 np0005464891 systemd[1]: Reloading.
Oct  1 12:26:47 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:26:47 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:26:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 12:26:48 np0005464891 python3.9[167654]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:26:49 np0005464891 python3.9[167807]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:26:50 np0005464891 python3.9[167960]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:26:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 12:26:50 np0005464891 python3.9[168113]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:26:51 np0005464891 python3.9[168266]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:26:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:26:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 12:26:52 np0005464891 python3.9[168419]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:26:53 np0005464891 python3.9[168572]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:26:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 12:26:54 np0005464891 python3.9[168725]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct  1 12:26:55 np0005464891 python3.9[168878]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  1 12:26:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 12:26:56 np0005464891 python3.9[169036]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  1 12:26:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:26:57 np0005464891 podman[169168]: 2025-10-01 16:26:57.178132263 +0000 UTC m=+0.092934585 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  1 12:26:57 np0005464891 python3.9[169220]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 12:26:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 12:26:58 np0005464891 python3.9[169305]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 12:27:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:27:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:27:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:27:11 np0005464891 podman[169455]: 2025-10-01 16:27:11.961083028 +0000 UTC m=+0.064862552 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:27:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:27:11
Oct  1 12:27:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:27:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:27:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'volumes', 'backups', 'vms', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr']
Oct  1 12:27:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:27:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:27:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:27:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:27:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:27:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:27:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:27:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:27:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:27:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:27:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:27:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:27:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:27:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:27:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:27:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:27:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:27:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:27:12.421 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:27:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:27:12.421 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:27:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:27:12.422 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:27:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:27:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:27:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:27:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:27:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:27:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:27:14 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 9a9a628c-2e01-4db3-abaf-a1bfd1f60b6f does not exist
Oct  1 12:27:14 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 03443b38-2c76-4690-a643-5d652249c5ac does not exist
Oct  1 12:27:14 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 3db7b5fd-bd5a-4381-bee9-d56c80867976 does not exist
Oct  1 12:27:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:27:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:27:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:27:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:27:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:27:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:27:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:14 np0005464891 podman[169779]: 2025-10-01 16:27:14.896924474 +0000 UTC m=+0.067263203 container create 18c34902497ad1ad08c8edd4643ef6385e82971004b9afaff6e0c639a6c155de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_liskov, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 12:27:14 np0005464891 systemd[1]: Started libpod-conmon-18c34902497ad1ad08c8edd4643ef6385e82971004b9afaff6e0c639a6c155de.scope.
Oct  1 12:27:14 np0005464891 podman[169779]: 2025-10-01 16:27:14.863786422 +0000 UTC m=+0.034125141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:27:14 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:27:14 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:27:14 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:27:14 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:27:15 np0005464891 podman[169779]: 2025-10-01 16:27:15.048802705 +0000 UTC m=+0.219141474 container init 18c34902497ad1ad08c8edd4643ef6385e82971004b9afaff6e0c639a6c155de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_liskov, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 12:27:15 np0005464891 podman[169779]: 2025-10-01 16:27:15.061216399 +0000 UTC m=+0.231555098 container start 18c34902497ad1ad08c8edd4643ef6385e82971004b9afaff6e0c639a6c155de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  1 12:27:15 np0005464891 podman[169779]: 2025-10-01 16:27:15.072195311 +0000 UTC m=+0.242534100 container attach 18c34902497ad1ad08c8edd4643ef6385e82971004b9afaff6e0c639a6c155de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:27:15 np0005464891 optimistic_liskov[169796]: 167 167
Oct  1 12:27:15 np0005464891 systemd[1]: libpod-18c34902497ad1ad08c8edd4643ef6385e82971004b9afaff6e0c639a6c155de.scope: Deactivated successfully.
Oct  1 12:27:15 np0005464891 podman[169779]: 2025-10-01 16:27:15.090885028 +0000 UTC m=+0.261223717 container died 18c34902497ad1ad08c8edd4643ef6385e82971004b9afaff6e0c639a6c155de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  1 12:27:15 np0005464891 systemd[1]: var-lib-containers-storage-overlay-3952e664e219735fad8b6df072a5dbb0eabab628f05cd82949da8c5ae78f219a-merged.mount: Deactivated successfully.
Oct  1 12:27:15 np0005464891 podman[169779]: 2025-10-01 16:27:15.193176137 +0000 UTC m=+0.363514846 container remove 18c34902497ad1ad08c8edd4643ef6385e82971004b9afaff6e0c639a6c155de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:27:15 np0005464891 systemd[1]: libpod-conmon-18c34902497ad1ad08c8edd4643ef6385e82971004b9afaff6e0c639a6c155de.scope: Deactivated successfully.
Oct  1 12:27:15 np0005464891 podman[169820]: 2025-10-01 16:27:15.376389806 +0000 UTC m=+0.056747504 container create 88aad38047424573de0cb9d97a56f20ac8f478c733deb7ae1cd3d16de8096e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_gagarin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 12:27:15 np0005464891 systemd[1]: Started libpod-conmon-88aad38047424573de0cb9d97a56f20ac8f478c733deb7ae1cd3d16de8096e42.scope.
Oct  1 12:27:15 np0005464891 podman[169820]: 2025-10-01 16:27:15.346921283 +0000 UTC m=+0.027279001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:27:15 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:27:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41f6e701e58d4b009661e696758663c9a6e6e6808ce280b40f06efeaa67656be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:27:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41f6e701e58d4b009661e696758663c9a6e6e6808ce280b40f06efeaa67656be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:27:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41f6e701e58d4b009661e696758663c9a6e6e6808ce280b40f06efeaa67656be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:27:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41f6e701e58d4b009661e696758663c9a6e6e6808ce280b40f06efeaa67656be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:27:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41f6e701e58d4b009661e696758663c9a6e6e6808ce280b40f06efeaa67656be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:27:15 np0005464891 podman[169820]: 2025-10-01 16:27:15.484807404 +0000 UTC m=+0.165165112 container init 88aad38047424573de0cb9d97a56f20ac8f478c733deb7ae1cd3d16de8096e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_gagarin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 12:27:15 np0005464891 podman[169820]: 2025-10-01 16:27:15.491797319 +0000 UTC m=+0.172155017 container start 88aad38047424573de0cb9d97a56f20ac8f478c733deb7ae1cd3d16de8096e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:27:15 np0005464891 podman[169820]: 2025-10-01 16:27:15.507075956 +0000 UTC m=+0.187433764 container attach 88aad38047424573de0cb9d97a56f20ac8f478c733deb7ae1cd3d16de8096e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:27:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:16 np0005464891 condescending_gagarin[169836]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:27:16 np0005464891 condescending_gagarin[169836]: --> relative data size: 1.0
Oct  1 12:27:16 np0005464891 condescending_gagarin[169836]: --> All data devices are unavailable
Oct  1 12:27:16 np0005464891 systemd[1]: libpod-88aad38047424573de0cb9d97a56f20ac8f478c733deb7ae1cd3d16de8096e42.scope: Deactivated successfully.
Oct  1 12:27:16 np0005464891 systemd[1]: libpod-88aad38047424573de0cb9d97a56f20ac8f478c733deb7ae1cd3d16de8096e42.scope: Consumed 1.104s CPU time.
Oct  1 12:27:16 np0005464891 podman[169820]: 2025-10-01 16:27:16.812337732 +0000 UTC m=+1.492695430 container died 88aad38047424573de0cb9d97a56f20ac8f478c733deb7ae1cd3d16de8096e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 12:27:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:27:16 np0005464891 systemd[1]: var-lib-containers-storage-overlay-41f6e701e58d4b009661e696758663c9a6e6e6808ce280b40f06efeaa67656be-merged.mount: Deactivated successfully.
Oct  1 12:27:17 np0005464891 podman[169820]: 2025-10-01 16:27:16.999954591 +0000 UTC m=+1.680312299 container remove 88aad38047424573de0cb9d97a56f20ac8f478c733deb7ae1cd3d16de8096e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_gagarin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:27:17 np0005464891 systemd[1]: libpod-conmon-88aad38047424573de0cb9d97a56f20ac8f478c733deb7ae1cd3d16de8096e42.scope: Deactivated successfully.
Oct  1 12:27:17 np0005464891 podman[170018]: 2025-10-01 16:27:17.748599782 +0000 UTC m=+0.075327929 container create ab15e35e615c529619826a7e2a7f5e151b0f8945183572de6a67ae268ed7dd59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:27:17 np0005464891 systemd[1]: Started libpod-conmon-ab15e35e615c529619826a7e2a7f5e151b0f8945183572de6a67ae268ed7dd59.scope.
Oct  1 12:27:17 np0005464891 podman[170018]: 2025-10-01 16:27:17.71404941 +0000 UTC m=+0.040777567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:27:17 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:27:17 np0005464891 podman[170018]: 2025-10-01 16:27:17.843578566 +0000 UTC m=+0.170306763 container init ab15e35e615c529619826a7e2a7f5e151b0f8945183572de6a67ae268ed7dd59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_clarke, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 12:27:17 np0005464891 podman[170018]: 2025-10-01 16:27:17.854131165 +0000 UTC m=+0.180859282 container start ab15e35e615c529619826a7e2a7f5e151b0f8945183572de6a67ae268ed7dd59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:27:17 np0005464891 happy_clarke[170035]: 167 167
Oct  1 12:27:17 np0005464891 systemd[1]: libpod-ab15e35e615c529619826a7e2a7f5e151b0f8945183572de6a67ae268ed7dd59.scope: Deactivated successfully.
Oct  1 12:27:17 np0005464891 podman[170018]: 2025-10-01 16:27:17.861958135 +0000 UTC m=+0.188686342 container attach ab15e35e615c529619826a7e2a7f5e151b0f8945183572de6a67ae268ed7dd59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:27:17 np0005464891 podman[170018]: 2025-10-01 16:27:17.862548852 +0000 UTC m=+0.189277019 container died ab15e35e615c529619826a7e2a7f5e151b0f8945183572de6a67ae268ed7dd59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_clarke, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 12:27:17 np0005464891 systemd[1]: var-lib-containers-storage-overlay-5da9aef92dcce32ffa278418437d700da7f9d513dd5e8479b77502886e56c866-merged.mount: Deactivated successfully.
Oct  1 12:27:17 np0005464891 podman[170018]: 2025-10-01 16:27:17.927275169 +0000 UTC m=+0.254003326 container remove ab15e35e615c529619826a7e2a7f5e151b0f8945183572de6a67ae268ed7dd59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 12:27:17 np0005464891 systemd[1]: libpod-conmon-ab15e35e615c529619826a7e2a7f5e151b0f8945183572de6a67ae268ed7dd59.scope: Deactivated successfully.
Oct  1 12:27:18 np0005464891 podman[170058]: 2025-10-01 16:27:18.10782873 +0000 UTC m=+0.060769381 container create 1f394ba075f434c1bb2ee1d0870129c8d810631b43d036b05f3c650624c78a39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:27:18 np0005464891 systemd[1]: Started libpod-conmon-1f394ba075f434c1bb2ee1d0870129c8d810631b43d036b05f3c650624c78a39.scope.
Oct  1 12:27:18 np0005464891 podman[170058]: 2025-10-01 16:27:18.078312146 +0000 UTC m=+0.031252897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:27:18 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:27:18 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bce69061f1583c083ab79ab44f512c0e65eb56a519b17b9db7f8a1cb39030ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:27:18 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bce69061f1583c083ab79ab44f512c0e65eb56a519b17b9db7f8a1cb39030ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:27:18 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bce69061f1583c083ab79ab44f512c0e65eb56a519b17b9db7f8a1cb39030ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:27:18 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bce69061f1583c083ab79ab44f512c0e65eb56a519b17b9db7f8a1cb39030ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:27:18 np0005464891 podman[170058]: 2025-10-01 16:27:18.207890183 +0000 UTC m=+0.160830874 container init 1f394ba075f434c1bb2ee1d0870129c8d810631b43d036b05f3c650624c78a39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 12:27:18 np0005464891 podman[170058]: 2025-10-01 16:27:18.219814923 +0000 UTC m=+0.172755584 container start 1f394ba075f434c1bb2ee1d0870129c8d810631b43d036b05f3c650624c78a39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bhabha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  1 12:27:18 np0005464891 podman[170058]: 2025-10-01 16:27:18.230888237 +0000 UTC m=+0.183828888 container attach 1f394ba075f434c1bb2ee1d0870129c8d810631b43d036b05f3c650624c78a39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bhabha, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:27:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]: {
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:    "0": [
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:        {
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "devices": [
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "/dev/loop3"
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            ],
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "lv_name": "ceph_lv0",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "lv_size": "21470642176",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "name": "ceph_lv0",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "tags": {
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.cluster_name": "ceph",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.crush_device_class": "",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.encrypted": "0",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.osd_id": "0",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.type": "block",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.vdo": "0"
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            },
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "type": "block",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "vg_name": "ceph_vg0"
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:        }
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:    ],
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:    "1": [
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:        {
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "devices": [
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "/dev/loop4"
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            ],
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "lv_name": "ceph_lv1",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "lv_size": "21470642176",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "name": "ceph_lv1",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "tags": {
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.cluster_name": "ceph",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.crush_device_class": "",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.encrypted": "0",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.osd_id": "1",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.type": "block",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.vdo": "0"
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            },
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "type": "block",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "vg_name": "ceph_vg1"
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:        }
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:    ],
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:    "2": [
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:        {
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "devices": [
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "/dev/loop5"
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            ],
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "lv_name": "ceph_lv2",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "lv_size": "21470642176",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "name": "ceph_lv2",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "tags": {
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.cluster_name": "ceph",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.crush_device_class": "",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.encrypted": "0",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.osd_id": "2",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.type": "block",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:                "ceph.vdo": "0"
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            },
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "type": "block",
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:            "vg_name": "ceph_vg2"
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:        }
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]:    ]
Oct  1 12:27:18 np0005464891 mystifying_bhabha[170075]: }
Oct  1 12:27:18 np0005464891 systemd[1]: libpod-1f394ba075f434c1bb2ee1d0870129c8d810631b43d036b05f3c650624c78a39.scope: Deactivated successfully.
Oct  1 12:27:19 np0005464891 podman[170058]: 2025-10-01 16:27:19.000134423 +0000 UTC m=+0.953075084 container died 1f394ba075f434c1bb2ee1d0870129c8d810631b43d036b05f3c650624c78a39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bhabha, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 12:27:19 np0005464891 systemd[1]: var-lib-containers-storage-overlay-5bce69061f1583c083ab79ab44f512c0e65eb56a519b17b9db7f8a1cb39030ce-merged.mount: Deactivated successfully.
Oct  1 12:27:19 np0005464891 podman[170058]: 2025-10-01 16:27:19.098196507 +0000 UTC m=+1.051137158 container remove 1f394ba075f434c1bb2ee1d0870129c8d810631b43d036b05f3c650624c78a39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:27:19 np0005464891 systemd[1]: libpod-conmon-1f394ba075f434c1bb2ee1d0870129c8d810631b43d036b05f3c650624c78a39.scope: Deactivated successfully.
Oct  1 12:27:19 np0005464891 podman[170239]: 2025-10-01 16:27:19.785983455 +0000 UTC m=+0.055594650 container create 7bac43dffa0826407aebcacaf2c05e45735a6ed49c681381d083593276a1e82c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dijkstra, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:27:19 np0005464891 systemd[1]: Started libpod-conmon-7bac43dffa0826407aebcacaf2c05e45735a6ed49c681381d083593276a1e82c.scope.
Oct  1 12:27:19 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:27:19 np0005464891 podman[170239]: 2025-10-01 16:27:19.756611224 +0000 UTC m=+0.026222449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:27:19 np0005464891 podman[170239]: 2025-10-01 16:27:19.87484985 +0000 UTC m=+0.144461035 container init 7bac43dffa0826407aebcacaf2c05e45735a6ed49c681381d083593276a1e82c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dijkstra, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:27:19 np0005464891 podman[170239]: 2025-10-01 16:27:19.881604628 +0000 UTC m=+0.151215813 container start 7bac43dffa0826407aebcacaf2c05e45735a6ed49c681381d083593276a1e82c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  1 12:27:19 np0005464891 peaceful_dijkstra[170255]: 167 167
Oct  1 12:27:19 np0005464891 systemd[1]: libpod-7bac43dffa0826407aebcacaf2c05e45735a6ed49c681381d083593276a1e82c.scope: Deactivated successfully.
Oct  1 12:27:19 np0005464891 podman[170239]: 2025-10-01 16:27:19.888314254 +0000 UTC m=+0.157925459 container attach 7bac43dffa0826407aebcacaf2c05e45735a6ed49c681381d083593276a1e82c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct  1 12:27:19 np0005464891 podman[170239]: 2025-10-01 16:27:19.889146298 +0000 UTC m=+0.158757473 container died 7bac43dffa0826407aebcacaf2c05e45735a6ed49c681381d083593276a1e82c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:27:19 np0005464891 systemd[1]: var-lib-containers-storage-overlay-cf1129ce0d9aad875b179c9b822896d96e00f35eb6b6c8b97db15658785bd5a8-merged.mount: Deactivated successfully.
Oct  1 12:27:20 np0005464891 podman[170239]: 2025-10-01 16:27:20.09289623 +0000 UTC m=+0.362507455 container remove 7bac43dffa0826407aebcacaf2c05e45735a6ed49c681381d083593276a1e82c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 12:27:20 np0005464891 systemd[1]: libpod-conmon-7bac43dffa0826407aebcacaf2c05e45735a6ed49c681381d083593276a1e82c.scope: Deactivated successfully.
Oct  1 12:27:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:20 np0005464891 podman[170285]: 2025-10-01 16:27:20.26656095 +0000 UTC m=+0.057067053 container create 89b011f9adf12a8a0a1de16b072f56fd704a5cadb1fa91dd474f00cc5b656df9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_swartz, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:27:20 np0005464891 systemd[1]: Started libpod-conmon-89b011f9adf12a8a0a1de16b072f56fd704a5cadb1fa91dd474f00cc5b656df9.scope.
Oct  1 12:27:20 np0005464891 podman[170285]: 2025-10-01 16:27:20.239195018 +0000 UTC m=+0.029701201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:27:20 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:27:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9183306f434c4306138d7449f8a0570ef189043b738c1f028976dec88558f8eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:27:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9183306f434c4306138d7449f8a0570ef189043b738c1f028976dec88558f8eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:27:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9183306f434c4306138d7449f8a0570ef189043b738c1f028976dec88558f8eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:27:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9183306f434c4306138d7449f8a0570ef189043b738c1f028976dec88558f8eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:27:20 np0005464891 podman[170285]: 2025-10-01 16:27:20.386614999 +0000 UTC m=+0.177121122 container init 89b011f9adf12a8a0a1de16b072f56fd704a5cadb1fa91dd474f00cc5b656df9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_swartz, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  1 12:27:20 np0005464891 podman[170285]: 2025-10-01 16:27:20.399023272 +0000 UTC m=+0.189529405 container start 89b011f9adf12a8a0a1de16b072f56fd704a5cadb1fa91dd474f00cc5b656df9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:27:20 np0005464891 podman[170285]: 2025-10-01 16:27:20.402862375 +0000 UTC m=+0.193368498 container attach 89b011f9adf12a8a0a1de16b072f56fd704a5cadb1fa91dd474f00cc5b656df9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_swartz, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 12:27:21 np0005464891 elated_swartz[170302]: {
Oct  1 12:27:21 np0005464891 elated_swartz[170302]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:27:21 np0005464891 elated_swartz[170302]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:27:21 np0005464891 elated_swartz[170302]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:27:21 np0005464891 elated_swartz[170302]:        "osd_id": 2,
Oct  1 12:27:21 np0005464891 elated_swartz[170302]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:27:21 np0005464891 elated_swartz[170302]:        "type": "bluestore"
Oct  1 12:27:21 np0005464891 elated_swartz[170302]:    },
Oct  1 12:27:21 np0005464891 elated_swartz[170302]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:27:21 np0005464891 elated_swartz[170302]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:27:21 np0005464891 elated_swartz[170302]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:27:21 np0005464891 elated_swartz[170302]:        "osd_id": 0,
Oct  1 12:27:21 np0005464891 elated_swartz[170302]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:27:21 np0005464891 elated_swartz[170302]:        "type": "bluestore"
Oct  1 12:27:21 np0005464891 elated_swartz[170302]:    },
Oct  1 12:27:21 np0005464891 elated_swartz[170302]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:27:21 np0005464891 elated_swartz[170302]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:27:21 np0005464891 elated_swartz[170302]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:27:21 np0005464891 elated_swartz[170302]:        "osd_id": 1,
Oct  1 12:27:21 np0005464891 elated_swartz[170302]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:27:21 np0005464891 elated_swartz[170302]:        "type": "bluestore"
Oct  1 12:27:21 np0005464891 elated_swartz[170302]:    }
Oct  1 12:27:21 np0005464891 elated_swartz[170302]: }
Oct  1 12:27:21 np0005464891 systemd[1]: libpod-89b011f9adf12a8a0a1de16b072f56fd704a5cadb1fa91dd474f00cc5b656df9.scope: Deactivated successfully.
Oct  1 12:27:21 np0005464891 systemd[1]: libpod-89b011f9adf12a8a0a1de16b072f56fd704a5cadb1fa91dd474f00cc5b656df9.scope: Consumed 1.121s CPU time.
Oct  1 12:27:21 np0005464891 podman[170285]: 2025-10-01 16:27:21.522192711 +0000 UTC m=+1.312698844 container died 89b011f9adf12a8a0a1de16b072f56fd704a5cadb1fa91dd474f00cc5b656df9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 12:27:21 np0005464891 systemd[1]: var-lib-containers-storage-overlay-9183306f434c4306138d7449f8a0570ef189043b738c1f028976dec88558f8eb-merged.mount: Deactivated successfully.
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:27:21 np0005464891 podman[170285]: 2025-10-01 16:27:21.634890004 +0000 UTC m=+1.425396147 container remove 89b011f9adf12a8a0a1de16b072f56fd704a5cadb1fa91dd474f00cc5b656df9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:27:21 np0005464891 systemd[1]: libpod-conmon-89b011f9adf12a8a0a1de16b072f56fd704a5cadb1fa91dd474f00cc5b656df9.scope: Deactivated successfully.
Oct  1 12:27:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:27:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:27:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:27:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 68a0f64d-352c-44b6-9254-a2083a8ca77e does not exist
Oct  1 12:27:21 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 0ce69b3b-8a61-4708-833a-7b873ca771a3 does not exist
Oct  1 12:27:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:27:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:22 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:27:22 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:27:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:27:27 np0005464891 podman[170400]: 2025-10-01 16:27:27.992517657 +0000 UTC m=+0.094607924 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct  1 12:27:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:27:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:32 np0005464891 kernel: SELinux:  Converting 2765 SID table entries...
Oct  1 12:27:32 np0005464891 kernel: SELinux:  policy capability network_peer_controls=1
Oct  1 12:27:32 np0005464891 kernel: SELinux:  policy capability open_perms=1
Oct  1 12:27:32 np0005464891 kernel: SELinux:  policy capability extended_socket_class=1
Oct  1 12:27:32 np0005464891 kernel: SELinux:  policy capability always_check_network=0
Oct  1 12:27:32 np0005464891 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  1 12:27:32 np0005464891 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  1 12:27:32 np0005464891 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  1 12:27:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:27:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:27:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:27:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:27:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:27:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:27:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:27:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:27:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:42 np0005464891 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct  1 12:27:42 np0005464891 kernel: SELinux:  Converting 2765 SID table entries...
Oct  1 12:27:42 np0005464891 kernel: SELinux:  policy capability network_peer_controls=1
Oct  1 12:27:42 np0005464891 kernel: SELinux:  policy capability open_perms=1
Oct  1 12:27:42 np0005464891 kernel: SELinux:  policy capability extended_socket_class=1
Oct  1 12:27:42 np0005464891 kernel: SELinux:  policy capability always_check_network=0
Oct  1 12:27:42 np0005464891 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  1 12:27:42 np0005464891 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  1 12:27:42 np0005464891 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  1 12:27:42 np0005464891 podman[170440]: 2025-10-01 16:27:42.991121244 +0000 UTC m=+0.084291001 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:27:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:27:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:27:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:27:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:27:58 np0005464891 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct  1 12:27:59 np0005464891 podman[172415]: 2025-10-01 16:27:59.032975952 +0000 UTC m=+0.123994806 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  1 12:28:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:28:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:28:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:28:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:28:11
Oct  1 12:28:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:28:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:28:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', '.rgw.root', 'backups', 'images', 'default.rgw.control', 'vms', 'default.rgw.log', 'cephfs.cephfs.data']
Oct  1 12:28:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:28:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:28:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:28:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:28:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:28:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:28:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:28:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:28:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:28:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:28:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:28:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:28:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:28:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:28:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:28:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:28:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:28:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:28:12.422 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:28:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:28:12.423 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:28:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:28:12.423 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:28:13 np0005464891 podman[180716]: 2025-10-01 16:28:13.963722573 +0000 UTC m=+0.070118115 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  1 12:28:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:28:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:18 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Oct  1 12:28:18 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:28:18.721797) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 12:28:18 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Oct  1 12:28:18 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336098721864, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2037, "num_deletes": 251, "total_data_size": 3492696, "memory_usage": 3541712, "flush_reason": "Manual Compaction"}
Oct  1 12:28:18 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Oct  1 12:28:19 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336099115574, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3428171, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9742, "largest_seqno": 11778, "table_properties": {"data_size": 3418910, "index_size": 5883, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17746, "raw_average_key_size": 19, "raw_value_size": 3400579, "raw_average_value_size": 3724, "num_data_blocks": 266, "num_entries": 913, "num_filter_entries": 913, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335861, "oldest_key_time": 1759335861, "file_creation_time": 1759336098, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:28:19 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 393814 microseconds, and 9029 cpu microseconds.
Oct  1 12:28:19 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:28:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:20 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:28:19.115624) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3428171 bytes OK
Oct  1 12:28:20 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:28:19.115644) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Oct  1 12:28:20 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:28:19.738205) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Oct  1 12:28:20 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:28:19.738250) EVENT_LOG_v1 {"time_micros": 1759336099738240, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 12:28:20 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:28:19.738273) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 12:28:20 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3484210, prev total WAL file size 3484846, number of live WAL files 2.
Oct  1 12:28:20 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:28:20 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:28:20.518196) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct  1 12:28:20 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 12:28:20 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3347KB)], [26(6168KB)]
Oct  1 12:28:20 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336100518431, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9744386, "oldest_snapshot_seqno": -1}
Oct  1 12:28:21 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3716 keys, 7994341 bytes, temperature: kUnknown
Oct  1 12:28:21 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336101595379, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7994341, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7965632, "index_size": 18295, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9349, "raw_key_size": 89374, "raw_average_key_size": 24, "raw_value_size": 7894715, "raw_average_value_size": 2124, "num_data_blocks": 791, "num_entries": 3716, "num_filter_entries": 3716, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759336100, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:28:21 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:28:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:28:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:28:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:28:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:28:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:28:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:28:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:28:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:28:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:28:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:28:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:28:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:28:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:28:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:28:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:28:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:28:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:28:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:28:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:28:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:28:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:28:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:28:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:28:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:28:21.595608) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7994341 bytes
Oct  1 12:28:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:28:22.033203) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 9.0 rd, 7.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.0 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(5.2) write-amplify(2.3) OK, records in: 4230, records dropped: 514 output_compression: NoCompression
Oct  1 12:28:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:28:22.033242) EVENT_LOG_v1 {"time_micros": 1759336102033226, "job": 10, "event": "compaction_finished", "compaction_time_micros": 1077001, "compaction_time_cpu_micros": 23736, "output_level": 6, "num_output_files": 1, "total_output_size": 7994341, "num_input_records": 4230, "num_output_records": 3716, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 12:28:22 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:28:22 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336102034033, "job": 10, "event": "table_file_deletion", "file_number": 28}
Oct  1 12:28:22 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:28:22 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336102035379, "job": 10, "event": "table_file_deletion", "file_number": 26}
Oct  1 12:28:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:28:20.518129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:28:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:28:22.035446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:28:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:28:22.035505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:28:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:28:22.035507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:28:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:28:22.035509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:28:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:28:22.035510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:28:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:28:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:25 np0005464891 podman[185961]: 2025-10-01 16:28:25.625429939 +0000 UTC m=+2.995333589 container exec 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 12:28:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:28 np0005464891 podman[185961]: 2025-10-01 16:28:28.085978173 +0000 UTC m=+5.455881823 container exec_died 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 12:28:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:28:30 np0005464891 podman[187449]: 2025-10-01 16:28:30.006484827 +0000 UTC m=+0.112562978 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  1 12:28:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:28:30 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:28:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:28:30 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:28:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:28:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:28:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:28:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:28:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:28:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:28:31 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev ab35459d-5d3e-4765-82b9-81a1ee1e6d1f does not exist
Oct  1 12:28:31 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev c444f783-a1bd-4416-ae3b-d16e115d4433 does not exist
Oct  1 12:28:31 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 8e6ac8e1-9000-4fdb-8f85-2872cc728545 does not exist
Oct  1 12:28:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:28:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:28:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:28:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:28:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:28:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:28:31 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:28:31 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:28:31 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:28:31 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:28:31 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:28:31 np0005464891 podman[187875]: 2025-10-01 16:28:31.970725859 +0000 UTC m=+0.072800040 container create 6c3ab171d0f95ad0e177a5273dcab262b7f4d3dfafcfac59e3b438ea715b9773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pasteur, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:28:32 np0005464891 systemd[1]: Started libpod-conmon-6c3ab171d0f95ad0e177a5273dcab262b7f4d3dfafcfac59e3b438ea715b9773.scope.
Oct  1 12:28:32 np0005464891 podman[187875]: 2025-10-01 16:28:31.936383852 +0000 UTC m=+0.038458033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:28:32 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:28:32 np0005464891 podman[187875]: 2025-10-01 16:28:32.093081379 +0000 UTC m=+0.195155640 container init 6c3ab171d0f95ad0e177a5273dcab262b7f4d3dfafcfac59e3b438ea715b9773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pasteur, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 12:28:32 np0005464891 podman[187875]: 2025-10-01 16:28:32.102386788 +0000 UTC m=+0.204460979 container start 6c3ab171d0f95ad0e177a5273dcab262b7f4d3dfafcfac59e3b438ea715b9773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pasteur, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 12:28:32 np0005464891 goofy_pasteur[187894]: 167 167
Oct  1 12:28:32 np0005464891 systemd[1]: libpod-6c3ab171d0f95ad0e177a5273dcab262b7f4d3dfafcfac59e3b438ea715b9773.scope: Deactivated successfully.
Oct  1 12:28:32 np0005464891 podman[187875]: 2025-10-01 16:28:32.132675912 +0000 UTC m=+0.234750113 container attach 6c3ab171d0f95ad0e177a5273dcab262b7f4d3dfafcfac59e3b438ea715b9773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pasteur, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:28:32 np0005464891 podman[187875]: 2025-10-01 16:28:32.133119005 +0000 UTC m=+0.235193206 container died 6c3ab171d0f95ad0e177a5273dcab262b7f4d3dfafcfac59e3b438ea715b9773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pasteur, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:28:32 np0005464891 systemd[1]: var-lib-containers-storage-overlay-5e6f22cc56141d8da9aa7cae44c28d796c3c540281dc83fb3453f17f1face720-merged.mount: Deactivated successfully.
Oct  1 12:28:32 np0005464891 podman[187875]: 2025-10-01 16:28:32.281209552 +0000 UTC m=+0.383283753 container remove 6c3ab171d0f95ad0e177a5273dcab262b7f4d3dfafcfac59e3b438ea715b9773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pasteur, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:28:32 np0005464891 systemd[1]: libpod-conmon-6c3ab171d0f95ad0e177a5273dcab262b7f4d3dfafcfac59e3b438ea715b9773.scope: Deactivated successfully.
Oct  1 12:28:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:32 np0005464891 podman[187921]: 2025-10-01 16:28:32.504042602 +0000 UTC m=+0.064769306 container create 2e2aed92c384cbdb10d57eac8a6bb081a38b39149390e9896dce2ec8828b959f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_ganguly, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:28:32 np0005464891 systemd[1]: Started libpod-conmon-2e2aed92c384cbdb10d57eac8a6bb081a38b39149390e9896dce2ec8828b959f.scope.
Oct  1 12:28:32 np0005464891 podman[187921]: 2025-10-01 16:28:32.468659136 +0000 UTC m=+0.029385940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:28:32 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:28:32 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64395d923705479b6a0228deaf75e9d92c97e946592092ee26cd35889886f576/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:28:32 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64395d923705479b6a0228deaf75e9d92c97e946592092ee26cd35889886f576/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:28:32 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64395d923705479b6a0228deaf75e9d92c97e946592092ee26cd35889886f576/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:28:32 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64395d923705479b6a0228deaf75e9d92c97e946592092ee26cd35889886f576/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:28:32 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64395d923705479b6a0228deaf75e9d92c97e946592092ee26cd35889886f576/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:28:32 np0005464891 podman[187921]: 2025-10-01 16:28:32.611886818 +0000 UTC m=+0.172613522 container init 2e2aed92c384cbdb10d57eac8a6bb081a38b39149390e9896dce2ec8828b959f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 12:28:32 np0005464891 podman[187921]: 2025-10-01 16:28:32.619238012 +0000 UTC m=+0.179964716 container start 2e2aed92c384cbdb10d57eac8a6bb081a38b39149390e9896dce2ec8828b959f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_ganguly, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:28:32 np0005464891 podman[187921]: 2025-10-01 16:28:32.626764692 +0000 UTC m=+0.187491426 container attach 2e2aed92c384cbdb10d57eac8a6bb081a38b39149390e9896dce2ec8828b959f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_ganguly, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 12:28:33 np0005464891 wizardly_ganguly[187938]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:28:33 np0005464891 wizardly_ganguly[187938]: --> relative data size: 1.0
Oct  1 12:28:33 np0005464891 wizardly_ganguly[187938]: --> All data devices are unavailable
Oct  1 12:28:33 np0005464891 systemd[1]: libpod-2e2aed92c384cbdb10d57eac8a6bb081a38b39149390e9896dce2ec8828b959f.scope: Deactivated successfully.
Oct  1 12:28:33 np0005464891 systemd[1]: libpod-2e2aed92c384cbdb10d57eac8a6bb081a38b39149390e9896dce2ec8828b959f.scope: Consumed 1.008s CPU time.
Oct  1 12:28:33 np0005464891 podman[187921]: 2025-10-01 16:28:33.709548789 +0000 UTC m=+1.270275503 container died 2e2aed92c384cbdb10d57eac8a6bb081a38b39149390e9896dce2ec8828b959f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_ganguly, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  1 12:28:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:28:33 np0005464891 systemd[1]: var-lib-containers-storage-overlay-64395d923705479b6a0228deaf75e9d92c97e946592092ee26cd35889886f576-merged.mount: Deactivated successfully.
Oct  1 12:28:34 np0005464891 podman[187921]: 2025-10-01 16:28:34.149542841 +0000 UTC m=+1.710269545 container remove 2e2aed92c384cbdb10d57eac8a6bb081a38b39149390e9896dce2ec8828b959f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:28:34 np0005464891 systemd[1]: libpod-conmon-2e2aed92c384cbdb10d57eac8a6bb081a38b39149390e9896dce2ec8828b959f.scope: Deactivated successfully.
Oct  1 12:28:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:34 np0005464891 podman[188129]: 2025-10-01 16:28:34.818276098 +0000 UTC m=+0.033507945 container create 401c00229fdada862093a88a2bee30510c8290d51c8b04926f5b5067df0988b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:28:34 np0005464891 systemd[1]: Started libpod-conmon-401c00229fdada862093a88a2bee30510c8290d51c8b04926f5b5067df0988b0.scope.
Oct  1 12:28:34 np0005464891 podman[188129]: 2025-10-01 16:28:34.803681271 +0000 UTC m=+0.018913138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:28:34 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:28:34 np0005464891 podman[188129]: 2025-10-01 16:28:34.957821677 +0000 UTC m=+0.173053554 container init 401c00229fdada862093a88a2bee30510c8290d51c8b04926f5b5067df0988b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  1 12:28:34 np0005464891 podman[188129]: 2025-10-01 16:28:34.965480461 +0000 UTC m=+0.180712318 container start 401c00229fdada862093a88a2bee30510c8290d51c8b04926f5b5067df0988b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_euler, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:28:34 np0005464891 vigilant_euler[188146]: 167 167
Oct  1 12:28:34 np0005464891 systemd[1]: libpod-401c00229fdada862093a88a2bee30510c8290d51c8b04926f5b5067df0988b0.scope: Deactivated successfully.
Oct  1 12:28:35 np0005464891 podman[188129]: 2025-10-01 16:28:35.038328951 +0000 UTC m=+0.253560808 container attach 401c00229fdada862093a88a2bee30510c8290d51c8b04926f5b5067df0988b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:28:35 np0005464891 podman[188129]: 2025-10-01 16:28:35.038759203 +0000 UTC m=+0.253991050 container died 401c00229fdada862093a88a2bee30510c8290d51c8b04926f5b5067df0988b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_euler, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:28:35 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ff7200e9dd5bfaa8afe2510faaf30ee68da361100d1a0952f477e90f87de74b0-merged.mount: Deactivated successfully.
Oct  1 12:28:35 np0005464891 podman[188129]: 2025-10-01 16:28:35.210607462 +0000 UTC m=+0.425839329 container remove 401c00229fdada862093a88a2bee30510c8290d51c8b04926f5b5067df0988b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_euler, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct  1 12:28:35 np0005464891 systemd[1]: libpod-conmon-401c00229fdada862093a88a2bee30510c8290d51c8b04926f5b5067df0988b0.scope: Deactivated successfully.
Oct  1 12:28:35 np0005464891 podman[188168]: 2025-10-01 16:28:35.387365919 +0000 UTC m=+0.021729907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:28:35 np0005464891 podman[188168]: 2025-10-01 16:28:35.500246275 +0000 UTC m=+0.134610133 container create fb507d406bd73ea265edca0edc19a4a071216f8106524ed24ec4716d518e8cbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kare, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 12:28:35 np0005464891 systemd[1]: Started libpod-conmon-fb507d406bd73ea265edca0edc19a4a071216f8106524ed24ec4716d518e8cbd.scope.
Oct  1 12:28:35 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:28:35 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c5d8ef52dc016e7f7e1a86b91387fc61ba12ae4edde22811214f775f4ce5fda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:28:35 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c5d8ef52dc016e7f7e1a86b91387fc61ba12ae4edde22811214f775f4ce5fda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:28:35 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c5d8ef52dc016e7f7e1a86b91387fc61ba12ae4edde22811214f775f4ce5fda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:28:35 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c5d8ef52dc016e7f7e1a86b91387fc61ba12ae4edde22811214f775f4ce5fda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:28:35 np0005464891 podman[188168]: 2025-10-01 16:28:35.610102206 +0000 UTC m=+0.244466084 container init fb507d406bd73ea265edca0edc19a4a071216f8106524ed24ec4716d518e8cbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 12:28:35 np0005464891 podman[188168]: 2025-10-01 16:28:35.621970816 +0000 UTC m=+0.256334634 container start fb507d406bd73ea265edca0edc19a4a071216f8106524ed24ec4716d518e8cbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:28:35 np0005464891 podman[188168]: 2025-10-01 16:28:35.655489561 +0000 UTC m=+0.289853469 container attach fb507d406bd73ea265edca0edc19a4a071216f8106524ed24ec4716d518e8cbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:28:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]: {
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:    "0": [
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:        {
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "devices": [
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "/dev/loop3"
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            ],
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "lv_name": "ceph_lv0",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "lv_size": "21470642176",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "name": "ceph_lv0",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "tags": {
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.cluster_name": "ceph",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.crush_device_class": "",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.encrypted": "0",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.osd_id": "0",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.type": "block",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.vdo": "0"
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            },
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "type": "block",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "vg_name": "ceph_vg0"
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:        }
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:    ],
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:    "1": [
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:        {
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "devices": [
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "/dev/loop4"
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            ],
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "lv_name": "ceph_lv1",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "lv_size": "21470642176",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "name": "ceph_lv1",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "tags": {
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.cluster_name": "ceph",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.crush_device_class": "",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.encrypted": "0",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.osd_id": "1",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.type": "block",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.vdo": "0"
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            },
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "type": "block",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "vg_name": "ceph_vg1"
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:        }
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:    ],
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:    "2": [
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:        {
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "devices": [
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "/dev/loop5"
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            ],
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "lv_name": "ceph_lv2",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "lv_size": "21470642176",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "name": "ceph_lv2",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "tags": {
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.cluster_name": "ceph",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.crush_device_class": "",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.encrypted": "0",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.osd_id": "2",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.type": "block",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:                "ceph.vdo": "0"
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            },
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "type": "block",
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:            "vg_name": "ceph_vg2"
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:        }
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]:    ]
Oct  1 12:28:36 np0005464891 affectionate_kare[188185]: }
Oct  1 12:28:36 np0005464891 systemd[1]: libpod-fb507d406bd73ea265edca0edc19a4a071216f8106524ed24ec4716d518e8cbd.scope: Deactivated successfully.
Oct  1 12:28:36 np0005464891 podman[188168]: 2025-10-01 16:28:36.489261337 +0000 UTC m=+1.123625155 container died fb507d406bd73ea265edca0edc19a4a071216f8106524ed24ec4716d518e8cbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:28:36 np0005464891 systemd[1]: var-lib-containers-storage-overlay-8c5d8ef52dc016e7f7e1a86b91387fc61ba12ae4edde22811214f775f4ce5fda-merged.mount: Deactivated successfully.
Oct  1 12:28:36 np0005464891 podman[188168]: 2025-10-01 16:28:36.735606752 +0000 UTC m=+1.369970610 container remove fb507d406bd73ea265edca0edc19a4a071216f8106524ed24ec4716d518e8cbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:28:36 np0005464891 systemd[1]: libpod-conmon-fb507d406bd73ea265edca0edc19a4a071216f8106524ed24ec4716d518e8cbd.scope: Deactivated successfully.
Oct  1 12:28:37 np0005464891 podman[188344]: 2025-10-01 16:28:37.40184164 +0000 UTC m=+0.061468994 container create d365afb347f1d4f9dd3b751dc96c74ea7ae99291e46c634f7ba76f65649b4b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_goodall, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 12:28:37 np0005464891 systemd[1]: Started libpod-conmon-d365afb347f1d4f9dd3b751dc96c74ea7ae99291e46c634f7ba76f65649b4b2f.scope.
Oct  1 12:28:37 np0005464891 podman[188344]: 2025-10-01 16:28:37.360694303 +0000 UTC m=+0.020321687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:28:37 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:28:37 np0005464891 podman[188344]: 2025-10-01 16:28:37.529066995 +0000 UTC m=+0.188694349 container init d365afb347f1d4f9dd3b751dc96c74ea7ae99291e46c634f7ba76f65649b4b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:28:37 np0005464891 podman[188344]: 2025-10-01 16:28:37.541751439 +0000 UTC m=+0.201378793 container start d365afb347f1d4f9dd3b751dc96c74ea7ae99291e46c634f7ba76f65649b4b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_goodall, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:28:37 np0005464891 frosty_goodall[188360]: 167 167
Oct  1 12:28:37 np0005464891 systemd[1]: libpod-d365afb347f1d4f9dd3b751dc96c74ea7ae99291e46c634f7ba76f65649b4b2f.scope: Deactivated successfully.
Oct  1 12:28:37 np0005464891 podman[188344]: 2025-10-01 16:28:37.562628021 +0000 UTC m=+0.222255375 container attach d365afb347f1d4f9dd3b751dc96c74ea7ae99291e46c634f7ba76f65649b4b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_goodall, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:28:37 np0005464891 podman[188344]: 2025-10-01 16:28:37.56296046 +0000 UTC m=+0.222587804 container died d365afb347f1d4f9dd3b751dc96c74ea7ae99291e46c634f7ba76f65649b4b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:28:37 np0005464891 systemd[1]: var-lib-containers-storage-overlay-5932836c07a7380de816d479df8ae56f0e3b166fa8d027e69df172f523e41d60-merged.mount: Deactivated successfully.
Oct  1 12:28:37 np0005464891 podman[188344]: 2025-10-01 16:28:37.696284176 +0000 UTC m=+0.355911530 container remove d365afb347f1d4f9dd3b751dc96c74ea7ae99291e46c634f7ba76f65649b4b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_goodall, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:28:37 np0005464891 systemd[1]: libpod-conmon-d365afb347f1d4f9dd3b751dc96c74ea7ae99291e46c634f7ba76f65649b4b2f.scope: Deactivated successfully.
Oct  1 12:28:37 np0005464891 podman[188386]: 2025-10-01 16:28:37.870378968 +0000 UTC m=+0.059074298 container create c4c6884d51c50566adf3c3af5aff85007b3211a661848534a3deb703f81808d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_greider, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:28:37 np0005464891 systemd[1]: Started libpod-conmon-c4c6884d51c50566adf3c3af5aff85007b3211a661848534a3deb703f81808d1.scope.
Oct  1 12:28:37 np0005464891 podman[188386]: 2025-10-01 16:28:37.840202086 +0000 UTC m=+0.028897486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:28:37 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:28:37 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ef91437b44e02f4ac83a1acf0540480b8fc2b53831e7764745d5355a25b2b8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:28:37 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ef91437b44e02f4ac83a1acf0540480b8fc2b53831e7764745d5355a25b2b8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:28:37 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ef91437b44e02f4ac83a1acf0540480b8fc2b53831e7764745d5355a25b2b8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:28:37 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ef91437b44e02f4ac83a1acf0540480b8fc2b53831e7764745d5355a25b2b8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:28:37 np0005464891 podman[188386]: 2025-10-01 16:28:37.997762377 +0000 UTC m=+0.186457767 container init c4c6884d51c50566adf3c3af5aff85007b3211a661848534a3deb703f81808d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_greider, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 12:28:38 np0005464891 podman[188386]: 2025-10-01 16:28:38.009066252 +0000 UTC m=+0.197761602 container start c4c6884d51c50566adf3c3af5aff85007b3211a661848534a3deb703f81808d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_greider, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 12:28:38 np0005464891 podman[188386]: 2025-10-01 16:28:38.017824617 +0000 UTC m=+0.206520057 container attach c4c6884d51c50566adf3c3af5aff85007b3211a661848534a3deb703f81808d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_greider, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 12:28:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:28:39 np0005464891 heuristic_greider[188403]: {
Oct  1 12:28:39 np0005464891 heuristic_greider[188403]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:28:39 np0005464891 heuristic_greider[188403]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:28:39 np0005464891 heuristic_greider[188403]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:28:39 np0005464891 heuristic_greider[188403]:        "osd_id": 2,
Oct  1 12:28:39 np0005464891 heuristic_greider[188403]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:28:39 np0005464891 heuristic_greider[188403]:        "type": "bluestore"
Oct  1 12:28:39 np0005464891 heuristic_greider[188403]:    },
Oct  1 12:28:39 np0005464891 heuristic_greider[188403]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:28:39 np0005464891 heuristic_greider[188403]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:28:39 np0005464891 heuristic_greider[188403]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:28:39 np0005464891 heuristic_greider[188403]:        "osd_id": 0,
Oct  1 12:28:39 np0005464891 heuristic_greider[188403]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:28:39 np0005464891 heuristic_greider[188403]:        "type": "bluestore"
Oct  1 12:28:39 np0005464891 heuristic_greider[188403]:    },
Oct  1 12:28:39 np0005464891 heuristic_greider[188403]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:28:39 np0005464891 heuristic_greider[188403]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:28:39 np0005464891 heuristic_greider[188403]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:28:39 np0005464891 heuristic_greider[188403]:        "osd_id": 1,
Oct  1 12:28:39 np0005464891 heuristic_greider[188403]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:28:39 np0005464891 heuristic_greider[188403]:        "type": "bluestore"
Oct  1 12:28:39 np0005464891 heuristic_greider[188403]:    }
Oct  1 12:28:39 np0005464891 heuristic_greider[188403]: }
Oct  1 12:28:39 np0005464891 systemd[1]: libpod-c4c6884d51c50566adf3c3af5aff85007b3211a661848534a3deb703f81808d1.scope: Deactivated successfully.
Oct  1 12:28:39 np0005464891 systemd[1]: libpod-c4c6884d51c50566adf3c3af5aff85007b3211a661848534a3deb703f81808d1.scope: Consumed 1.047s CPU time.
Oct  1 12:28:39 np0005464891 podman[188386]: 2025-10-01 16:28:39.054442807 +0000 UTC m=+1.243138157 container died c4c6884d51c50566adf3c3af5aff85007b3211a661848534a3deb703f81808d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 12:28:39 np0005464891 systemd[1]: var-lib-containers-storage-overlay-6ef91437b44e02f4ac83a1acf0540480b8fc2b53831e7764745d5355a25b2b8c-merged.mount: Deactivated successfully.
Oct  1 12:28:39 np0005464891 podman[188386]: 2025-10-01 16:28:39.154951847 +0000 UTC m=+1.343647187 container remove c4c6884d51c50566adf3c3af5aff85007b3211a661848534a3deb703f81808d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:28:39 np0005464891 systemd[1]: libpod-conmon-c4c6884d51c50566adf3c3af5aff85007b3211a661848534a3deb703f81808d1.scope: Deactivated successfully.
Oct  1 12:28:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:28:39 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:28:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:28:39 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:28:39 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev e5bb0ca7-8c4b-4108-b276-9ae2f3e2c35a does not exist
Oct  1 12:28:39 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 424e656c-9e33-40f7-839e-f97db91d7332 does not exist
Oct  1 12:28:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:28:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:28:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:28:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:28:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:28:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:28:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:28:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:28:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:43 np0005464891 kernel: SELinux:  Converting 2766 SID table entries...
Oct  1 12:28:43 np0005464891 kernel: SELinux:  policy capability network_peer_controls=1
Oct  1 12:28:43 np0005464891 kernel: SELinux:  policy capability open_perms=1
Oct  1 12:28:43 np0005464891 kernel: SELinux:  policy capability extended_socket_class=1
Oct  1 12:28:43 np0005464891 kernel: SELinux:  policy capability always_check_network=0
Oct  1 12:28:43 np0005464891 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  1 12:28:43 np0005464891 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  1 12:28:43 np0005464891 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  1 12:28:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:28:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:44 np0005464891 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct  1 12:28:44 np0005464891 podman[188507]: 2025-10-01 16:28:44.639494038 +0000 UTC m=+0.058999415 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  1 12:28:45 np0005464891 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Oct  1 12:28:45 np0005464891 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Oct  1 12:28:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:28:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:53 np0005464891 systemd[1]: Stopping OpenSSH server daemon...
Oct  1 12:28:53 np0005464891 systemd[1]: sshd.service: Deactivated successfully.
Oct  1 12:28:53 np0005464891 systemd[1]: Stopped OpenSSH server daemon.
Oct  1 12:28:53 np0005464891 systemd[1]: sshd.service: Consumed 2.736s CPU time, read 0B from disk, written 16.0K to disk.
Oct  1 12:28:53 np0005464891 systemd[1]: Stopped target sshd-keygen.target.
Oct  1 12:28:53 np0005464891 systemd[1]: Stopping sshd-keygen.target...
Oct  1 12:28:53 np0005464891 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  1 12:28:53 np0005464891 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  1 12:28:53 np0005464891 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  1 12:28:53 np0005464891 systemd[1]: Reached target sshd-keygen.target.
Oct  1 12:28:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:28:53 np0005464891 systemd[1]: Starting OpenSSH server daemon...
Oct  1 12:28:53 np0005464891 systemd[1]: Started OpenSSH server daemon.
Oct  1 12:28:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:55 np0005464891 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  1 12:28:55 np0005464891 systemd[1]: Starting man-db-cache-update.service...
Oct  1 12:28:56 np0005464891 systemd[1]: Reloading.
Oct  1 12:28:56 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:28:56 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:28:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:56 np0005464891 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  1 12:28:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:28:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:29:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:00 np0005464891 systemd[1]: Starting PackageKit Daemon...
Oct  1 12:29:01 np0005464891 systemd[1]: Started PackageKit Daemon.
Oct  1 12:29:01 np0005464891 podman[193877]: 2025-10-01 16:29:01.048794017 +0000 UTC m=+0.149874289 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  1 12:29:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:02 np0005464891 auditd[709]: Audit daemon rotating log files
Oct  1 12:29:02 np0005464891 python3.9[195633]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  1 12:29:02 np0005464891 systemd[1]: Reloading.
Oct  1 12:29:03 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:29:03 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:29:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:29:04 np0005464891 python3.9[196881]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  1 12:29:04 np0005464891 systemd[1]: Reloading.
Oct  1 12:29:04 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:29:04 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:29:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:05 np0005464891 python3.9[197932]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  1 12:29:05 np0005464891 systemd[1]: Reloading.
Oct  1 12:29:05 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:29:05 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:29:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:06 np0005464891 python3.9[198626]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  1 12:29:06 np0005464891 systemd[1]: Reloading.
Oct  1 12:29:06 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:29:06 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:29:07 np0005464891 python3.9[198824]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 12:29:07 np0005464891 systemd[1]: Reloading.
Oct  1 12:29:07 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:29:07 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:29:08 np0005464891 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  1 12:29:08 np0005464891 systemd[1]: Finished man-db-cache-update.service.
Oct  1 12:29:08 np0005464891 systemd[1]: man-db-cache-update.service: Consumed 11.544s CPU time.
Oct  1 12:29:08 np0005464891 systemd[1]: run-re91fa802c14645ad9047c5f27e04c3c2.service: Deactivated successfully.
Oct  1 12:29:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:29:09 np0005464891 python3.9[199015]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 12:29:09 np0005464891 systemd[1]: Reloading.
Oct  1 12:29:09 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:29:09 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:29:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:10 np0005464891 python3.9[199205]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 12:29:10 np0005464891 systemd[1]: Reloading.
Oct  1 12:29:10 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:29:10 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:29:11 np0005464891 python3.9[199395]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 12:29:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:29:11
Oct  1 12:29:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:29:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:29:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['volumes', 'default.rgw.control', '.mgr', 'vms', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'backups']
Oct  1 12:29:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:29:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:29:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:29:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:29:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:29:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:29:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:29:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:29:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:29:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:29:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:29:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:29:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:29:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:29:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:29:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:29:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:29:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:29:12.423 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:29:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:29:12.424 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:29:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:29:12.425 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:29:12 np0005464891 python3.9[199550]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 12:29:12 np0005464891 systemd[1]: Reloading.
Oct  1 12:29:12 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:29:12 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:29:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:29:14 np0005464891 python3.9[199740]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  1 12:29:14 np0005464891 systemd[1]: Reloading.
Oct  1 12:29:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:14 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:29:14 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:29:14 np0005464891 systemd[1]: Listening on libvirt proxy daemon socket.
Oct  1 12:29:14 np0005464891 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct  1 12:29:15 np0005464891 podman[199847]: 2025-10-01 16:29:14.999621486 +0000 UTC m=+0.105413904 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  1 12:29:15 np0005464891 python3.9[199953]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 12:29:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:16 np0005464891 python3.9[200108]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 12:29:17 np0005464891 python3.9[200263]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 12:29:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:18 np0005464891 python3.9[200418]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 12:29:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:29:19 np0005464891 python3.9[200573]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 12:29:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:20 np0005464891 python3.9[200728]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 12:29:21 np0005464891 python3.9[200883]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 12:29:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:29:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:29:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:29:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:29:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:29:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:29:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:29:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:29:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:29:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:29:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:29:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:29:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:29:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:29:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:29:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:29:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:29:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:29:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:29:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:29:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:29:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:29:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:29:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:22 np0005464891 python3.9[201038]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 12:29:23 np0005464891 python3.9[201193]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 12:29:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:29:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:24 np0005464891 python3.9[201348]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 12:29:25 np0005464891 python3.9[201503]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 12:29:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:26 np0005464891 python3.9[201658]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 12:29:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:28 np0005464891 python3.9[201813]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 12:29:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:29:29 np0005464891 python3.9[201968]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 12:29:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:31 np0005464891 python3.9[202123]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:29:31 np0005464891 podman[202247]: 2025-10-01 16:29:31.667632481 +0000 UTC m=+0.148736550 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  1 12:29:31 np0005464891 python3.9[202294]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:29:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:32 np0005464891 python3.9[202453]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:29:33 np0005464891 python3.9[202605]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:29:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:29:34 np0005464891 python3.9[202757]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:29:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:34 np0005464891 python3.9[202909]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:29:35 np0005464891 python3.9[203061]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:29:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:36 np0005464891 python3.9[203186]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759336175.1055007-554-198828559818596/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:29:37 np0005464891 python3.9[203338]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:29:38 np0005464891 python3.9[203463]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759336176.8143141-554-27733793252483/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:29:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:38 np0005464891 python3.9[203615]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:29:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:29:39 np0005464891 python3.9[203740]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759336178.162607-554-211225495151003/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:29:39 np0005464891 python3.9[203992]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:29:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:29:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:29:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:29:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:29:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:29:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:29:40 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 293d0476-4ccb-467b-972a-b090a077656e does not exist
Oct  1 12:29:40 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev b39b68e9-60c3-4d0e-97a1-47abaf46fd59 does not exist
Oct  1 12:29:40 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 9c0bc60f-1d7b-4b44-8c74-29bd178ef915 does not exist
Oct  1 12:29:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:29:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:29:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:29:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:29:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:29:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:29:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:40 np0005464891 python3.9[204197]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759336179.4063299-554-167047547126050/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:29:40 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:29:40 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:29:40 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:29:40 np0005464891 podman[204342]: 2025-10-01 16:29:40.646911358 +0000 UTC m=+0.058104187 container create dacb1ee4b3112a0394f915b2a2f9a05d5953edddeb12b76d7cf9ccbe3c857f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:29:40 np0005464891 systemd[1]: Started libpod-conmon-dacb1ee4b3112a0394f915b2a2f9a05d5953edddeb12b76d7cf9ccbe3c857f76.scope.
Oct  1 12:29:40 np0005464891 podman[204342]: 2025-10-01 16:29:40.619975823 +0000 UTC m=+0.031168722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:29:40 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:29:40 np0005464891 podman[204342]: 2025-10-01 16:29:40.751963151 +0000 UTC m=+0.163155990 container init dacb1ee4b3112a0394f915b2a2f9a05d5953edddeb12b76d7cf9ccbe3c857f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:29:40 np0005464891 podman[204342]: 2025-10-01 16:29:40.761635278 +0000 UTC m=+0.172828077 container start dacb1ee4b3112a0394f915b2a2f9a05d5953edddeb12b76d7cf9ccbe3c857f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pascal, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:29:40 np0005464891 podman[204342]: 2025-10-01 16:29:40.765974108 +0000 UTC m=+0.177166937 container attach dacb1ee4b3112a0394f915b2a2f9a05d5953edddeb12b76d7cf9ccbe3c857f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pascal, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 12:29:40 np0005464891 nostalgic_pascal[204400]: 167 167
Oct  1 12:29:40 np0005464891 systemd[1]: libpod-dacb1ee4b3112a0394f915b2a2f9a05d5953edddeb12b76d7cf9ccbe3c857f76.scope: Deactivated successfully.
Oct  1 12:29:40 np0005464891 podman[204342]: 2025-10-01 16:29:40.767397707 +0000 UTC m=+0.178590506 container died dacb1ee4b3112a0394f915b2a2f9a05d5953edddeb12b76d7cf9ccbe3c857f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:29:40 np0005464891 systemd[1]: var-lib-containers-storage-overlay-c514ff5544057d4c313caf7ebd80795d0633425e763d38df3c3c22b5f7fa46f3-merged.mount: Deactivated successfully.
Oct  1 12:29:40 np0005464891 podman[204342]: 2025-10-01 16:29:40.931235374 +0000 UTC m=+0.342428213 container remove dacb1ee4b3112a0394f915b2a2f9a05d5953edddeb12b76d7cf9ccbe3c857f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pascal, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 12:29:40 np0005464891 systemd[1]: libpod-conmon-dacb1ee4b3112a0394f915b2a2f9a05d5953edddeb12b76d7cf9ccbe3c857f76.scope: Deactivated successfully.
Oct  1 12:29:41 np0005464891 python3.9[204473]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:29:41 np0005464891 podman[204481]: 2025-10-01 16:29:41.149023721 +0000 UTC m=+0.052604084 container create 1b76315edde783f19c33b71f815a823a0e0e906ae14673adebe68ef82b55aaba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  1 12:29:41 np0005464891 systemd[1]: Started libpod-conmon-1b76315edde783f19c33b71f815a823a0e0e906ae14673adebe68ef82b55aaba.scope.
Oct  1 12:29:41 np0005464891 podman[204481]: 2025-10-01 16:29:41.118812256 +0000 UTC m=+0.022392649 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:29:41 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:29:41 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6376c4cfda9f35425eec7349ebc405cdd308e5bbb9873df4f0b7cb1a6facee8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:29:41 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6376c4cfda9f35425eec7349ebc405cdd308e5bbb9873df4f0b7cb1a6facee8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:29:41 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6376c4cfda9f35425eec7349ebc405cdd308e5bbb9873df4f0b7cb1a6facee8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:29:41 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6376c4cfda9f35425eec7349ebc405cdd308e5bbb9873df4f0b7cb1a6facee8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:29:41 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6376c4cfda9f35425eec7349ebc405cdd308e5bbb9873df4f0b7cb1a6facee8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:29:41 np0005464891 podman[204481]: 2025-10-01 16:29:41.243853432 +0000 UTC m=+0.147433815 container init 1b76315edde783f19c33b71f815a823a0e0e906ae14673adebe68ef82b55aaba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:29:41 np0005464891 podman[204481]: 2025-10-01 16:29:41.255913344 +0000 UTC m=+0.159493707 container start 1b76315edde783f19c33b71f815a823a0e0e906ae14673adebe68ef82b55aaba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_almeida, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  1 12:29:41 np0005464891 podman[204481]: 2025-10-01 16:29:41.265269704 +0000 UTC m=+0.168850067 container attach 1b76315edde783f19c33b71f815a823a0e0e906ae14673adebe68ef82b55aaba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:29:41 np0005464891 python3.9[204627]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759336180.555609-554-189412173340463/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:29:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:29:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:29:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:29:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:29:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:29:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:29:42 np0005464891 wizardly_almeida[204523]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:29:42 np0005464891 wizardly_almeida[204523]: --> relative data size: 1.0
Oct  1 12:29:42 np0005464891 wizardly_almeida[204523]: --> All data devices are unavailable
Oct  1 12:29:42 np0005464891 systemd[1]: libpod-1b76315edde783f19c33b71f815a823a0e0e906ae14673adebe68ef82b55aaba.scope: Deactivated successfully.
Oct  1 12:29:42 np0005464891 podman[204481]: 2025-10-01 16:29:42.257721816 +0000 UTC m=+1.161302179 container died 1b76315edde783f19c33b71f815a823a0e0e906ae14673adebe68ef82b55aaba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:29:42 np0005464891 systemd[1]: var-lib-containers-storage-overlay-c6376c4cfda9f35425eec7349ebc405cdd308e5bbb9873df4f0b7cb1a6facee8-merged.mount: Deactivated successfully.
Oct  1 12:29:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:42 np0005464891 podman[204481]: 2025-10-01 16:29:42.332207074 +0000 UTC m=+1.235787437 container remove 1b76315edde783f19c33b71f815a823a0e0e906ae14673adebe68ef82b55aaba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_almeida, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:29:42 np0005464891 systemd[1]: libpod-conmon-1b76315edde783f19c33b71f815a823a0e0e906ae14673adebe68ef82b55aaba.scope: Deactivated successfully.
Oct  1 12:29:42 np0005464891 python3.9[204803]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:29:43 np0005464891 podman[205082]: 2025-10-01 16:29:43.05987217 +0000 UTC m=+0.056299027 container create e1f4bf1094bc78194ee033333084ac71eaa6a78a3a9dbd419230b2b65907e9a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:29:43 np0005464891 python3.9[205068]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759336181.850701-554-5150645338441/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:29:43 np0005464891 systemd[1]: Started libpod-conmon-e1f4bf1094bc78194ee033333084ac71eaa6a78a3a9dbd419230b2b65907e9a3.scope.
Oct  1 12:29:43 np0005464891 podman[205082]: 2025-10-01 16:29:43.028262907 +0000 UTC m=+0.024689824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:29:43 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:29:43 np0005464891 podman[205082]: 2025-10-01 16:29:43.203859759 +0000 UTC m=+0.200286666 container init e1f4bf1094bc78194ee033333084ac71eaa6a78a3a9dbd419230b2b65907e9a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 12:29:43 np0005464891 podman[205082]: 2025-10-01 16:29:43.218297118 +0000 UTC m=+0.214723965 container start e1f4bf1094bc78194ee033333084ac71eaa6a78a3a9dbd419230b2b65907e9a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 12:29:43 np0005464891 xenodochial_kapitsa[205098]: 167 167
Oct  1 12:29:43 np0005464891 systemd[1]: libpod-e1f4bf1094bc78194ee033333084ac71eaa6a78a3a9dbd419230b2b65907e9a3.scope: Deactivated successfully.
Oct  1 12:29:43 np0005464891 podman[205082]: 2025-10-01 16:29:43.228380267 +0000 UTC m=+0.224807174 container attach e1f4bf1094bc78194ee033333084ac71eaa6a78a3a9dbd419230b2b65907e9a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:29:43 np0005464891 podman[205082]: 2025-10-01 16:29:43.22960109 +0000 UTC m=+0.226027947 container died e1f4bf1094bc78194ee033333084ac71eaa6a78a3a9dbd419230b2b65907e9a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 12:29:43 np0005464891 systemd[1]: var-lib-containers-storage-overlay-4bd45641622c9d45076e7d3ce36deeff4b76837eb442161a202ca19618d80a6f-merged.mount: Deactivated successfully.
Oct  1 12:29:43 np0005464891 podman[205082]: 2025-10-01 16:29:43.300399946 +0000 UTC m=+0.296826803 container remove e1f4bf1094bc78194ee033333084ac71eaa6a78a3a9dbd419230b2b65907e9a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 12:29:43 np0005464891 systemd[1]: libpod-conmon-e1f4bf1094bc78194ee033333084ac71eaa6a78a3a9dbd419230b2b65907e9a3.scope: Deactivated successfully.
Oct  1 12:29:43 np0005464891 podman[205212]: 2025-10-01 16:29:43.546404533 +0000 UTC m=+0.106854553 container create 4a9091257563d6ad83ef628b1835cf04193e8b8eca01f786e8dc96d3a7fc851c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:29:43 np0005464891 podman[205212]: 2025-10-01 16:29:43.47605264 +0000 UTC m=+0.036502710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:29:43 np0005464891 systemd[1]: Started libpod-conmon-4a9091257563d6ad83ef628b1835cf04193e8b8eca01f786e8dc96d3a7fc851c.scope.
Oct  1 12:29:43 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:29:43 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b659359fe58bdceaaabf314a8824225310328a9af99019db342e3f14f28147ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:29:43 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b659359fe58bdceaaabf314a8824225310328a9af99019db342e3f14f28147ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:29:43 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b659359fe58bdceaaabf314a8824225310328a9af99019db342e3f14f28147ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:29:43 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b659359fe58bdceaaabf314a8824225310328a9af99019db342e3f14f28147ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:29:43 np0005464891 podman[205212]: 2025-10-01 16:29:43.720296858 +0000 UTC m=+0.280746948 container init 4a9091257563d6ad83ef628b1835cf04193e8b8eca01f786e8dc96d3a7fc851c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  1 12:29:43 np0005464891 podman[205212]: 2025-10-01 16:29:43.729763419 +0000 UTC m=+0.290213429 container start 4a9091257563d6ad83ef628b1835cf04193e8b8eca01f786e8dc96d3a7fc851c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hertz, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:29:43 np0005464891 podman[205212]: 2025-10-01 16:29:43.769973031 +0000 UTC m=+0.330423061 container attach 4a9091257563d6ad83ef628b1835cf04193e8b8eca01f786e8dc96d3a7fc851c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hertz, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 12:29:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:29:43 np0005464891 python3.9[205294]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:29:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:44 np0005464891 nice_hertz[205289]: {
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:    "0": [
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:        {
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "devices": [
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "/dev/loop3"
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            ],
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "lv_name": "ceph_lv0",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "lv_size": "21470642176",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "name": "ceph_lv0",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "tags": {
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.cluster_name": "ceph",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.crush_device_class": "",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.encrypted": "0",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.osd_id": "0",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.type": "block",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.vdo": "0"
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            },
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "type": "block",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "vg_name": "ceph_vg0"
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:        }
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:    ],
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:    "1": [
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:        {
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "devices": [
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "/dev/loop4"
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            ],
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "lv_name": "ceph_lv1",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "lv_size": "21470642176",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "name": "ceph_lv1",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "tags": {
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.cluster_name": "ceph",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.crush_device_class": "",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.encrypted": "0",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.osd_id": "1",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.type": "block",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.vdo": "0"
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            },
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "type": "block",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "vg_name": "ceph_vg1"
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:        }
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:    ],
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:    "2": [
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:        {
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "devices": [
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "/dev/loop5"
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            ],
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "lv_name": "ceph_lv2",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "lv_size": "21470642176",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "name": "ceph_lv2",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "tags": {
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.cluster_name": "ceph",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.crush_device_class": "",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.encrypted": "0",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.osd_id": "2",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.type": "block",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:                "ceph.vdo": "0"
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            },
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "type": "block",
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:            "vg_name": "ceph_vg2"
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:        }
Oct  1 12:29:44 np0005464891 nice_hertz[205289]:    ]
Oct  1 12:29:44 np0005464891 nice_hertz[205289]: }
Oct  1 12:29:44 np0005464891 systemd[1]: libpod-4a9091257563d6ad83ef628b1835cf04193e8b8eca01f786e8dc96d3a7fc851c.scope: Deactivated successfully.
Oct  1 12:29:44 np0005464891 podman[205424]: 2025-10-01 16:29:44.566926671 +0000 UTC m=+0.028579920 container died 4a9091257563d6ad83ef628b1835cf04193e8b8eca01f786e8dc96d3a7fc851c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hertz, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 12:29:44 np0005464891 python3.9[205419]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759336183.2998142-554-4078463852056/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:29:44 np0005464891 systemd[1]: var-lib-containers-storage-overlay-b659359fe58bdceaaabf314a8824225310328a9af99019db342e3f14f28147ca-merged.mount: Deactivated successfully.
Oct  1 12:29:44 np0005464891 podman[205424]: 2025-10-01 16:29:44.641448071 +0000 UTC m=+0.103101300 container remove 4a9091257563d6ad83ef628b1835cf04193e8b8eca01f786e8dc96d3a7fc851c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hertz, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:29:44 np0005464891 systemd[1]: libpod-conmon-4a9091257563d6ad83ef628b1835cf04193e8b8eca01f786e8dc96d3a7fc851c.scope: Deactivated successfully.
Oct  1 12:29:45 np0005464891 podman[205661]: 2025-10-01 16:29:45.16947364 +0000 UTC m=+0.096807005 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct  1 12:29:45 np0005464891 python3.9[205708]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:29:45 np0005464891 podman[205754]: 2025-10-01 16:29:45.519272576 +0000 UTC m=+0.071945939 container create 0710a47bee03a002eda099dd082d775ca121580209a6a193ec6918c5efbb327b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_taussig, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  1 12:29:45 np0005464891 systemd[1]: Started libpod-conmon-0710a47bee03a002eda099dd082d775ca121580209a6a193ec6918c5efbb327b.scope.
Oct  1 12:29:45 np0005464891 podman[205754]: 2025-10-01 16:29:45.481726689 +0000 UTC m=+0.034400112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:29:45 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:29:45 np0005464891 podman[205754]: 2025-10-01 16:29:45.630248782 +0000 UTC m=+0.182922155 container init 0710a47bee03a002eda099dd082d775ca121580209a6a193ec6918c5efbb327b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_taussig, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:29:45 np0005464891 podman[205754]: 2025-10-01 16:29:45.641262297 +0000 UTC m=+0.193935630 container start 0710a47bee03a002eda099dd082d775ca121580209a6a193ec6918c5efbb327b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 12:29:45 np0005464891 podman[205754]: 2025-10-01 16:29:45.646516662 +0000 UTC m=+0.199190055 container attach 0710a47bee03a002eda099dd082d775ca121580209a6a193ec6918c5efbb327b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  1 12:29:45 np0005464891 clever_taussig[205817]: 167 167
Oct  1 12:29:45 np0005464891 podman[205754]: 2025-10-01 16:29:45.6482634 +0000 UTC m=+0.200936773 container died 0710a47bee03a002eda099dd082d775ca121580209a6a193ec6918c5efbb327b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_taussig, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 12:29:45 np0005464891 systemd[1]: libpod-0710a47bee03a002eda099dd082d775ca121580209a6a193ec6918c5efbb327b.scope: Deactivated successfully.
Oct  1 12:29:45 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ba87722b8a5a1b57b7f095f97b72be60fbb2fffe6bbcf40f91f0e119490f6ac5-merged.mount: Deactivated successfully.
Oct  1 12:29:45 np0005464891 podman[205754]: 2025-10-01 16:29:45.699380893 +0000 UTC m=+0.252054226 container remove 0710a47bee03a002eda099dd082d775ca121580209a6a193ec6918c5efbb327b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:29:45 np0005464891 systemd[1]: libpod-conmon-0710a47bee03a002eda099dd082d775ca121580209a6a193ec6918c5efbb327b.scope: Deactivated successfully.
Oct  1 12:29:45 np0005464891 podman[205917]: 2025-10-01 16:29:45.936156575 +0000 UTC m=+0.060996037 container create 477332fee270c9108455be4d8cb215adeeca6e5b7fec357bc8a5e52cc3989f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:29:45 np0005464891 systemd[1]: Started libpod-conmon-477332fee270c9108455be4d8cb215adeeca6e5b7fec357bc8a5e52cc3989f20.scope.
Oct  1 12:29:46 np0005464891 podman[205917]: 2025-10-01 16:29:45.916395079 +0000 UTC m=+0.041234541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:29:46 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:29:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3cee5073131d98b5845b56cb279e0a94a72b155f5a0c36c4f94cd290c05b0ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:29:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3cee5073131d98b5845b56cb279e0a94a72b155f5a0c36c4f94cd290c05b0ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:29:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3cee5073131d98b5845b56cb279e0a94a72b155f5a0c36c4f94cd290c05b0ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:29:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3cee5073131d98b5845b56cb279e0a94a72b155f5a0c36c4f94cd290c05b0ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:29:46 np0005464891 podman[205917]: 2025-10-01 16:29:46.048230501 +0000 UTC m=+0.173070033 container init 477332fee270c9108455be4d8cb215adeeca6e5b7fec357bc8a5e52cc3989f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_banach, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:29:46 np0005464891 podman[205917]: 2025-10-01 16:29:46.056387037 +0000 UTC m=+0.181226469 container start 477332fee270c9108455be4d8cb215adeeca6e5b7fec357bc8a5e52cc3989f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 12:29:46 np0005464891 podman[205917]: 2025-10-01 16:29:46.076523403 +0000 UTC m=+0.201362915 container attach 477332fee270c9108455be4d8cb215adeeca6e5b7fec357bc8a5e52cc3989f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_banach, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:29:46 np0005464891 python3.9[205915]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759336184.787887-554-245736651665880/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:29:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:46 np0005464891 python3.9[206090]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct  1 12:29:46 np0005464891 dreamy_banach[205934]: {
Oct  1 12:29:46 np0005464891 dreamy_banach[205934]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:29:46 np0005464891 dreamy_banach[205934]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:29:46 np0005464891 dreamy_banach[205934]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:29:46 np0005464891 dreamy_banach[205934]:        "osd_id": 2,
Oct  1 12:29:46 np0005464891 dreamy_banach[205934]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:29:46 np0005464891 dreamy_banach[205934]:        "type": "bluestore"
Oct  1 12:29:46 np0005464891 dreamy_banach[205934]:    },
Oct  1 12:29:46 np0005464891 dreamy_banach[205934]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:29:46 np0005464891 dreamy_banach[205934]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:29:46 np0005464891 dreamy_banach[205934]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:29:46 np0005464891 dreamy_banach[205934]:        "osd_id": 0,
Oct  1 12:29:46 np0005464891 dreamy_banach[205934]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:29:46 np0005464891 dreamy_banach[205934]:        "type": "bluestore"
Oct  1 12:29:46 np0005464891 dreamy_banach[205934]:    },
Oct  1 12:29:46 np0005464891 dreamy_banach[205934]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:29:46 np0005464891 dreamy_banach[205934]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:29:46 np0005464891 dreamy_banach[205934]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:29:46 np0005464891 dreamy_banach[205934]:        "osd_id": 1,
Oct  1 12:29:46 np0005464891 dreamy_banach[205934]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:29:46 np0005464891 dreamy_banach[205934]:        "type": "bluestore"
Oct  1 12:29:46 np0005464891 dreamy_banach[205934]:    }
Oct  1 12:29:46 np0005464891 dreamy_banach[205934]: }
Oct  1 12:29:47 np0005464891 systemd[1]: libpod-477332fee270c9108455be4d8cb215adeeca6e5b7fec357bc8a5e52cc3989f20.scope: Deactivated successfully.
Oct  1 12:29:47 np0005464891 podman[205917]: 2025-10-01 16:29:47.020680021 +0000 UTC m=+1.145519483 container died 477332fee270c9108455be4d8cb215adeeca6e5b7fec357bc8a5e52cc3989f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_banach, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:29:47 np0005464891 systemd[1]: var-lib-containers-storage-overlay-f3cee5073131d98b5845b56cb279e0a94a72b155f5a0c36c4f94cd290c05b0ae-merged.mount: Deactivated successfully.
Oct  1 12:29:47 np0005464891 podman[205917]: 2025-10-01 16:29:47.084917686 +0000 UTC m=+1.209757118 container remove 477332fee270c9108455be4d8cb215adeeca6e5b7fec357bc8a5e52cc3989f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_banach, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:29:47 np0005464891 systemd[1]: libpod-conmon-477332fee270c9108455be4d8cb215adeeca6e5b7fec357bc8a5e52cc3989f20.scope: Deactivated successfully.
Oct  1 12:29:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:29:47 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:29:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:29:47 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:29:47 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev f10e6cb5-ee45-42ef-a8bd-6613e363fcef does not exist
Oct  1 12:29:47 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev ca645f14-3e55-45bf-bc64-847a76ef2336 does not exist
Oct  1 12:29:47 np0005464891 python3.9[206333]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:29:47 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:29:47 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:29:48 np0005464891 python3.9[206485]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:29:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:29:48 np0005464891 python3.9[206637]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:29:49 np0005464891 python3.9[206789]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:29:50 np0005464891 python3.9[206941]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:29:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:51 np0005464891 python3.9[207093]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:29:51 np0005464891 python3.9[207245]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:29:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:52 np0005464891 python3.9[207397]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:29:53 np0005464891 python3.9[207551]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:29:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:29:53 np0005464891 python3.9[207703]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:29:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:54 np0005464891 python3.9[207856]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:29:55 np0005464891 python3.9[208008]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:29:56 np0005464891 python3.9[208160]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:29:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:56 np0005464891 python3.9[208313]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:29:57 np0005464891 python3.9[208465]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:29:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:29:58 np0005464891 python3.9[208588]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336197.0275867-775-116514199347430/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:29:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:29:59 np0005464891 python3.9[208740]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:29:59 np0005464891 python3.9[208864]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336198.6736946-775-103858441331230/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:00 np0005464891 python3.9[209016]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:30:01 np0005464891 python3.9[209139]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336200.152597-775-192816017040135/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:01 np0005464891 podman[209263]: 2025-10-01 16:30:01.96618952 +0000 UTC m=+0.104456317 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Oct  1 12:30:02 np0005464891 python3.9[209301]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:30:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:02 np0005464891 python3.9[209439]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336201.5624456-775-14862848772422/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:03 np0005464891 python3.9[209592]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:30:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:30:04 np0005464891 python3.9[209715]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336203.0367267-775-188030227609314/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:04 np0005464891 python3.9[209867]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:30:05 np0005464891 python3.9[209990]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336204.2262547-775-56199084411158/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:06 np0005464891 python3.9[210143]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:30:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:06 np0005464891 python3.9[210266]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336205.4792643-775-139576841568229/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:07 np0005464891 python3.9[210418]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:30:08 np0005464891 python3.9[210541]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336206.8943484-775-274636828914661/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:08 np0005464891 python3.9[210694]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:30:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:30:09 np0005464891 python3.9[210817]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336208.2096553-775-86618816901735/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:10 np0005464891 python3.9[210969]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:30:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:10 np0005464891 python3.9[211092]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336209.604347-775-236563991391985/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:11 np0005464891 python3.9[211246]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:30:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:30:11
Oct  1 12:30:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:30:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:30:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'default.rgw.meta', 'vms', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data']
Oct  1 12:30:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:30:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:30:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:30:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:30:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:30:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:30:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:30:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:30:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:30:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:30:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:30:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:30:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:30:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:30:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:30:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:30:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:30:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:12 np0005464891 python3.9[211370]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336211.128059-775-232833059993570/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:30:12.425 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:30:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:30:12.427 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:30:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:30:12.427 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:30:13 np0005464891 python3.9[211522]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:30:13 np0005464891 python3.9[211645]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336212.5764058-775-264291190792739/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:30:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:14 np0005464891 python3.9[211797]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:30:15 np0005464891 python3.9[211921]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336214.01264-775-8321410865643/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:15 np0005464891 podman[212045]: 2025-10-01 16:30:15.760867751 +0000 UTC m=+0.080784803 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:30:15 np0005464891 python3.9[212086]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:30:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:16 np0005464891 python3.9[212216]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336215.3835874-775-163316385478745/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:17 np0005464891 python3.9[212366]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:30:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:18 np0005464891 python3.9[212521]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct  1 12:30:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:30:20 np0005464891 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct  1 12:30:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:20 np0005464891 python3.9[212678]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:21 np0005464891 python3.9[212830]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:30:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:30:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:30:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:30:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:30:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:30:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:30:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:30:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:30:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:30:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:30:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:30:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:30:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:30:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:30:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:30:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:30:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:30:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:30:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:30:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:30:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:30:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:30:21 np0005464891 python3.9[212982]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:22 np0005464891 python3.9[213134]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:23 np0005464891 python3.9[213286]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:30:24 np0005464891 python3.9[213438]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:24 np0005464891 python3.9[213590]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:25 np0005464891 python3.9[213742]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:26 np0005464891 python3.9[213894]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:27 np0005464891 python3.9[214046]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:28 np0005464891 python3.9[214198]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 12:30:28 np0005464891 systemd[1]: Reloading.
Oct  1 12:30:28 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:30:28 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:30:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:28 np0005464891 systemd[1]: Starting libvirt logging daemon socket...
Oct  1 12:30:28 np0005464891 systemd[1]: Listening on libvirt logging daemon socket.
Oct  1 12:30:28 np0005464891 systemd[1]: Starting libvirt logging daemon admin socket...
Oct  1 12:30:28 np0005464891 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct  1 12:30:28 np0005464891 systemd[1]: Starting libvirt logging daemon...
Oct  1 12:30:28 np0005464891 systemd[1]: Started libvirt logging daemon.
Oct  1 12:30:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:30:29 np0005464891 python3.9[214391]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 12:30:29 np0005464891 systemd[1]: Reloading.
Oct  1 12:30:29 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:30:29 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:30:30 np0005464891 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct  1 12:30:30 np0005464891 systemd[1]: Starting libvirt nodedev daemon socket...
Oct  1 12:30:30 np0005464891 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct  1 12:30:30 np0005464891 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct  1 12:30:30 np0005464891 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct  1 12:30:30 np0005464891 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct  1 12:30:30 np0005464891 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct  1 12:30:30 np0005464891 systemd[1]: Starting libvirt nodedev daemon...
Oct  1 12:30:30 np0005464891 systemd[1]: Started libvirt nodedev daemon.
Oct  1 12:30:30 np0005464891 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct  1 12:30:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:30 np0005464891 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct  1 12:30:30 np0005464891 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct  1 12:30:30 np0005464891 python3.9[214613]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 12:30:30 np0005464891 systemd[1]: Reloading.
Oct  1 12:30:31 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:30:31 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:30:31 np0005464891 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct  1 12:30:31 np0005464891 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct  1 12:30:31 np0005464891 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct  1 12:30:31 np0005464891 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct  1 12:30:31 np0005464891 systemd[1]: Starting libvirt proxy daemon...
Oct  1 12:30:31 np0005464891 systemd[1]: Started libvirt proxy daemon.
Oct  1 12:30:31 np0005464891 setroubleshoot[214427]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 2768747e-8b54-4b8f-bf53-67734de9be2c
Oct  1 12:30:31 np0005464891 setroubleshoot[214427]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct  1 12:30:31 np0005464891 setroubleshoot[214427]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 2768747e-8b54-4b8f-bf53-67734de9be2c
Oct  1 12:30:31 np0005464891 setroubleshoot[214427]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct  1 12:30:32 np0005464891 python3.9[214825]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 12:30:32 np0005464891 systemd[1]: Reloading.
Oct  1 12:30:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:32 np0005464891 podman[214827]: 2025-10-01 16:30:32.42619346 +0000 UTC m=+0.099106129 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  1 12:30:32 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:30:32 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:30:32 np0005464891 systemd[1]: Listening on libvirt locking daemon socket.
Oct  1 12:30:32 np0005464891 systemd[1]: Starting libvirt QEMU daemon socket...
Oct  1 12:30:32 np0005464891 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct  1 12:30:32 np0005464891 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct  1 12:30:32 np0005464891 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct  1 12:30:32 np0005464891 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct  1 12:30:32 np0005464891 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct  1 12:30:32 np0005464891 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct  1 12:30:32 np0005464891 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct  1 12:30:32 np0005464891 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct  1 12:30:32 np0005464891 systemd[1]: Starting libvirt QEMU daemon...
Oct  1 12:30:32 np0005464891 systemd[1]: Started libvirt QEMU daemon.
Oct  1 12:30:33 np0005464891 python3.9[215064]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 12:30:33 np0005464891 systemd[1]: Reloading.
Oct  1 12:30:33 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:30:33 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:30:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:30:34 np0005464891 systemd[1]: Starting libvirt secret daemon socket...
Oct  1 12:30:34 np0005464891 systemd[1]: Listening on libvirt secret daemon socket.
Oct  1 12:30:34 np0005464891 systemd[1]: Starting libvirt secret daemon admin socket...
Oct  1 12:30:34 np0005464891 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct  1 12:30:34 np0005464891 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct  1 12:30:34 np0005464891 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct  1 12:30:34 np0005464891 systemd[1]: Starting libvirt secret daemon...
Oct  1 12:30:34 np0005464891 systemd[1]: Started libvirt secret daemon.
Oct  1 12:30:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:34 np0005464891 python3.9[215275]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:35 np0005464891 python3.9[215428]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  1 12:30:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:36 np0005464891 python3.9[215580]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:30:37 np0005464891 python3.9[215734]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  1 12:30:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:38 np0005464891 python3.9[215884]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:30:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:30:38 np0005464891 python3.9[216005]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759336237.8606074-1133-233185547411447/.source.xml follow=False _original_basename=secret.xml.j2 checksum=0b019fb0c9e2c33f33676a0639e386e44c7e8a1e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:39 np0005464891 python3.9[216157]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:30:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:40 np0005464891 python3.9[216319]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:41 np0005464891 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct  1 12:30:41 np0005464891 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.015s CPU time.
Oct  1 12:30:41 np0005464891 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct  1 12:30:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:30:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:30:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:30:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:30:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:30:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:30:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:42 np0005464891 python3.9[216782]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:43 np0005464891 python3.9[216934]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:30:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:30:44 np0005464891 python3.9[217057]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759336243.0719357-1188-164100037651603/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:44 np0005464891 python3.9[217209]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:45 np0005464891 python3.9[217361]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:30:45 np0005464891 podman[217411]: 2025-10-01 16:30:45.988365706 +0000 UTC m=+0.086708967 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  1 12:30:46 np0005464891 python3.9[217456]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:47 np0005464891 python3.9[217610]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:30:47 np0005464891 python3.9[217736]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.fdqdlfhl recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:30:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:30:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:30:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:30:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:30:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:30:48 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 1b6abfa9-0159-4190-8bf0-96b394b2ccea does not exist
Oct  1 12:30:48 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev e7299a88-b18f-4dac-97f5-382044a79937 does not exist
Oct  1 12:30:48 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 4869fde3-8218-4ab3-bede-08d5a24b13a8 does not exist
Oct  1 12:30:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:30:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:30:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:30:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:30:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:30:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:30:48 np0005464891 python3.9[217970]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:30:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:48 np0005464891 podman[218189]: 2025-10-01 16:30:48.795781848 +0000 UTC m=+0.062709125 container create 7ff5c5ad3ece3ea27d3b1d8a25b4d585f49c493e8b31284d4594c1a4d48c86c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 12:30:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:30:48 np0005464891 systemd[1]: Started libpod-conmon-7ff5c5ad3ece3ea27d3b1d8a25b4d585f49c493e8b31284d4594c1a4d48c86c7.scope.
Oct  1 12:30:48 np0005464891 podman[218189]: 2025-10-01 16:30:48.758655582 +0000 UTC m=+0.025582919 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:30:48 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:30:48 np0005464891 python3.9[218185]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:48 np0005464891 podman[218189]: 2025-10-01 16:30:48.905404597 +0000 UTC m=+0.172331884 container init 7ff5c5ad3ece3ea27d3b1d8a25b4d585f49c493e8b31284d4594c1a4d48c86c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noyce, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 12:30:48 np0005464891 podman[218189]: 2025-10-01 16:30:48.915110214 +0000 UTC m=+0.182037511 container start 7ff5c5ad3ece3ea27d3b1d8a25b4d585f49c493e8b31284d4594c1a4d48c86c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 12:30:48 np0005464891 systemd[1]: libpod-7ff5c5ad3ece3ea27d3b1d8a25b4d585f49c493e8b31284d4594c1a4d48c86c7.scope: Deactivated successfully.
Oct  1 12:30:48 np0005464891 naughty_noyce[218205]: 167 167
Oct  1 12:30:48 np0005464891 podman[218189]: 2025-10-01 16:30:48.927061515 +0000 UTC m=+0.193988802 container attach 7ff5c5ad3ece3ea27d3b1d8a25b4d585f49c493e8b31284d4594c1a4d48c86c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noyce, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:30:48 np0005464891 podman[218189]: 2025-10-01 16:30:48.928592877 +0000 UTC m=+0.195520134 container died 7ff5c5ad3ece3ea27d3b1d8a25b4d585f49c493e8b31284d4594c1a4d48c86c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noyce, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 12:30:48 np0005464891 systemd[1]: var-lib-containers-storage-overlay-941f287c30d8fc93f034cd94e0429fbda0346d3b2a284be30d14238f9a7f9ead-merged.mount: Deactivated successfully.
Oct  1 12:30:49 np0005464891 podman[218189]: 2025-10-01 16:30:49.035081809 +0000 UTC m=+0.302009096 container remove 7ff5c5ad3ece3ea27d3b1d8a25b4d585f49c493e8b31284d4594c1a4d48c86c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  1 12:30:49 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:30:49 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:30:49 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:30:49 np0005464891 systemd[1]: libpod-conmon-7ff5c5ad3ece3ea27d3b1d8a25b4d585f49c493e8b31284d4594c1a4d48c86c7.scope: Deactivated successfully.
Oct  1 12:30:49 np0005464891 podman[218286]: 2025-10-01 16:30:49.191722458 +0000 UTC m=+0.030415452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:30:49 np0005464891 podman[218286]: 2025-10-01 16:30:49.31812697 +0000 UTC m=+0.156819934 container create d262841eefbb42b0523c3170256c62eed891b7ad2618b08d493f9973e00f5c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_clarke, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 12:30:49 np0005464891 systemd[1]: Started libpod-conmon-d262841eefbb42b0523c3170256c62eed891b7ad2618b08d493f9973e00f5c5c.scope.
Oct  1 12:30:49 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:30:49 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d03eb5a9a0615d6b194a1fb8fdd3e3a2372dd6be018756b6a2d82d4bd1de3dea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:30:49 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d03eb5a9a0615d6b194a1fb8fdd3e3a2372dd6be018756b6a2d82d4bd1de3dea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:30:49 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d03eb5a9a0615d6b194a1fb8fdd3e3a2372dd6be018756b6a2d82d4bd1de3dea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:30:49 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d03eb5a9a0615d6b194a1fb8fdd3e3a2372dd6be018756b6a2d82d4bd1de3dea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:30:49 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d03eb5a9a0615d6b194a1fb8fdd3e3a2372dd6be018756b6a2d82d4bd1de3dea/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:30:49 np0005464891 podman[218286]: 2025-10-01 16:30:49.497985941 +0000 UTC m=+0.336678865 container init d262841eefbb42b0523c3170256c62eed891b7ad2618b08d493f9973e00f5c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_clarke, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:30:49 np0005464891 podman[218286]: 2025-10-01 16:30:49.510419464 +0000 UTC m=+0.349112388 container start d262841eefbb42b0523c3170256c62eed891b7ad2618b08d493f9973e00f5c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 12:30:49 np0005464891 podman[218286]: 2025-10-01 16:30:49.517632854 +0000 UTC m=+0.356325788 container attach d262841eefbb42b0523c3170256c62eed891b7ad2618b08d493f9973e00f5c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:30:49 np0005464891 python3.9[218402]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:30:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:50 np0005464891 awesome_clarke[218386]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:30:50 np0005464891 awesome_clarke[218386]: --> relative data size: 1.0
Oct  1 12:30:50 np0005464891 awesome_clarke[218386]: --> All data devices are unavailable
Oct  1 12:30:50 np0005464891 systemd[1]: libpod-d262841eefbb42b0523c3170256c62eed891b7ad2618b08d493f9973e00f5c5c.scope: Deactivated successfully.
Oct  1 12:30:50 np0005464891 systemd[1]: libpod-d262841eefbb42b0523c3170256c62eed891b7ad2618b08d493f9973e00f5c5c.scope: Consumed 1.008s CPU time.
Oct  1 12:30:50 np0005464891 conmon[218386]: conmon d262841eefbb42b0523c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d262841eefbb42b0523c3170256c62eed891b7ad2618b08d493f9973e00f5c5c.scope/container/memory.events
Oct  1 12:30:50 np0005464891 podman[218582]: 2025-10-01 16:30:50.607347663 +0000 UTC m=+0.021648879 container died d262841eefbb42b0523c3170256c62eed891b7ad2618b08d493f9973e00f5c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_clarke, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:30:50 np0005464891 python3[218571]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  1 12:30:50 np0005464891 systemd[1]: var-lib-containers-storage-overlay-d03eb5a9a0615d6b194a1fb8fdd3e3a2372dd6be018756b6a2d82d4bd1de3dea-merged.mount: Deactivated successfully.
Oct  1 12:30:50 np0005464891 podman[218582]: 2025-10-01 16:30:50.754224471 +0000 UTC m=+0.168525707 container remove d262841eefbb42b0523c3170256c62eed891b7ad2618b08d493f9973e00f5c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_clarke, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 12:30:50 np0005464891 systemd[1]: libpod-conmon-d262841eefbb42b0523c3170256c62eed891b7ad2618b08d493f9973e00f5c5c.scope: Deactivated successfully.
Oct  1 12:30:51 np0005464891 python3.9[218848]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:30:51 np0005464891 podman[218887]: 2025-10-01 16:30:51.449281986 +0000 UTC m=+0.063529656 container create 704f3e79d5f0df865decb0404cbd5f690e473430d4039eef63a83a4f5402df00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pare, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Oct  1 12:30:51 np0005464891 podman[218887]: 2025-10-01 16:30:51.408946512 +0000 UTC m=+0.023194212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:30:51 np0005464891 systemd[1]: Started libpod-conmon-704f3e79d5f0df865decb0404cbd5f690e473430d4039eef63a83a4f5402df00.scope.
Oct  1 12:30:51 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:30:51 np0005464891 podman[218887]: 2025-10-01 16:30:51.748011541 +0000 UTC m=+0.362259231 container init 704f3e79d5f0df865decb0404cbd5f690e473430d4039eef63a83a4f5402df00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pare, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:30:51 np0005464891 podman[218887]: 2025-10-01 16:30:51.759879709 +0000 UTC m=+0.374127399 container start 704f3e79d5f0df865decb0404cbd5f690e473430d4039eef63a83a4f5402df00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:30:51 np0005464891 gifted_pare[218982]: 167 167
Oct  1 12:30:51 np0005464891 systemd[1]: libpod-704f3e79d5f0df865decb0404cbd5f690e473430d4039eef63a83a4f5402df00.scope: Deactivated successfully.
Oct  1 12:30:51 np0005464891 podman[218887]: 2025-10-01 16:30:51.773267749 +0000 UTC m=+0.387515439 container attach 704f3e79d5f0df865decb0404cbd5f690e473430d4039eef63a83a4f5402df00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pare, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 12:30:51 np0005464891 podman[218887]: 2025-10-01 16:30:51.773926847 +0000 UTC m=+0.388174547 container died 704f3e79d5f0df865decb0404cbd5f690e473430d4039eef63a83a4f5402df00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pare, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:30:51 np0005464891 python3.9[218979]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:51 np0005464891 systemd[1]: var-lib-containers-storage-overlay-af5dc362866f17dda2ee89f11d36eb96cf1f2b6e270100ad5672671a511ae7c0-merged.mount: Deactivated successfully.
Oct  1 12:30:52 np0005464891 podman[218887]: 2025-10-01 16:30:52.324907081 +0000 UTC m=+0.939154751 container remove 704f3e79d5f0df865decb0404cbd5f690e473430d4039eef63a83a4f5402df00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:30:52 np0005464891 systemd[1]: libpod-conmon-704f3e79d5f0df865decb0404cbd5f690e473430d4039eef63a83a4f5402df00.scope: Deactivated successfully.
Oct  1 12:30:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:52 np0005464891 podman[219152]: 2025-10-01 16:30:52.524970549 +0000 UTC m=+0.056851672 container create f3185842eb71bf96e6d76f5695aff9f1628d6a7ee75af2cc1b2ef83f319f07c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ishizaka, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:30:52 np0005464891 systemd[1]: Started libpod-conmon-f3185842eb71bf96e6d76f5695aff9f1628d6a7ee75af2cc1b2ef83f319f07c9.scope.
Oct  1 12:30:52 np0005464891 podman[219152]: 2025-10-01 16:30:52.493379046 +0000 UTC m=+0.025260179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:30:52 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:30:52 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee31870df506c7f44b6870a8bcafce3d483f0ca294f0dd745b6d23944b8dbe23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:30:52 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee31870df506c7f44b6870a8bcafce3d483f0ca294f0dd745b6d23944b8dbe23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:30:52 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee31870df506c7f44b6870a8bcafce3d483f0ca294f0dd745b6d23944b8dbe23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:30:52 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee31870df506c7f44b6870a8bcafce3d483f0ca294f0dd745b6d23944b8dbe23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:30:52 np0005464891 podman[219152]: 2025-10-01 16:30:52.615921352 +0000 UTC m=+0.147802505 container init f3185842eb71bf96e6d76f5695aff9f1628d6a7ee75af2cc1b2ef83f319f07c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  1 12:30:52 np0005464891 podman[219152]: 2025-10-01 16:30:52.623241574 +0000 UTC m=+0.155122687 container start f3185842eb71bf96e6d76f5695aff9f1628d6a7ee75af2cc1b2ef83f319f07c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ishizaka, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 12:30:52 np0005464891 podman[219152]: 2025-10-01 16:30:52.628002886 +0000 UTC m=+0.159884029 container attach f3185842eb71bf96e6d76f5695aff9f1628d6a7ee75af2cc1b2ef83f319f07c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ishizaka, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 12:30:52 np0005464891 python3.9[219171]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:30:53 np0005464891 python3.9[219257]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]: {
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:    "0": [
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:        {
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "devices": [
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "/dev/loop3"
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            ],
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "lv_name": "ceph_lv0",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "lv_size": "21470642176",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "name": "ceph_lv0",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "tags": {
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.cluster_name": "ceph",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.crush_device_class": "",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.encrypted": "0",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.osd_id": "0",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.type": "block",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.vdo": "0"
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            },
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "type": "block",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "vg_name": "ceph_vg0"
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:        }
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:    ],
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:    "1": [
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:        {
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "devices": [
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "/dev/loop4"
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            ],
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "lv_name": "ceph_lv1",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "lv_size": "21470642176",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "name": "ceph_lv1",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "tags": {
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.cluster_name": "ceph",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.crush_device_class": "",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.encrypted": "0",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.osd_id": "1",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.type": "block",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.vdo": "0"
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            },
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "type": "block",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "vg_name": "ceph_vg1"
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:        }
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:    ],
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:    "2": [
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:        {
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "devices": [
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "/dev/loop5"
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            ],
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "lv_name": "ceph_lv2",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "lv_size": "21470642176",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "name": "ceph_lv2",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "tags": {
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.cluster_name": "ceph",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.crush_device_class": "",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.encrypted": "0",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.osd_id": "2",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.type": "block",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:                "ceph.vdo": "0"
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            },
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "type": "block",
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:            "vg_name": "ceph_vg2"
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:        }
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]:    ]
Oct  1 12:30:53 np0005464891 laughing_ishizaka[219175]: }
Oct  1 12:30:53 np0005464891 systemd[1]: libpod-f3185842eb71bf96e6d76f5695aff9f1628d6a7ee75af2cc1b2ef83f319f07c9.scope: Deactivated successfully.
Oct  1 12:30:53 np0005464891 podman[219152]: 2025-10-01 16:30:53.456135227 +0000 UTC m=+0.988016330 container died f3185842eb71bf96e6d76f5695aff9f1628d6a7ee75af2cc1b2ef83f319f07c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ishizaka, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 12:30:53 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ee31870df506c7f44b6870a8bcafce3d483f0ca294f0dd745b6d23944b8dbe23-merged.mount: Deactivated successfully.
Oct  1 12:30:53 np0005464891 podman[219152]: 2025-10-01 16:30:53.540937271 +0000 UTC m=+1.072818384 container remove f3185842eb71bf96e6d76f5695aff9f1628d6a7ee75af2cc1b2ef83f319f07c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ishizaka, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:30:53 np0005464891 systemd[1]: libpod-conmon-f3185842eb71bf96e6d76f5695aff9f1628d6a7ee75af2cc1b2ef83f319f07c9.scope: Deactivated successfully.
Oct  1 12:30:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:30:54 np0005464891 python3.9[219501]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:30:54 np0005464891 podman[219594]: 2025-10-01 16:30:54.257705646 +0000 UTC m=+0.042206777 container create 4d9db396027d782e2798f8e763405cc651f3603b6d7c192eb3105f3bad179c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 12:30:54 np0005464891 systemd[1]: Started libpod-conmon-4d9db396027d782e2798f8e763405cc651f3603b6d7c192eb3105f3bad179c5f.scope.
Oct  1 12:30:54 np0005464891 podman[219594]: 2025-10-01 16:30:54.236791728 +0000 UTC m=+0.021292889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:30:54 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:30:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:54 np0005464891 podman[219594]: 2025-10-01 16:30:54.367406417 +0000 UTC m=+0.151907568 container init 4d9db396027d782e2798f8e763405cc651f3603b6d7c192eb3105f3bad179c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 12:30:54 np0005464891 podman[219594]: 2025-10-01 16:30:54.374518363 +0000 UTC m=+0.159019484 container start 4d9db396027d782e2798f8e763405cc651f3603b6d7c192eb3105f3bad179c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bhabha, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 12:30:54 np0005464891 heuristic_bhabha[219634]: 167 167
Oct  1 12:30:54 np0005464891 podman[219594]: 2025-10-01 16:30:54.380689714 +0000 UTC m=+0.165190925 container attach 4d9db396027d782e2798f8e763405cc651f3603b6d7c192eb3105f3bad179c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 12:30:54 np0005464891 podman[219594]: 2025-10-01 16:30:54.381133577 +0000 UTC m=+0.165634738 container died 4d9db396027d782e2798f8e763405cc651f3603b6d7c192eb3105f3bad179c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bhabha, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 12:30:54 np0005464891 systemd[1]: libpod-4d9db396027d782e2798f8e763405cc651f3603b6d7c192eb3105f3bad179c5f.scope: Deactivated successfully.
Oct  1 12:30:54 np0005464891 systemd[1]: var-lib-containers-storage-overlay-7cfea1ee697fd7e14ba68c77733618feb344436bdce9dbefb331abbccb9347cd-merged.mount: Deactivated successfully.
Oct  1 12:30:54 np0005464891 podman[219594]: 2025-10-01 16:30:54.46885922 +0000 UTC m=+0.253360341 container remove 4d9db396027d782e2798f8e763405cc651f3603b6d7c192eb3105f3bad179c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 12:30:54 np0005464891 systemd[1]: libpod-conmon-4d9db396027d782e2798f8e763405cc651f3603b6d7c192eb3105f3bad179c5f.scope: Deactivated successfully.
Oct  1 12:30:54 np0005464891 python3.9[219668]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:54 np0005464891 podman[219687]: 2025-10-01 16:30:54.635753062 +0000 UTC m=+0.021484165 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:30:54 np0005464891 podman[219687]: 2025-10-01 16:30:54.87928816 +0000 UTC m=+0.265019243 container create 1f2cd50f4c53320fa36dacd2610746a64e29a056f40970b7c5a0cf2c046691ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 12:30:54 np0005464891 systemd[1]: Started libpod-conmon-1f2cd50f4c53320fa36dacd2610746a64e29a056f40970b7c5a0cf2c046691ec.scope.
Oct  1 12:30:54 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:30:54 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2de9a33dac0ede450fe14d647e07337f0e13ac2f25aed1d7b038d7f1c2041faa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:30:54 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2de9a33dac0ede450fe14d647e07337f0e13ac2f25aed1d7b038d7f1c2041faa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:30:54 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2de9a33dac0ede450fe14d647e07337f0e13ac2f25aed1d7b038d7f1c2041faa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:30:54 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2de9a33dac0ede450fe14d647e07337f0e13ac2f25aed1d7b038d7f1c2041faa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:30:55 np0005464891 podman[219687]: 2025-10-01 16:30:55.009274262 +0000 UTC m=+0.395005375 container init 1f2cd50f4c53320fa36dacd2610746a64e29a056f40970b7c5a0cf2c046691ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_taussig, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 12:30:55 np0005464891 podman[219687]: 2025-10-01 16:30:55.023082504 +0000 UTC m=+0.408813607 container start 1f2cd50f4c53320fa36dacd2610746a64e29a056f40970b7c5a0cf2c046691ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 12:30:55 np0005464891 podman[219687]: 2025-10-01 16:30:55.032658498 +0000 UTC m=+0.418389581 container attach 1f2cd50f4c53320fa36dacd2610746a64e29a056f40970b7c5a0cf2c046691ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_taussig, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:30:55 np0005464891 python3.9[219860]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:30:55 np0005464891 python3.9[219940]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:56 np0005464891 sweet_taussig[219780]: {
Oct  1 12:30:56 np0005464891 sweet_taussig[219780]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:30:56 np0005464891 sweet_taussig[219780]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:30:56 np0005464891 sweet_taussig[219780]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:30:56 np0005464891 sweet_taussig[219780]:        "osd_id": 2,
Oct  1 12:30:56 np0005464891 sweet_taussig[219780]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:30:56 np0005464891 sweet_taussig[219780]:        "type": "bluestore"
Oct  1 12:30:56 np0005464891 sweet_taussig[219780]:    },
Oct  1 12:30:56 np0005464891 sweet_taussig[219780]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:30:56 np0005464891 sweet_taussig[219780]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:30:56 np0005464891 sweet_taussig[219780]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:30:56 np0005464891 sweet_taussig[219780]:        "osd_id": 0,
Oct  1 12:30:56 np0005464891 sweet_taussig[219780]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:30:56 np0005464891 sweet_taussig[219780]:        "type": "bluestore"
Oct  1 12:30:56 np0005464891 sweet_taussig[219780]:    },
Oct  1 12:30:56 np0005464891 sweet_taussig[219780]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:30:56 np0005464891 sweet_taussig[219780]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:30:56 np0005464891 sweet_taussig[219780]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:30:56 np0005464891 sweet_taussig[219780]:        "osd_id": 1,
Oct  1 12:30:56 np0005464891 sweet_taussig[219780]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:30:56 np0005464891 sweet_taussig[219780]:        "type": "bluestore"
Oct  1 12:30:56 np0005464891 sweet_taussig[219780]:    }
Oct  1 12:30:56 np0005464891 sweet_taussig[219780]: }
Oct  1 12:30:56 np0005464891 systemd[1]: libpod-1f2cd50f4c53320fa36dacd2610746a64e29a056f40970b7c5a0cf2c046691ec.scope: Deactivated successfully.
Oct  1 12:30:56 np0005464891 systemd[1]: libpod-1f2cd50f4c53320fa36dacd2610746a64e29a056f40970b7c5a0cf2c046691ec.scope: Consumed 1.108s CPU time.
Oct  1 12:30:56 np0005464891 podman[219687]: 2025-10-01 16:30:56.123111428 +0000 UTC m=+1.508842551 container died 1f2cd50f4c53320fa36dacd2610746a64e29a056f40970b7c5a0cf2c046691ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:30:56 np0005464891 systemd[1]: var-lib-containers-storage-overlay-2de9a33dac0ede450fe14d647e07337f0e13ac2f25aed1d7b038d7f1c2041faa-merged.mount: Deactivated successfully.
Oct  1 12:30:56 np0005464891 podman[219687]: 2025-10-01 16:30:56.271753196 +0000 UTC m=+1.657484319 container remove 1f2cd50f4c53320fa36dacd2610746a64e29a056f40970b7c5a0cf2c046691ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:30:56 np0005464891 systemd[1]: libpod-conmon-1f2cd50f4c53320fa36dacd2610746a64e29a056f40970b7c5a0cf2c046691ec.scope: Deactivated successfully.
Oct  1 12:30:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:30:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:56 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:30:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:30:56 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:30:56 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev fe9fa5aa-41fd-4514-acb5-3103ebcb8cce does not exist
Oct  1 12:30:56 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 6a6a64f8-5422-4cad-a17c-d37d798d106d does not exist
Oct  1 12:30:56 np0005464891 python3.9[220182]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:30:57 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:30:57 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:30:57 np0005464891 python3.9[220307]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759336256.143316-1313-227657571635591/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:58 np0005464891 python3.9[220459]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:30:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:30:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:30:58 np0005464891 python3.9[220611]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:30:59 np0005464891 python3.9[220766]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:31:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:00 np0005464891 python3.9[220918]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:31:01 np0005464891 python3.9[221071]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:31:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:02 np0005464891 python3.9[221225]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:31:03 np0005464891 podman[221352]: 2025-10-01 16:31:03.018992959 +0000 UTC m=+0.116694686 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true)
Oct  1 12:31:03 np0005464891 python3.9[221396]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:31:03 np0005464891 python3.9[221559]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:31:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:31:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:04 np0005464891 python3.9[221682]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759336263.3524835-1385-145816688674686/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:31:05 np0005464891 python3.9[221834]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:31:05 np0005464891 python3.9[221957]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759336264.6646302-1400-148084068411082/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:31:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:06 np0005464891 python3.9[222109]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:31:07 np0005464891 python3.9[222232]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759336265.9813366-1415-275788149868503/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:31:08 np0005464891 python3.9[222384]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:31:08 np0005464891 systemd[1]: Reloading.
Oct  1 12:31:08 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:31:08 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:31:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:08 np0005464891 systemd[1]: Reached target edpm_libvirt.target.
Oct  1 12:31:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:31:09 np0005464891 python3.9[222575]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  1 12:31:09 np0005464891 systemd[1]: Reloading.
Oct  1 12:31:09 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:31:09 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:31:10 np0005464891 systemd[1]: Reloading.
Oct  1 12:31:10 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:31:10 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:31:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:10 np0005464891 systemd[1]: session-50.scope: Deactivated successfully.
Oct  1 12:31:10 np0005464891 systemd[1]: session-50.scope: Consumed 3min 49.363s CPU time.
Oct  1 12:31:10 np0005464891 systemd-logind[801]: Session 50 logged out. Waiting for processes to exit.
Oct  1 12:31:10 np0005464891 systemd-logind[801]: Removed session 50.
Oct  1 12:31:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:31:11
Oct  1 12:31:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:31:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:31:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.control', 'default.rgw.log', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'images']
Oct  1 12:31:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:31:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:31:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:31:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:31:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:31:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:31:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:31:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:31:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:31:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:31:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:31:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:31:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:31:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:31:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:31:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:31:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:31:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:31:12.427 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:31:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:31:12.429 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:31:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:31:12.429 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:31:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:31:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:16 np0005464891 systemd-logind[801]: New session 51 of user zuul.
Oct  1 12:31:16 np0005464891 systemd[1]: Started Session 51 of User zuul.
Oct  1 12:31:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:16 np0005464891 podman[222673]: 2025-10-01 16:31:16.42467868 +0000 UTC m=+0.097405315 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:31:17 np0005464891 python3.9[222843]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:31:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:18 np0005464891 python3.9[222999]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:31:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:31:19 np0005464891 python3.9[223151]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:31:20 np0005464891 python3.9[223303]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:31:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:20 np0005464891 python3.9[223455]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  1 12:31:21 np0005464891 python3.9[223607]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:31:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:31:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:31:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:31:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:31:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:31:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:31:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:31:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:31:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:31:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:31:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:31:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:31:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:31:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:31:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:31:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:31:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:31:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:31:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:31:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:31:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:31:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:31:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:31:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:22 np0005464891 python3.9[223759]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:31:23 np0005464891 python3.9[223913]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:31:23 np0005464891 systemd[1]: Reloading.
Oct  1 12:31:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:31:23 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:31:23 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:31:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:25 np0005464891 python3.9[224102]: ansible-ansible.builtin.service_facts Invoked
Oct  1 12:31:25 np0005464891 network[224119]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  1 12:31:25 np0005464891 network[224120]: 'network-scripts' will be removed from distribution in near future.
Oct  1 12:31:25 np0005464891 network[224121]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  1 12:31:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:31:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:30 np0005464891 python3.9[224395]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:31:31 np0005464891 systemd[1]: Reloading.
Oct  1 12:31:31 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:31:31 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:31:32 np0005464891 python3.9[224582]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:31:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:33 np0005464891 python3.9[224734]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None 
pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  1 12:31:33 np0005464891 rsyslogd[1011]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 12:31:33 np0005464891 rsyslogd[1011]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 12:31:33 np0005464891 rsyslogd[1011]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 12:31:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:31:33 np0005464891 podman[224764]: 2025-10-01 16:31:33.997356537 +0000 UTC m=+0.103575149 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:31:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:35 np0005464891 podman[224745]: 2025-10-01 16:31:35.367202689 +0000 UTC m=+2.143309121 image pull 81d94872551c3ae3c30801602bbb5f0c44872f15dcde472a0ba869fe2f28966e quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct  1 12:31:35 np0005464891 podman[224829]: 2025-10-01 16:31:35.579640452 +0000 UTC m=+0.102019075 container create 7f34e8c368b24e3182eeef6c796fdd16142ae53520d1c0b67699e2aad566c693 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  1 12:31:35 np0005464891 podman[224829]: 2025-10-01 16:31:35.501623159 +0000 UTC m=+0.024001812 image pull 81d94872551c3ae3c30801602bbb5f0c44872f15dcde472a0ba869fe2f28966e quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct  1 12:31:35 np0005464891 NetworkManager[44940]: <info>  [1759336295.6296] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/21)
Oct  1 12:31:35 np0005464891 kernel: podman0: port 1(veth0) entered blocking state
Oct  1 12:31:35 np0005464891 kernel: podman0: port 1(veth0) entered disabled state
Oct  1 12:31:35 np0005464891 kernel: veth0: entered allmulticast mode
Oct  1 12:31:35 np0005464891 kernel: veth0: entered promiscuous mode
Oct  1 12:31:35 np0005464891 NetworkManager[44940]: <info>  [1759336295.6476] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/22)
Oct  1 12:31:35 np0005464891 kernel: podman0: port 1(veth0) entered blocking state
Oct  1 12:31:35 np0005464891 kernel: podman0: port 1(veth0) entered forwarding state
Oct  1 12:31:35 np0005464891 NetworkManager[44940]: <info>  [1759336295.6494] device (veth0): carrier: link connected
Oct  1 12:31:35 np0005464891 NetworkManager[44940]: <info>  [1759336295.6497] device (podman0): carrier: link connected
Oct  1 12:31:35 np0005464891 systemd-udevd[224868]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:31:35 np0005464891 systemd-udevd[224871]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:31:35 np0005464891 NetworkManager[44940]: <info>  [1759336295.7080] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:31:35 np0005464891 NetworkManager[44940]: <info>  [1759336295.7088] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:31:35 np0005464891 NetworkManager[44940]: <info>  [1759336295.7096] device (podman0): Activation: starting connection 'podman0' (6afb08bd-e326-49b7-8526-a7e28c0de871)
Oct  1 12:31:35 np0005464891 NetworkManager[44940]: <info>  [1759336295.7098] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  1 12:31:35 np0005464891 NetworkManager[44940]: <info>  [1759336295.7100] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  1 12:31:35 np0005464891 NetworkManager[44940]: <info>  [1759336295.7102] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  1 12:31:35 np0005464891 NetworkManager[44940]: <info>  [1759336295.7107] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  1 12:31:35 np0005464891 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  1 12:31:35 np0005464891 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  1 12:31:35 np0005464891 NetworkManager[44940]: <info>  [1759336295.7388] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  1 12:31:35 np0005464891 NetworkManager[44940]: <info>  [1759336295.7390] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  1 12:31:35 np0005464891 NetworkManager[44940]: <info>  [1759336295.7399] device (podman0): Activation: successful, device activated.
Oct  1 12:31:35 np0005464891 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct  1 12:31:36 np0005464891 systemd[1]: Started libpod-conmon-7f34e8c368b24e3182eeef6c796fdd16142ae53520d1c0b67699e2aad566c693.scope.
Oct  1 12:31:36 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:31:36 np0005464891 podman[224829]: 2025-10-01 16:31:36.144433742 +0000 UTC m=+0.666812385 container init 7f34e8c368b24e3182eeef6c796fdd16142ae53520d1c0b67699e2aad566c693 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:31:36 np0005464891 podman[224829]: 2025-10-01 16:31:36.158368582 +0000 UTC m=+0.680747225 container start 7f34e8c368b24e3182eeef6c796fdd16142ae53520d1c0b67699e2aad566c693 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 12:31:36 np0005464891 iscsid_config[224987]: iqn.1994-05.com.redhat:b2b944312e6f#015
Oct  1 12:31:36 np0005464891 systemd[1]: libpod-7f34e8c368b24e3182eeef6c796fdd16142ae53520d1c0b67699e2aad566c693.scope: Deactivated successfully.
Oct  1 12:31:36 np0005464891 conmon[224987]: conmon 7f34e8c368b24e3182ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f34e8c368b24e3182eeef6c796fdd16142ae53520d1c0b67699e2aad566c693.scope/container/memory.events
Oct  1 12:31:36 np0005464891 podman[224829]: 2025-10-01 16:31:36.297215026 +0000 UTC m=+0.819593649 container attach 7f34e8c368b24e3182eeef6c796fdd16142ae53520d1c0b67699e2aad566c693 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  1 12:31:36 np0005464891 podman[224829]: 2025-10-01 16:31:36.30060684 +0000 UTC m=+0.822985503 container died 7f34e8c368b24e3182eeef6c796fdd16142ae53520d1c0b67699e2aad566c693 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 12:31:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:36 np0005464891 kernel: podman0: port 1(veth0) entered disabled state
Oct  1 12:31:36 np0005464891 kernel: veth0 (unregistering): left allmulticast mode
Oct  1 12:31:36 np0005464891 kernel: veth0 (unregistering): left promiscuous mode
Oct  1 12:31:36 np0005464891 kernel: podman0: port 1(veth0) entered disabled state
Oct  1 12:31:36 np0005464891 NetworkManager[44940]: <info>  [1759336296.4878] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 12:31:36 np0005464891 systemd[1]: run-netns-netns\x2deb84b9da\x2d2060\x2dc8fb\x2d0c85\x2dfe340bec3de7.mount: Deactivated successfully.
Oct  1 12:31:36 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7f34e8c368b24e3182eeef6c796fdd16142ae53520d1c0b67699e2aad566c693-userdata-shm.mount: Deactivated successfully.
Oct  1 12:31:36 np0005464891 systemd[1]: var-lib-containers-storage-overlay-6a82f712f8f1aa71b5a2a5f8905459d054df908e18c0dbb98b8b907a726f7956-merged.mount: Deactivated successfully.
Oct  1 12:31:36 np0005464891 podman[224829]: 2025-10-01 16:31:36.901564893 +0000 UTC m=+1.423943536 container remove 7f34e8c368b24e3182eeef6c796fdd16142ae53520d1c0b67699e2aad566c693 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:31:36 np0005464891 systemd[1]: libpod-conmon-7f34e8c368b24e3182eeef6c796fdd16142ae53520d1c0b67699e2aad566c693.scope: Deactivated successfully.
Oct  1 12:31:36 np0005464891 python3.9[224734]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid:current-podified /usr/sbin/iscsi-iname
Oct  1 12:31:37 np0005464891 python3.9[224734]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: #012DEPRECATED command:#012It is recommended to use Quadlets for running containers and pods under systemd.#012#012Please refer to podman-systemd.unit(5) for details.#012Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct  1 12:31:37 np0005464891 python3.9[225231]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:31:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:38 np0005464891 python3.9[225354]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759336297.2669-119-39319616482157/.source.iscsi _original_basename=.vurakc73 follow=False checksum=06c84516e9f9e559acd09c81fa4bfee13b305376 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:31:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:31:39 np0005464891 python3.9[225508]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:31:40 np0005464891 python3.9[225658]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:31:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:41 np0005464891 python3.9[225812]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:31:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:31:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:31:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:31:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:31:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:31:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:31:42 np0005464891 python3.9[225964]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:31:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:42 np0005464891 python3.9[226116]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:31:43 np0005464891 python3.9[226194]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:31:43.868719) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336303868759, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1784, "num_deletes": 251, "total_data_size": 2995754, "memory_usage": 3036856, "flush_reason": "Manual Compaction"}
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336303888972, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1701121, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11779, "largest_seqno": 13562, "table_properties": {"data_size": 1695275, "index_size": 2921, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14679, "raw_average_key_size": 20, "raw_value_size": 1682370, "raw_average_value_size": 2307, "num_data_blocks": 135, "num_entries": 729, "num_filter_entries": 729, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336100, "oldest_key_time": 1759336100, "file_creation_time": 1759336303, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 20304 microseconds, and 4319 cpu microseconds.
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:31:43.889022) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1701121 bytes OK
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:31:43.889041) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:31:43.895143) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:31:43.895168) EVENT_LOG_v1 {"time_micros": 1759336303895162, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:31:43.895188) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2988175, prev total WAL file size 2988175, number of live WAL files 2.
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:31:43.896227) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1661KB)], [29(7806KB)]
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336303896316, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9695462, "oldest_snapshot_seqno": -1}
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4025 keys, 7618371 bytes, temperature: kUnknown
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336303964525, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7618371, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7589535, "index_size": 17654, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 95908, "raw_average_key_size": 23, "raw_value_size": 7515018, "raw_average_value_size": 1867, "num_data_blocks": 767, "num_entries": 4025, "num_filter_entries": 4025, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759336303, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:31:43.964805) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7618371 bytes
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:31:43.970764) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.0 rd, 111.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.6 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(10.2) write-amplify(4.5) OK, records in: 4445, records dropped: 420 output_compression: NoCompression
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:31:43.970786) EVENT_LOG_v1 {"time_micros": 1759336303970776, "job": 12, "event": "compaction_finished", "compaction_time_micros": 68294, "compaction_time_cpu_micros": 24746, "output_level": 6, "num_output_files": 1, "total_output_size": 7618371, "num_input_records": 4445, "num_output_records": 4025, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336303971281, "job": 12, "event": "table_file_deletion", "file_number": 31}
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336303973073, "job": 12, "event": "table_file_deletion", "file_number": 29}
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:31:43.896104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:31:43.973166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:31:43.973172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:31:43.973174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:31:43.973176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:31:43 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:31:43.973178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:31:44 np0005464891 python3.9[226346]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:31:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:44 np0005464891 python3.9[226424]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:31:45 np0005464891 python3.9[226576]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:31:45 np0005464891 python3.9[226729]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:31:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:46 np0005464891 python3.9[226807]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:31:46 np0005464891 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  1 12:31:46 np0005464891 podman[226826]: 2025-10-01 16:31:46.670660142 +0000 UTC m=+0.102021365 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct  1 12:31:47 np0005464891 python3.9[226978]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:31:47 np0005464891 python3.9[227056]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:31:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:48 np0005464891 python3.9[227208]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:31:48 np0005464891 systemd[1]: Reloading.
Oct  1 12:31:48 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:31:48 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:31:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:31:49 np0005464891 python3.9[227397]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:31:50 np0005464891 python3.9[227475]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:31:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:51 np0005464891 python3.9[227627]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:31:51 np0005464891 python3.9[227705]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:31:52 np0005464891 python3.9[227857]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:31:52 np0005464891 systemd[1]: Reloading.
Oct  1 12:31:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:52 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:31:52 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:31:52 np0005464891 systemd[1]: Starting Create netns directory...
Oct  1 12:31:52 np0005464891 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  1 12:31:52 np0005464891 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  1 12:31:52 np0005464891 systemd[1]: Finished Create netns directory.
Oct  1 12:31:53 np0005464891 python3.9[228052]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:31:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:31:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:54 np0005464891 python3.9[228204]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:31:55 np0005464891 python3.9[228327]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759336313.9045792-273-124913539810043/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:31:56 np0005464891 python3.9[228479]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:31:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:56 np0005464891 python3.9[228631]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:31:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:31:57 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:31:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:31:57 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:31:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:31:57 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:31:57 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev ebeaff9d-32a4-4ec9-b280-1a378fe453bc does not exist
Oct  1 12:31:57 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev ef661736-e8a6-410d-8dd6-d779cce5c941 does not exist
Oct  1 12:31:57 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 1f56881c-9937-42a1-9ac2-d20a1294e6b4 does not exist
Oct  1 12:31:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:31:57 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:31:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:31:57 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:31:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:31:57 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:31:57 np0005464891 python3.9[228871]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759336316.2936969-298-257993737546800/.source.json _original_basename=.8_4vo3t6 follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:31:58 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:31:58 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:31:58 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:31:58 np0005464891 podman[229180]: 2025-10-01 16:31:58.236199029 +0000 UTC m=+0.059666601 container create a6dc74786d80d6dfecbe295f58ffe019315335e54bffb8b3077571e9625fc448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_allen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 12:31:58 np0005464891 systemd[1]: Started libpod-conmon-a6dc74786d80d6dfecbe295f58ffe019315335e54bffb8b3077571e9625fc448.scope.
Oct  1 12:31:58 np0005464891 podman[229180]: 2025-10-01 16:31:58.207034842 +0000 UTC m=+0.030502404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:31:58 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:31:58 np0005464891 python3.9[229179]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:31:58 np0005464891 podman[229180]: 2025-10-01 16:31:58.348259694 +0000 UTC m=+0.171727326 container init a6dc74786d80d6dfecbe295f58ffe019315335e54bffb8b3077571e9625fc448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_allen, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:31:58 np0005464891 podman[229180]: 2025-10-01 16:31:58.358735786 +0000 UTC m=+0.182203348 container start a6dc74786d80d6dfecbe295f58ffe019315335e54bffb8b3077571e9625fc448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:31:58 np0005464891 podman[229180]: 2025-10-01 16:31:58.365373262 +0000 UTC m=+0.188840834 container attach a6dc74786d80d6dfecbe295f58ffe019315335e54bffb8b3077571e9625fc448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  1 12:31:58 np0005464891 systemd[1]: libpod-a6dc74786d80d6dfecbe295f58ffe019315335e54bffb8b3077571e9625fc448.scope: Deactivated successfully.
Oct  1 12:31:58 np0005464891 distracted_allen[229196]: 167 167
Oct  1 12:31:58 np0005464891 podman[229180]: 2025-10-01 16:31:58.368577732 +0000 UTC m=+0.192045274 container died a6dc74786d80d6dfecbe295f58ffe019315335e54bffb8b3077571e9625fc448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_allen, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:31:58 np0005464891 conmon[229196]: conmon a6dc74786d80d6dfecbe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a6dc74786d80d6dfecbe295f58ffe019315335e54bffb8b3077571e9625fc448.scope/container/memory.events
Oct  1 12:31:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:31:58 np0005464891 systemd[1]: var-lib-containers-storage-overlay-6fd13ef28bbb8dc48d7b41bdc90cc1db8ee274f4aadf3df20c48f61cfca543d1-merged.mount: Deactivated successfully.
Oct  1 12:31:58 np0005464891 podman[229180]: 2025-10-01 16:31:58.438265381 +0000 UTC m=+0.261732953 container remove a6dc74786d80d6dfecbe295f58ffe019315335e54bffb8b3077571e9625fc448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:31:58 np0005464891 systemd[1]: libpod-conmon-a6dc74786d80d6dfecbe295f58ffe019315335e54bffb8b3077571e9625fc448.scope: Deactivated successfully.
Oct  1 12:31:58 np0005464891 podman[229246]: 2025-10-01 16:31:58.662263768 +0000 UTC m=+0.067459028 container create fff7f35d2f1a662b6a80bab4a06968fa9ff648d54ff16132483c4d80e85c0f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hertz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:31:58 np0005464891 systemd[1]: Started libpod-conmon-fff7f35d2f1a662b6a80bab4a06968fa9ff648d54ff16132483c4d80e85c0f13.scope.
Oct  1 12:31:58 np0005464891 podman[229246]: 2025-10-01 16:31:58.632815084 +0000 UTC m=+0.038010434 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:31:58 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:31:58 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbd65518ba2dd39a45e199a9749738645b14bfae47601d19f2c9d2ff9fe6478/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:31:58 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbd65518ba2dd39a45e199a9749738645b14bfae47601d19f2c9d2ff9fe6478/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:31:58 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbd65518ba2dd39a45e199a9749738645b14bfae47601d19f2c9d2ff9fe6478/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:31:58 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbd65518ba2dd39a45e199a9749738645b14bfae47601d19f2c9d2ff9fe6478/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:31:58 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbd65518ba2dd39a45e199a9749738645b14bfae47601d19f2c9d2ff9fe6478/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:31:58 np0005464891 podman[229246]: 2025-10-01 16:31:58.764663812 +0000 UTC m=+0.169859092 container init fff7f35d2f1a662b6a80bab4a06968fa9ff648d54ff16132483c4d80e85c0f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 12:31:58 np0005464891 podman[229246]: 2025-10-01 16:31:58.779404855 +0000 UTC m=+0.184600125 container start fff7f35d2f1a662b6a80bab4a06968fa9ff648d54ff16132483c4d80e85c0f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hertz, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:31:58 np0005464891 podman[229246]: 2025-10-01 16:31:58.822023537 +0000 UTC m=+0.227218827 container attach fff7f35d2f1a662b6a80bab4a06968fa9ff648d54ff16132483c4d80e85c0f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:31:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:31:59 np0005464891 sweet_hertz[229309]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:31:59 np0005464891 sweet_hertz[229309]: --> relative data size: 1.0
Oct  1 12:31:59 np0005464891 sweet_hertz[229309]: --> All data devices are unavailable
Oct  1 12:31:59 np0005464891 systemd[1]: libpod-fff7f35d2f1a662b6a80bab4a06968fa9ff648d54ff16132483c4d80e85c0f13.scope: Deactivated successfully.
Oct  1 12:31:59 np0005464891 podman[229246]: 2025-10-01 16:31:59.893260315 +0000 UTC m=+1.298455585 container died fff7f35d2f1a662b6a80bab4a06968fa9ff648d54ff16132483c4d80e85c0f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hertz, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 12:31:59 np0005464891 systemd[1]: libpod-fff7f35d2f1a662b6a80bab4a06968fa9ff648d54ff16132483c4d80e85c0f13.scope: Consumed 1.022s CPU time.
Oct  1 12:31:59 np0005464891 systemd[1]: var-lib-containers-storage-overlay-6fbd65518ba2dd39a45e199a9749738645b14bfae47601d19f2c9d2ff9fe6478-merged.mount: Deactivated successfully.
Oct  1 12:31:59 np0005464891 podman[229246]: 2025-10-01 16:31:59.969389024 +0000 UTC m=+1.374584304 container remove fff7f35d2f1a662b6a80bab4a06968fa9ff648d54ff16132483c4d80e85c0f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hertz, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  1 12:31:59 np0005464891 systemd[1]: libpod-conmon-fff7f35d2f1a662b6a80bab4a06968fa9ff648d54ff16132483c4d80e85c0f13.scope: Deactivated successfully.
Oct  1 12:32:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:00 np0005464891 podman[229792]: 2025-10-01 16:32:00.754317523 +0000 UTC m=+0.109435992 container create 4b7c6a679901991777028ee30a54982208ffa247070b7be91c4c8e3640403ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Oct  1 12:32:00 np0005464891 podman[229792]: 2025-10-01 16:32:00.667635158 +0000 UTC m=+0.022753697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:32:00 np0005464891 systemd[1]: Started libpod-conmon-4b7c6a679901991777028ee30a54982208ffa247070b7be91c4c8e3640403ce3.scope.
Oct  1 12:32:00 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:32:00 np0005464891 podman[229792]: 2025-10-01 16:32:00.902749466 +0000 UTC m=+0.257868005 container init 4b7c6a679901991777028ee30a54982208ffa247070b7be91c4c8e3640403ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_joliot, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:32:00 np0005464891 podman[229792]: 2025-10-01 16:32:00.916064927 +0000 UTC m=+0.271183416 container start 4b7c6a679901991777028ee30a54982208ffa247070b7be91c4c8e3640403ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Oct  1 12:32:00 np0005464891 great_joliot[229862]: 167 167
Oct  1 12:32:00 np0005464891 systemd[1]: libpod-4b7c6a679901991777028ee30a54982208ffa247070b7be91c4c8e3640403ce3.scope: Deactivated successfully.
Oct  1 12:32:00 np0005464891 podman[229792]: 2025-10-01 16:32:00.939187275 +0000 UTC m=+0.294305764 container attach 4b7c6a679901991777028ee30a54982208ffa247070b7be91c4c8e3640403ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_joliot, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  1 12:32:00 np0005464891 podman[229792]: 2025-10-01 16:32:00.940834131 +0000 UTC m=+0.295952590 container died 4b7c6a679901991777028ee30a54982208ffa247070b7be91c4c8e3640403ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_joliot, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 12:32:01 np0005464891 systemd[1]: var-lib-containers-storage-overlay-253b9a2dd236f2e25deda9af4fcd4130a3454642813b203fd6a7e4c8c92c1c33-merged.mount: Deactivated successfully.
Oct  1 12:32:01 np0005464891 python3.9[229866]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct  1 12:32:01 np0005464891 podman[229792]: 2025-10-01 16:32:01.135874747 +0000 UTC m=+0.490993196 container remove 4b7c6a679901991777028ee30a54982208ffa247070b7be91c4c8e3640403ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_joliot, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:32:01 np0005464891 systemd[1]: libpod-conmon-4b7c6a679901991777028ee30a54982208ffa247070b7be91c4c8e3640403ce3.scope: Deactivated successfully.
Oct  1 12:32:01 np0005464891 podman[229931]: 2025-10-01 16:32:01.372672541 +0000 UTC m=+0.059730982 container create db0ed2c0bb8855ee23450a66ac1be362b42b31fe40b6be87d922ae2aae520205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:32:01 np0005464891 systemd[1]: Started libpod-conmon-db0ed2c0bb8855ee23450a66ac1be362b42b31fe40b6be87d922ae2aae520205.scope.
Oct  1 12:32:01 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:32:01 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef909f460e8ca0a9747191870e2f5c2851872915e710437e1d2f07705d350c85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:32:01 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef909f460e8ca0a9747191870e2f5c2851872915e710437e1d2f07705d350c85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:32:01 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef909f460e8ca0a9747191870e2f5c2851872915e710437e1d2f07705d350c85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:32:01 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef909f460e8ca0a9747191870e2f5c2851872915e710437e1d2f07705d350c85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:32:01 np0005464891 podman[229931]: 2025-10-01 16:32:01.354821902 +0000 UTC m=+0.041880363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:32:01 np0005464891 podman[229931]: 2025-10-01 16:32:01.460797027 +0000 UTC m=+0.147855558 container init db0ed2c0bb8855ee23450a66ac1be362b42b31fe40b6be87d922ae2aae520205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  1 12:32:01 np0005464891 podman[229931]: 2025-10-01 16:32:01.469994664 +0000 UTC m=+0.157053105 container start db0ed2c0bb8855ee23450a66ac1be362b42b31fe40b6be87d922ae2aae520205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 12:32:01 np0005464891 podman[229931]: 2025-10-01 16:32:01.480285152 +0000 UTC m=+0.167343673 container attach db0ed2c0bb8855ee23450a66ac1be362b42b31fe40b6be87d922ae2aae520205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:32:02 np0005464891 python3.9[230063]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]: {
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:    "0": [
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:        {
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "devices": [
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "/dev/loop3"
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            ],
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "lv_name": "ceph_lv0",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "lv_size": "21470642176",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "name": "ceph_lv0",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "tags": {
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.cluster_name": "ceph",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.crush_device_class": "",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.encrypted": "0",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.osd_id": "0",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.type": "block",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.vdo": "0"
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            },
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "type": "block",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "vg_name": "ceph_vg0"
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:        }
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:    ],
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:    "1": [
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:        {
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "devices": [
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "/dev/loop4"
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            ],
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "lv_name": "ceph_lv1",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "lv_size": "21470642176",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "name": "ceph_lv1",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "tags": {
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.cluster_name": "ceph",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.crush_device_class": "",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.encrypted": "0",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.osd_id": "1",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.type": "block",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.vdo": "0"
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            },
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "type": "block",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "vg_name": "ceph_vg1"
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:        }
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:    ],
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:    "2": [
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:        {
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "devices": [
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "/dev/loop5"
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            ],
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "lv_name": "ceph_lv2",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "lv_size": "21470642176",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "name": "ceph_lv2",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "tags": {
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.cluster_name": "ceph",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.crush_device_class": "",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.encrypted": "0",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.osd_id": "2",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.type": "block",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:                "ceph.vdo": "0"
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            },
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "type": "block",
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:            "vg_name": "ceph_vg2"
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:        }
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]:    ]
Oct  1 12:32:02 np0005464891 romantic_beaver[229981]: }
Oct  1 12:32:02 np0005464891 systemd[1]: libpod-db0ed2c0bb8855ee23450a66ac1be362b42b31fe40b6be87d922ae2aae520205.scope: Deactivated successfully.
Oct  1 12:32:02 np0005464891 podman[229931]: 2025-10-01 16:32:02.205803359 +0000 UTC m=+0.892861830 container died db0ed2c0bb8855ee23450a66ac1be362b42b31fe40b6be87d922ae2aae520205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_beaver, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 12:32:02 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ef909f460e8ca0a9747191870e2f5c2851872915e710437e1d2f07705d350c85-merged.mount: Deactivated successfully.
Oct  1 12:32:02 np0005464891 podman[229931]: 2025-10-01 16:32:02.318006937 +0000 UTC m=+1.005065418 container remove db0ed2c0bb8855ee23450a66ac1be362b42b31fe40b6be87d922ae2aae520205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_beaver, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 12:32:02 np0005464891 systemd[1]: libpod-conmon-db0ed2c0bb8855ee23450a66ac1be362b42b31fe40b6be87d922ae2aae520205.scope: Deactivated successfully.
Oct  1 12:32:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:03 np0005464891 python3.9[230345]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  1 12:32:03 np0005464891 podman[230374]: 2025-10-01 16:32:03.170521756 +0000 UTC m=+0.115462621 container create 786cabc1d9c6108e76ca40a0d8456353de25a96ca461d94a6154ebd1af8f209b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:32:03 np0005464891 podman[230374]: 2025-10-01 16:32:03.093702198 +0000 UTC m=+0.038643073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:32:03 np0005464891 systemd[1]: Started libpod-conmon-786cabc1d9c6108e76ca40a0d8456353de25a96ca461d94a6154ebd1af8f209b.scope.
Oct  1 12:32:03 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:32:03 np0005464891 podman[230374]: 2025-10-01 16:32:03.270961646 +0000 UTC m=+0.215902511 container init 786cabc1d9c6108e76ca40a0d8456353de25a96ca461d94a6154ebd1af8f209b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hoover, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 12:32:03 np0005464891 podman[230374]: 2025-10-01 16:32:03.279557787 +0000 UTC m=+0.224498622 container start 786cabc1d9c6108e76ca40a0d8456353de25a96ca461d94a6154ebd1af8f209b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hoover, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  1 12:32:03 np0005464891 eloquent_hoover[230405]: 167 167
Oct  1 12:32:03 np0005464891 podman[230374]: 2025-10-01 16:32:03.284288939 +0000 UTC m=+0.229229804 container attach 786cabc1d9c6108e76ca40a0d8456353de25a96ca461d94a6154ebd1af8f209b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct  1 12:32:03 np0005464891 systemd[1]: libpod-786cabc1d9c6108e76ca40a0d8456353de25a96ca461d94a6154ebd1af8f209b.scope: Deactivated successfully.
Oct  1 12:32:03 np0005464891 podman[230374]: 2025-10-01 16:32:03.286063168 +0000 UTC m=+0.231004003 container died 786cabc1d9c6108e76ca40a0d8456353de25a96ca461d94a6154ebd1af8f209b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hoover, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 12:32:03 np0005464891 systemd[1]: var-lib-containers-storage-overlay-cd71dc2d31b75e1b1b8cc5b2f7ad0240dba8818aac197aa11b694cc836e23a3c-merged.mount: Deactivated successfully.
Oct  1 12:32:03 np0005464891 podman[230374]: 2025-10-01 16:32:03.342680903 +0000 UTC m=+0.287621748 container remove 786cabc1d9c6108e76ca40a0d8456353de25a96ca461d94a6154ebd1af8f209b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hoover, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  1 12:32:03 np0005464891 systemd[1]: libpod-conmon-786cabc1d9c6108e76ca40a0d8456353de25a96ca461d94a6154ebd1af8f209b.scope: Deactivated successfully.
Oct  1 12:32:03 np0005464891 podman[230450]: 2025-10-01 16:32:03.530874067 +0000 UTC m=+0.058713143 container create e471de9a0beb3272139213d71d1b5756b7f0f20d1ee09bedeae0353dfce834e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_newton, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:32:03 np0005464891 systemd[1]: Started libpod-conmon-e471de9a0beb3272139213d71d1b5756b7f0f20d1ee09bedeae0353dfce834e0.scope.
Oct  1 12:32:03 np0005464891 podman[230450]: 2025-10-01 16:32:03.50988921 +0000 UTC m=+0.037728296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:32:03 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:32:03 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/115c3d9fe4974e6b3cec65d5470d49e345e4db46f4334b575d8abc20fb8b921c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:32:03 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/115c3d9fe4974e6b3cec65d5470d49e345e4db46f4334b575d8abc20fb8b921c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:32:03 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/115c3d9fe4974e6b3cec65d5470d49e345e4db46f4334b575d8abc20fb8b921c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:32:03 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/115c3d9fe4974e6b3cec65d5470d49e345e4db46f4334b575d8abc20fb8b921c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:32:03 np0005464891 podman[230450]: 2025-10-01 16:32:03.676628964 +0000 UTC m=+0.204468030 container init e471de9a0beb3272139213d71d1b5756b7f0f20d1ee09bedeae0353dfce834e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 12:32:03 np0005464891 podman[230450]: 2025-10-01 16:32:03.684194546 +0000 UTC m=+0.212033622 container start e471de9a0beb3272139213d71d1b5756b7f0f20d1ee09bedeae0353dfce834e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_newton, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 12:32:03 np0005464891 podman[230450]: 2025-10-01 16:32:03.732964041 +0000 UTC m=+0.260803117 container attach e471de9a0beb3272139213d71d1b5756b7f0f20d1ee09bedeae0353dfce834e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_newton, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:32:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:32:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:04 np0005464891 bold_newton[230478]: {
Oct  1 12:32:04 np0005464891 bold_newton[230478]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:32:04 np0005464891 bold_newton[230478]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:32:04 np0005464891 bold_newton[230478]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:32:04 np0005464891 bold_newton[230478]:        "osd_id": 2,
Oct  1 12:32:04 np0005464891 bold_newton[230478]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:32:04 np0005464891 bold_newton[230478]:        "type": "bluestore"
Oct  1 12:32:04 np0005464891 bold_newton[230478]:    },
Oct  1 12:32:04 np0005464891 bold_newton[230478]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:32:04 np0005464891 bold_newton[230478]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:32:04 np0005464891 bold_newton[230478]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:32:04 np0005464891 bold_newton[230478]:        "osd_id": 0,
Oct  1 12:32:04 np0005464891 bold_newton[230478]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:32:04 np0005464891 bold_newton[230478]:        "type": "bluestore"
Oct  1 12:32:04 np0005464891 bold_newton[230478]:    },
Oct  1 12:32:04 np0005464891 bold_newton[230478]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:32:04 np0005464891 bold_newton[230478]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:32:04 np0005464891 bold_newton[230478]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:32:04 np0005464891 bold_newton[230478]:        "osd_id": 1,
Oct  1 12:32:04 np0005464891 bold_newton[230478]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:32:04 np0005464891 bold_newton[230478]:        "type": "bluestore"
Oct  1 12:32:04 np0005464891 bold_newton[230478]:    }
Oct  1 12:32:04 np0005464891 bold_newton[230478]: }
Oct  1 12:32:04 np0005464891 systemd[1]: libpod-e471de9a0beb3272139213d71d1b5756b7f0f20d1ee09bedeae0353dfce834e0.scope: Deactivated successfully.
Oct  1 12:32:04 np0005464891 systemd[1]: libpod-e471de9a0beb3272139213d71d1b5756b7f0f20d1ee09bedeae0353dfce834e0.scope: Consumed 1.024s CPU time.
Oct  1 12:32:04 np0005464891 podman[230605]: 2025-10-01 16:32:04.724058697 +0000 UTC m=+0.093893438 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 12:32:04 np0005464891 podman[230661]: 2025-10-01 16:32:04.752737279 +0000 UTC m=+0.031394059 container died e471de9a0beb3272139213d71d1b5756b7f0f20d1ee09bedeae0353dfce834e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_newton, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:32:04 np0005464891 systemd[1]: var-lib-containers-storage-overlay-115c3d9fe4974e6b3cec65d5470d49e345e4db46f4334b575d8abc20fb8b921c-merged.mount: Deactivated successfully.
Oct  1 12:32:04 np0005464891 podman[230661]: 2025-10-01 16:32:04.820812383 +0000 UTC m=+0.099469133 container remove e471de9a0beb3272139213d71d1b5756b7f0f20d1ee09bedeae0353dfce834e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  1 12:32:04 np0005464891 systemd[1]: libpod-conmon-e471de9a0beb3272139213d71d1b5756b7f0f20d1ee09bedeae0353dfce834e0.scope: Deactivated successfully.
Oct  1 12:32:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:32:04 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:32:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:32:04 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:32:04 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev a670c404-27c3-4fa5-9538-4d572467335b does not exist
Oct  1 12:32:04 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 01893880-33a7-4b22-b34d-fb9ce2654130 does not exist
Oct  1 12:32:05 np0005464891 python3[230657]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  1 12:32:05 np0005464891 podman[230767]: 2025-10-01 16:32:05.237366446 +0000 UTC m=+0.050993127 container create 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  1 12:32:05 np0005464891 podman[230767]: 2025-10-01 16:32:05.212898582 +0000 UTC m=+0.026525303 image pull 81d94872551c3ae3c30801602bbb5f0c44872f15dcde472a0ba869fe2f28966e quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct  1 12:32:05 np0005464891 python3[230657]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct  1 12:32:05 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:32:05 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:32:06 np0005464891 python3.9[230958]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:32:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:07 np0005464891 python3.9[231112]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:32:07 np0005464891 python3.9[231188]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:32:08 np0005464891 python3.9[231339]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759336327.567501-386-144875993665747/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:32:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:32:08 np0005464891 python3.9[231415]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  1 12:32:08 np0005464891 systemd[1]: Reloading.
Oct  1 12:32:09 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:32:09 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:32:09 np0005464891 python3.9[231525]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:32:10 np0005464891 systemd[1]: Reloading.
Oct  1 12:32:10 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:32:10 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:32:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:10 np0005464891 systemd[1]: Starting iscsid container...
Oct  1 12:32:10 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:32:10 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec1b6db4958366bab89f455378f8214f8dcbb4de2b01243afc3a43ee4fa37c8/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct  1 12:32:10 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec1b6db4958366bab89f455378f8214f8dcbb4de2b01243afc3a43ee4fa37c8/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  1 12:32:10 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec1b6db4958366bab89f455378f8214f8dcbb4de2b01243afc3a43ee4fa37c8/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  1 12:32:10 np0005464891 systemd[1]: Started /usr/bin/podman healthcheck run 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67.
Oct  1 12:32:10 np0005464891 podman[231564]: 2025-10-01 16:32:10.805527807 +0000 UTC m=+0.342813492 container init 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid)
Oct  1 12:32:10 np0005464891 iscsid[231580]: + sudo -E kolla_set_configs
Oct  1 12:32:10 np0005464891 podman[231564]: 2025-10-01 16:32:10.845019101 +0000 UTC m=+0.382304726 container start 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:32:10 np0005464891 podman[231564]: iscsid
Oct  1 12:32:10 np0005464891 systemd[1]: Started iscsid container.
Oct  1 12:32:10 np0005464891 systemd[1]: Created slice User Slice of UID 0.
Oct  1 12:32:10 np0005464891 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct  1 12:32:10 np0005464891 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct  1 12:32:10 np0005464891 systemd[1]: Starting User Manager for UID 0...
Oct  1 12:32:10 np0005464891 podman[231587]: 2025-10-01 16:32:10.949383041 +0000 UTC m=+0.086691896 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  1 12:32:10 np0005464891 systemd[1]: 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67-609f632154f58427.service: Main process exited, code=exited, status=1/FAILURE
Oct  1 12:32:10 np0005464891 systemd[1]: 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67-609f632154f58427.service: Failed with result 'exit-code'.
Oct  1 12:32:11 np0005464891 systemd[231607]: Queued start job for default target Main User Target.
Oct  1 12:32:11 np0005464891 systemd[231607]: Created slice User Application Slice.
Oct  1 12:32:11 np0005464891 systemd[231607]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct  1 12:32:11 np0005464891 systemd[231607]: Started Daily Cleanup of User's Temporary Directories.
Oct  1 12:32:11 np0005464891 systemd[231607]: Reached target Paths.
Oct  1 12:32:11 np0005464891 systemd[231607]: Reached target Timers.
Oct  1 12:32:11 np0005464891 systemd[231607]: Starting D-Bus User Message Bus Socket...
Oct  1 12:32:11 np0005464891 systemd[231607]: Starting Create User's Volatile Files and Directories...
Oct  1 12:32:11 np0005464891 systemd[231607]: Listening on D-Bus User Message Bus Socket.
Oct  1 12:32:11 np0005464891 systemd[231607]: Reached target Sockets.
Oct  1 12:32:11 np0005464891 systemd[231607]: Finished Create User's Volatile Files and Directories.
Oct  1 12:32:11 np0005464891 systemd[231607]: Reached target Basic System.
Oct  1 12:32:11 np0005464891 systemd[231607]: Reached target Main User Target.
Oct  1 12:32:11 np0005464891 systemd[231607]: Startup finished in 191ms.
Oct  1 12:32:11 np0005464891 systemd[1]: Started User Manager for UID 0.
Oct  1 12:32:11 np0005464891 systemd[1]: Started Session c3 of User root.
Oct  1 12:32:11 np0005464891 iscsid[231580]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  1 12:32:11 np0005464891 iscsid[231580]: INFO:__main__:Validating config file
Oct  1 12:32:11 np0005464891 iscsid[231580]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  1 12:32:11 np0005464891 iscsid[231580]: INFO:__main__:Writing out command to execute
Oct  1 12:32:11 np0005464891 iscsid[231580]: ++ cat /run_command
Oct  1 12:32:11 np0005464891 systemd[1]: session-c3.scope: Deactivated successfully.
Oct  1 12:32:11 np0005464891 iscsid[231580]: + CMD='/usr/sbin/iscsid -f'
Oct  1 12:32:11 np0005464891 iscsid[231580]: + ARGS=
Oct  1 12:32:11 np0005464891 iscsid[231580]: + sudo kolla_copy_cacerts
Oct  1 12:32:11 np0005464891 systemd[1]: Started Session c4 of User root.
Oct  1 12:32:11 np0005464891 iscsid[231580]: + [[ ! -n '' ]]
Oct  1 12:32:11 np0005464891 iscsid[231580]: + . kolla_extend_start
Oct  1 12:32:11 np0005464891 systemd[1]: session-c4.scope: Deactivated successfully.
Oct  1 12:32:11 np0005464891 iscsid[231580]: Running command: '/usr/sbin/iscsid -f'
Oct  1 12:32:11 np0005464891 iscsid[231580]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct  1 12:32:11 np0005464891 iscsid[231580]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct  1 12:32:11 np0005464891 iscsid[231580]: + umask 0022
Oct  1 12:32:11 np0005464891 iscsid[231580]: + exec /usr/sbin/iscsid -f
Oct  1 12:32:11 np0005464891 kernel: Loading iSCSI transport class v2.0-870.
Oct  1 12:32:11 np0005464891 python3.9[231785]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:32:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:32:11
Oct  1 12:32:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:32:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:32:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'backups', '.rgw.root', 'images', 'default.rgw.meta', 'vms', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct  1 12:32:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:32:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:32:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:32:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:32:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:32:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:32:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:32:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:32:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:32:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:32:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:32:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:32:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:32:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:32:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:32:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:32:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:32:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:32:12.428 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:32:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:32:12.429 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:32:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:32:12.430 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:32:12 np0005464891 python3.9[231937]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:32:13 np0005464891 python3.9[232089]: ansible-ansible.builtin.service_facts Invoked
Oct  1 12:32:13 np0005464891 network[232106]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  1 12:32:13 np0005464891 network[232107]: 'network-scripts' will be removed from distribution in near future.
Oct  1 12:32:13 np0005464891 network[232108]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  1 12:32:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:32:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:16 np0005464891 podman[232202]: 2025-10-01 16:32:16.783914811 +0000 UTC m=+0.060540215 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:32:18 np0005464891 python3.9[232402]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  1 12:32:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:32:19 np0005464891 python3.9[232554]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct  1 12:32:20 np0005464891 python3.9[232710]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:32:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:20 np0005464891 python3.9[232833]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759336339.5718155-460-82262811727650/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:32:21 np0005464891 systemd[1]: Stopping User Manager for UID 0...
Oct  1 12:32:21 np0005464891 systemd[231607]: Activating special unit Exit the Session...
Oct  1 12:32:21 np0005464891 systemd[231607]: Stopped target Main User Target.
Oct  1 12:32:21 np0005464891 systemd[231607]: Stopped target Basic System.
Oct  1 12:32:21 np0005464891 systemd[231607]: Stopped target Paths.
Oct  1 12:32:21 np0005464891 systemd[231607]: Stopped target Sockets.
Oct  1 12:32:21 np0005464891 systemd[231607]: Stopped target Timers.
Oct  1 12:32:21 np0005464891 systemd[231607]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  1 12:32:21 np0005464891 systemd[231607]: Closed D-Bus User Message Bus Socket.
Oct  1 12:32:21 np0005464891 systemd[231607]: Stopped Create User's Volatile Files and Directories.
Oct  1 12:32:21 np0005464891 systemd[231607]: Removed slice User Application Slice.
Oct  1 12:32:21 np0005464891 systemd[231607]: Reached target Shutdown.
Oct  1 12:32:21 np0005464891 systemd[231607]: Finished Exit the Session.
Oct  1 12:32:21 np0005464891 systemd[231607]: Reached target Exit the Session.
Oct  1 12:32:21 np0005464891 systemd[1]: user@0.service: Deactivated successfully.
Oct  1 12:32:21 np0005464891 systemd[1]: Stopped User Manager for UID 0.
Oct  1 12:32:21 np0005464891 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct  1 12:32:21 np0005464891 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct  1 12:32:21 np0005464891 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct  1 12:32:21 np0005464891 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct  1 12:32:21 np0005464891 systemd[1]: Removed slice User Slice of UID 0.
Oct  1 12:32:21 np0005464891 python3.9[232985]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:32:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:32:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:32:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:32:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:32:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:32:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:32:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:32:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:32:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:32:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:32:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:32:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:32:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:32:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:32:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:32:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:32:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:32:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:32:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:32:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:32:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:32:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:32:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:32:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:22 np0005464891 python3.9[233138]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 12:32:23 np0005464891 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  1 12:32:23 np0005464891 systemd[1]: Stopped Load Kernel Modules.
Oct  1 12:32:23 np0005464891 systemd[1]: Stopping Load Kernel Modules...
Oct  1 12:32:23 np0005464891 systemd[1]: Starting Load Kernel Modules...
Oct  1 12:32:23 np0005464891 systemd[1]: Finished Load Kernel Modules.
Oct  1 12:32:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:32:24 np0005464891 python3.9[233294]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:32:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:25 np0005464891 python3.9[233446]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:32:25 np0005464891 python3.9[233598]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:32:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:26 np0005464891 python3.9[233750]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:32:27 np0005464891 python3.9[233873]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759336346.1650856-518-258344936761166/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:32:28 np0005464891 python3.9[234025]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:32:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:32:29 np0005464891 python3.9[234178]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:32:30 np0005464891 python3.9[234330]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:32:30 np0005464891 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct  1 12:32:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:30 np0005464891 python3.9[234483]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:32:31 np0005464891 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct  1 12:32:31 np0005464891 python3.9[234636]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:32:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:32 np0005464891 python3.9[234788]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:32:33 np0005464891 python3.9[234940]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:32:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:32:34 np0005464891 python3.9[235092]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:32:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:34 np0005464891 podman[235244]: 2025-10-01 16:32:34.953228338 +0000 UTC m=+0.125628435 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  1 12:32:35 np0005464891 python3.9[235245]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:32:35 np0005464891 python3.9[235425]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:32:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:36 np0005464891 python3.9[235577]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:32:37 np0005464891 python3.9[235729]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:32:38 np0005464891 python3.9[235807]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:32:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:38 np0005464891 python3.9[235959]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:32:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:32:39 np0005464891 python3.9[236037]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:32:40 np0005464891 python3.9[236189]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:32:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:41 np0005464891 python3.9[236341]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:32:41 np0005464891 podman[236391]: 2025-10-01 16:32:41.524904741 +0000 UTC m=+0.081769628 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 12:32:41 np0005464891 python3.9[236435]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:32:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:32:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:32:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:32:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:32:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:32:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:32:42 np0005464891 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  1 12:32:42 np0005464891 systemd[1]: virtqemud.service: Deactivated successfully.
Oct  1 12:32:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:42 np0005464891 python3.9[236591]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:32:43 np0005464891 python3.9[236669]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:32:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:32:44 np0005464891 python3.9[236821]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:32:44 np0005464891 systemd[1]: Reloading.
Oct  1 12:32:44 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:32:44 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:32:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:45 np0005464891 python3.9[237010]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:32:45 np0005464891 python3.9[237088]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:32:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:46 np0005464891 python3.9[237240]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:32:46 np0005464891 podman[237266]: 2025-10-01 16:32:46.950298917 +0000 UTC m=+0.062898690 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  1 12:32:47 np0005464891 python3.9[237337]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:32:48 np0005464891 python3.9[237489]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:32:48 np0005464891 systemd[1]: Reloading.
Oct  1 12:32:48 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:32:48 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:32:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:48 np0005464891 systemd[1]: Starting Create netns directory...
Oct  1 12:32:48 np0005464891 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  1 12:32:48 np0005464891 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  1 12:32:48 np0005464891 systemd[1]: Finished Create netns directory.
Oct  1 12:32:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:32:49 np0005464891 python3.9[237682]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:32:50 np0005464891 python3.9[237834]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:32:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:50 np0005464891 python3.9[237957]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759336369.8008218-725-25985162601619/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:32:51 np0005464891 python3.9[238109]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:32:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:52 np0005464891 python3.9[238261]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:32:53 np0005464891 python3.9[238384]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759336372.1794605-750-246576649030554/.source.json _original_basename=.2d_xfpne follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:32:53 np0005464891 python3.9[238536]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:32:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:32:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:56 np0005464891 python3.9[238963]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct  1 12:32:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:57 np0005464891 python3.9[239115]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  1 12:32:58 np0005464891 python3.9[239267]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  1 12:32:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:32:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:32:59 np0005464891 python3[239445]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  1 12:33:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:00 np0005464891 podman[239458]: 2025-10-01 16:33:00.899222387 +0000 UTC m=+1.143215743 image pull 4ee39d2b05f9d7d8e7f025baefe799c33619f4419f4eb27d17ca383a40343475 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct  1 12:33:01 np0005464891 podman[239515]: 2025-10-01 16:33:01.082591277 +0000 UTC m=+0.061676566 container create 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:33:01 np0005464891 podman[239515]: 2025-10-01 16:33:01.058502043 +0000 UTC m=+0.037587322 image pull 4ee39d2b05f9d7d8e7f025baefe799c33619f4419f4eb27d17ca383a40343475 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct  1 12:33:01 np0005464891 python3[239445]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct  1 12:33:02 np0005464891 python3.9[239705]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:33:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:03 np0005464891 python3.9[239859]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:33:03 np0005464891 python3.9[239935]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:33:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:33:04 np0005464891 python3.9[240086]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759336383.6828659-838-86340486220515/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:33:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:04 np0005464891 python3.9[240162]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  1 12:33:05 np0005464891 systemd[1]: Reloading.
Oct  1 12:33:05 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:33:05 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:33:05 np0005464891 podman[240164]: 2025-10-01 16:33:05.190026952 +0000 UTC m=+0.152912989 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:33:06 np0005464891 python3.9[240411]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:33:06 np0005464891 systemd[1]: Reloading.
Oct  1 12:33:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:33:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:33:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:33:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:33:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:33:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:33:06 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 6be4f1e5-eaee-4f6a-b586-35653f7c4060 does not exist
Oct  1 12:33:06 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev aab28b6b-2b88-45d8-af61-06006af9f747 does not exist
Oct  1 12:33:06 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev c66de461-b4ef-4112-b6fe-9dff565a13b5 does not exist
Oct  1 12:33:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:33:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:33:06 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:33:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:33:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:33:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:33:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:33:06 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:33:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:06 np0005464891 systemd[1]: Starting multipathd container...
Oct  1 12:33:06 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:33:06 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:33:06 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:33:06 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:33:06 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76128f396b04a92a2ce9789253c79369bf7952ec5b5d326301725f9161dddcca/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  1 12:33:06 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76128f396b04a92a2ce9789253c79369bf7952ec5b5d326301725f9161dddcca/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  1 12:33:06 np0005464891 systemd[1]: Started /usr/bin/podman healthcheck run 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69.
Oct  1 12:33:06 np0005464891 podman[240492]: 2025-10-01 16:33:06.739824228 +0000 UTC m=+0.257658169 container init 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:33:06 np0005464891 multipathd[240581]: + sudo -E kolla_set_configs
Oct  1 12:33:06 np0005464891 podman[240492]: 2025-10-01 16:33:06.787001508 +0000 UTC m=+0.304835409 container start 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 12:33:06 np0005464891 podman[240492]: multipathd
Oct  1 12:33:06 np0005464891 systemd[1]: Started multipathd container.
Oct  1 12:33:06 np0005464891 multipathd[240581]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  1 12:33:06 np0005464891 multipathd[240581]: INFO:__main__:Validating config file
Oct  1 12:33:06 np0005464891 multipathd[240581]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  1 12:33:06 np0005464891 multipathd[240581]: INFO:__main__:Writing out command to execute
Oct  1 12:33:06 np0005464891 multipathd[240581]: ++ cat /run_command
Oct  1 12:33:06 np0005464891 multipathd[240581]: + CMD='/usr/sbin/multipathd -d'
Oct  1 12:33:06 np0005464891 multipathd[240581]: + ARGS=
Oct  1 12:33:06 np0005464891 multipathd[240581]: + sudo kolla_copy_cacerts
Oct  1 12:33:06 np0005464891 multipathd[240581]: + [[ ! -n '' ]]
Oct  1 12:33:06 np0005464891 multipathd[240581]: + . kolla_extend_start
Oct  1 12:33:06 np0005464891 multipathd[240581]: Running command: '/usr/sbin/multipathd -d'
Oct  1 12:33:06 np0005464891 multipathd[240581]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct  1 12:33:06 np0005464891 multipathd[240581]: + umask 0022
Oct  1 12:33:06 np0005464891 multipathd[240581]: + exec /usr/sbin/multipathd -d
Oct  1 12:33:06 np0005464891 podman[240590]: 2025-10-01 16:33:06.878447326 +0000 UTC m=+0.084371921 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:33:06 np0005464891 multipathd[240581]: 3335.518826 | --------start up--------
Oct  1 12:33:06 np0005464891 multipathd[240581]: 3335.518847 | read /etc/multipath.conf
Oct  1 12:33:06 np0005464891 systemd[1]: 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69-54c999b4fefbd0d1.service: Main process exited, code=exited, status=1/FAILURE
Oct  1 12:33:06 np0005464891 systemd[1]: 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69-54c999b4fefbd0d1.service: Failed with result 'exit-code'.
Oct  1 12:33:06 np0005464891 multipathd[240581]: 3335.524297 | path checkers start up
Oct  1 12:33:07 np0005464891 podman[240685]: 2025-10-01 16:33:07.069206873 +0000 UTC m=+0.057310765 container create 9df30fdbcb2de86ad354db012a1d20de8c0876c50d2504b973b435bbc86fed7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Oct  1 12:33:07 np0005464891 systemd[1]: Started libpod-conmon-9df30fdbcb2de86ad354db012a1d20de8c0876c50d2504b973b435bbc86fed7d.scope.
Oct  1 12:33:07 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:33:07 np0005464891 podman[240685]: 2025-10-01 16:33:07.046835687 +0000 UTC m=+0.034939569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:33:07 np0005464891 podman[240685]: 2025-10-01 16:33:07.151818153 +0000 UTC m=+0.139922045 container init 9df30fdbcb2de86ad354db012a1d20de8c0876c50d2504b973b435bbc86fed7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct  1 12:33:07 np0005464891 podman[240685]: 2025-10-01 16:33:07.165233979 +0000 UTC m=+0.153337841 container start 9df30fdbcb2de86ad354db012a1d20de8c0876c50d2504b973b435bbc86fed7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:33:07 np0005464891 podman[240685]: 2025-10-01 16:33:07.168623033 +0000 UTC m=+0.156726895 container attach 9df30fdbcb2de86ad354db012a1d20de8c0876c50d2504b973b435bbc86fed7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:33:07 np0005464891 cool_lewin[240752]: 167 167
Oct  1 12:33:07 np0005464891 systemd[1]: libpod-9df30fdbcb2de86ad354db012a1d20de8c0876c50d2504b973b435bbc86fed7d.scope: Deactivated successfully.
Oct  1 12:33:07 np0005464891 podman[240685]: 2025-10-01 16:33:07.173071368 +0000 UTC m=+0.161175260 container died 9df30fdbcb2de86ad354db012a1d20de8c0876c50d2504b973b435bbc86fed7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:33:07 np0005464891 systemd[1]: var-lib-containers-storage-overlay-00a1ca3803d76f56b295dba13247fbe435d5c8a2f9c23eac0ffd3909dde73f31-merged.mount: Deactivated successfully.
Oct  1 12:33:07 np0005464891 podman[240685]: 2025-10-01 16:33:07.22888016 +0000 UTC m=+0.216984032 container remove 9df30fdbcb2de86ad354db012a1d20de8c0876c50d2504b973b435bbc86fed7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 12:33:07 np0005464891 systemd[1]: libpod-conmon-9df30fdbcb2de86ad354db012a1d20de8c0876c50d2504b973b435bbc86fed7d.scope: Deactivated successfully.
Oct  1 12:33:07 np0005464891 podman[240853]: 2025-10-01 16:33:07.415387167 +0000 UTC m=+0.049352292 container create f7808a00062c78f61202aff84d6c3f2803f93e4add708b49c8c5f18e3b8960a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hoover, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:33:07 np0005464891 systemd[1]: Started libpod-conmon-f7808a00062c78f61202aff84d6c3f2803f93e4add708b49c8c5f18e3b8960a1.scope.
Oct  1 12:33:07 np0005464891 podman[240853]: 2025-10-01 16:33:07.395416578 +0000 UTC m=+0.029381753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:33:07 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:33:07 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/872c4fb48a7ef939a8a5031e1ff35ca51268c84ee4dd72d22c699765f60ed8e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:33:07 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/872c4fb48a7ef939a8a5031e1ff35ca51268c84ee4dd72d22c699765f60ed8e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:33:07 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/872c4fb48a7ef939a8a5031e1ff35ca51268c84ee4dd72d22c699765f60ed8e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:33:07 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/872c4fb48a7ef939a8a5031e1ff35ca51268c84ee4dd72d22c699765f60ed8e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:33:07 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/872c4fb48a7ef939a8a5031e1ff35ca51268c84ee4dd72d22c699765f60ed8e5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:33:07 np0005464891 podman[240853]: 2025-10-01 16:33:07.517086742 +0000 UTC m=+0.151051897 container init f7808a00062c78f61202aff84d6c3f2803f93e4add708b49c8c5f18e3b8960a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hoover, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 12:33:07 np0005464891 podman[240853]: 2025-10-01 16:33:07.528704527 +0000 UTC m=+0.162669692 container start f7808a00062c78f61202aff84d6c3f2803f93e4add708b49c8c5f18e3b8960a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hoover, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 12:33:07 np0005464891 python3.9[240847]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:33:07 np0005464891 podman[240853]: 2025-10-01 16:33:07.532962936 +0000 UTC m=+0.166928091 container attach f7808a00062c78f61202aff84d6c3f2803f93e4add708b49c8c5f18e3b8960a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hoover, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:33:08 np0005464891 python3.9[241027]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:33:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:08 np0005464891 happy_hoover[240868]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:33:08 np0005464891 happy_hoover[240868]: --> relative data size: 1.0
Oct  1 12:33:08 np0005464891 happy_hoover[240868]: --> All data devices are unavailable
Oct  1 12:33:08 np0005464891 systemd[1]: libpod-f7808a00062c78f61202aff84d6c3f2803f93e4add708b49c8c5f18e3b8960a1.scope: Deactivated successfully.
Oct  1 12:33:08 np0005464891 podman[240853]: 2025-10-01 16:33:08.652067083 +0000 UTC m=+1.286032228 container died f7808a00062c78f61202aff84d6c3f2803f93e4add708b49c8c5f18e3b8960a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hoover, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:33:08 np0005464891 systemd[1]: libpod-f7808a00062c78f61202aff84d6c3f2803f93e4add708b49c8c5f18e3b8960a1.scope: Consumed 1.033s CPU time.
Oct  1 12:33:08 np0005464891 systemd[1]: var-lib-containers-storage-overlay-872c4fb48a7ef939a8a5031e1ff35ca51268c84ee4dd72d22c699765f60ed8e5-merged.mount: Deactivated successfully.
Oct  1 12:33:08 np0005464891 podman[240853]: 2025-10-01 16:33:08.706415973 +0000 UTC m=+1.340381088 container remove f7808a00062c78f61202aff84d6c3f2803f93e4add708b49c8c5f18e3b8960a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  1 12:33:08 np0005464891 systemd[1]: libpod-conmon-f7808a00062c78f61202aff84d6c3f2803f93e4add708b49c8c5f18e3b8960a1.scope: Deactivated successfully.
Oct  1 12:33:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:33:09 np0005464891 python3.9[241303]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 12:33:09 np0005464891 systemd[1]: Stopping multipathd container...
Oct  1 12:33:09 np0005464891 multipathd[240581]: 3338.028893 | exit (signal)
Oct  1 12:33:09 np0005464891 multipathd[240581]: 3338.029769 | --------shut down-------
Oct  1 12:33:09 np0005464891 systemd[1]: libpod-4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69.scope: Deactivated successfully.
Oct  1 12:33:09 np0005464891 podman[241360]: 2025-10-01 16:33:09.435966003 +0000 UTC m=+0.128577878 container died 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd)
Oct  1 12:33:09 np0005464891 systemd[1]: 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69-54c999b4fefbd0d1.timer: Deactivated successfully.
Oct  1 12:33:09 np0005464891 systemd[1]: Stopped /usr/bin/podman healthcheck run 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69.
Oct  1 12:33:09 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69-userdata-shm.mount: Deactivated successfully.
Oct  1 12:33:09 np0005464891 systemd[1]: var-lib-containers-storage-overlay-76128f396b04a92a2ce9789253c79369bf7952ec5b5d326301725f9161dddcca-merged.mount: Deactivated successfully.
Oct  1 12:33:09 np0005464891 podman[241386]: 2025-10-01 16:33:09.479428178 +0000 UTC m=+0.046223343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:33:09 np0005464891 podman[241360]: 2025-10-01 16:33:09.68858455 +0000 UTC m=+0.381196375 container cleanup 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  1 12:33:09 np0005464891 podman[241360]: multipathd
Oct  1 12:33:09 np0005464891 podman[241386]: 2025-10-01 16:33:09.695469152 +0000 UTC m=+0.262264237 container create 4d811fc005528beeb353095a9a572e2f80e5a526e8cc618a584dbb914dd310fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 12:33:09 np0005464891 systemd[1]: Started libpod-conmon-4d811fc005528beeb353095a9a572e2f80e5a526e8cc618a584dbb914dd310fd.scope.
Oct  1 12:33:09 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:33:09 np0005464891 podman[241416]: multipathd
Oct  1 12:33:09 np0005464891 podman[241386]: 2025-10-01 16:33:09.788782763 +0000 UTC m=+0.355577918 container init 4d811fc005528beeb353095a9a572e2f80e5a526e8cc618a584dbb914dd310fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 12:33:09 np0005464891 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct  1 12:33:09 np0005464891 systemd[1]: Stopped multipathd container.
Oct  1 12:33:09 np0005464891 podman[241386]: 2025-10-01 16:33:09.802760834 +0000 UTC m=+0.369555949 container start 4d811fc005528beeb353095a9a572e2f80e5a526e8cc618a584dbb914dd310fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_meitner, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Oct  1 12:33:09 np0005464891 podman[241386]: 2025-10-01 16:33:09.808087923 +0000 UTC m=+0.374883088 container attach 4d811fc005528beeb353095a9a572e2f80e5a526e8cc618a584dbb914dd310fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_meitner, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 12:33:09 np0005464891 determined_meitner[241424]: 167 167
Oct  1 12:33:09 np0005464891 podman[241386]: 2025-10-01 16:33:09.812934958 +0000 UTC m=+0.379730073 container died 4d811fc005528beeb353095a9a572e2f80e5a526e8cc618a584dbb914dd310fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_meitner, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  1 12:33:09 np0005464891 systemd[1]: Starting multipathd container...
Oct  1 12:33:09 np0005464891 systemd[1]: libpod-4d811fc005528beeb353095a9a572e2f80e5a526e8cc618a584dbb914dd310fd.scope: Deactivated successfully.
Oct  1 12:33:09 np0005464891 systemd[1]: var-lib-containers-storage-overlay-cbbd64876174b58892091d627f94f7b29fd5b4c88160a2646a23d94c062dc6ad-merged.mount: Deactivated successfully.
Oct  1 12:33:09 np0005464891 podman[241386]: 2025-10-01 16:33:09.859409209 +0000 UTC m=+0.426204294 container remove 4d811fc005528beeb353095a9a572e2f80e5a526e8cc618a584dbb914dd310fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 12:33:09 np0005464891 systemd[1]: libpod-conmon-4d811fc005528beeb353095a9a572e2f80e5a526e8cc618a584dbb914dd310fd.scope: Deactivated successfully.
Oct  1 12:33:09 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:33:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76128f396b04a92a2ce9789253c79369bf7952ec5b5d326301725f9161dddcca/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  1 12:33:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76128f396b04a92a2ce9789253c79369bf7952ec5b5d326301725f9161dddcca/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  1 12:33:09 np0005464891 systemd[1]: Started /usr/bin/podman healthcheck run 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69.
Oct  1 12:33:09 np0005464891 podman[241433]: 2025-10-01 16:33:09.983417208 +0000 UTC m=+0.151969682 container init 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 12:33:09 np0005464891 multipathd[241460]: + sudo -E kolla_set_configs
Oct  1 12:33:10 np0005464891 podman[241433]: 2025-10-01 16:33:10.019773655 +0000 UTC m=+0.188326129 container start 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:33:10 np0005464891 podman[241433]: multipathd
Oct  1 12:33:10 np0005464891 systemd[1]: Started multipathd container.
Oct  1 12:33:10 np0005464891 podman[241470]: 2025-10-01 16:33:10.05357177 +0000 UTC m=+0.047727646 container create 3a71f0378c2e3956fa2be53568319dd811006b4ba40ae7c04b2f598a1f77c00c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wright, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:33:10 np0005464891 multipathd[241460]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  1 12:33:10 np0005464891 multipathd[241460]: INFO:__main__:Validating config file
Oct  1 12:33:10 np0005464891 multipathd[241460]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  1 12:33:10 np0005464891 multipathd[241460]: INFO:__main__:Writing out command to execute
Oct  1 12:33:10 np0005464891 multipathd[241460]: ++ cat /run_command
Oct  1 12:33:10 np0005464891 multipathd[241460]: + CMD='/usr/sbin/multipathd -d'
Oct  1 12:33:10 np0005464891 multipathd[241460]: + ARGS=
Oct  1 12:33:10 np0005464891 multipathd[241460]: + sudo kolla_copy_cacerts
Oct  1 12:33:10 np0005464891 multipathd[241460]: Running command: '/usr/sbin/multipathd -d'
Oct  1 12:33:10 np0005464891 multipathd[241460]: + [[ ! -n '' ]]
Oct  1 12:33:10 np0005464891 multipathd[241460]: + . kolla_extend_start
Oct  1 12:33:10 np0005464891 multipathd[241460]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct  1 12:33:10 np0005464891 multipathd[241460]: + umask 0022
Oct  1 12:33:10 np0005464891 multipathd[241460]: + exec /usr/sbin/multipathd -d
Oct  1 12:33:10 np0005464891 systemd[1]: Started libpod-conmon-3a71f0378c2e3956fa2be53568319dd811006b4ba40ae7c04b2f598a1f77c00c.scope.
Oct  1 12:33:10 np0005464891 multipathd[241460]: 3338.737838 | --------start up--------
Oct  1 12:33:10 np0005464891 multipathd[241460]: 3338.737856 | read /etc/multipath.conf
Oct  1 12:33:10 np0005464891 multipathd[241460]: 3338.743522 | path checkers start up
Oct  1 12:33:10 np0005464891 podman[241474]: 2025-10-01 16:33:10.119602558 +0000 UTC m=+0.088464226 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  1 12:33:10 np0005464891 podman[241470]: 2025-10-01 16:33:10.031339368 +0000 UTC m=+0.025495264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:33:10 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:33:10 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d390073d7a9cf13049a9f90d88eb161f92fa41cc1dd4443c162206fa713fa633/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:33:10 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d390073d7a9cf13049a9f90d88eb161f92fa41cc1dd4443c162206fa713fa633/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:33:10 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d390073d7a9cf13049a9f90d88eb161f92fa41cc1dd4443c162206fa713fa633/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:33:10 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d390073d7a9cf13049a9f90d88eb161f92fa41cc1dd4443c162206fa713fa633/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:33:10 np0005464891 podman[241470]: 2025-10-01 16:33:10.143574278 +0000 UTC m=+0.137730174 container init 3a71f0378c2e3956fa2be53568319dd811006b4ba40ae7c04b2f598a1f77c00c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wright, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 12:33:10 np0005464891 podman[241470]: 2025-10-01 16:33:10.151378067 +0000 UTC m=+0.145533943 container start 3a71f0378c2e3956fa2be53568319dd811006b4ba40ae7c04b2f598a1f77c00c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wright, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:33:10 np0005464891 podman[241470]: 2025-10-01 16:33:10.154578616 +0000 UTC m=+0.148734492 container attach 3a71f0378c2e3956fa2be53568319dd811006b4ba40ae7c04b2f598a1f77c00c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wright, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 12:33:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:10 np0005464891 python3.9[241674]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]: {
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:    "0": [
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:        {
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "devices": [
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "/dev/loop3"
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            ],
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "lv_name": "ceph_lv0",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "lv_size": "21470642176",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "name": "ceph_lv0",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "tags": {
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.cluster_name": "ceph",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.crush_device_class": "",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.encrypted": "0",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.osd_id": "0",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.type": "block",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.vdo": "0"
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            },
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "type": "block",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "vg_name": "ceph_vg0"
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:        }
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:    ],
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:    "1": [
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:        {
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "devices": [
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "/dev/loop4"
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            ],
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "lv_name": "ceph_lv1",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "lv_size": "21470642176",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "name": "ceph_lv1",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "tags": {
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.cluster_name": "ceph",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.crush_device_class": "",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.encrypted": "0",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.osd_id": "1",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.type": "block",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.vdo": "0"
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            },
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "type": "block",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "vg_name": "ceph_vg1"
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:        }
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:    ],
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:    "2": [
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:        {
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "devices": [
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "/dev/loop5"
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            ],
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "lv_name": "ceph_lv2",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "lv_size": "21470642176",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "name": "ceph_lv2",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "tags": {
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.cluster_name": "ceph",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.crush_device_class": "",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.encrypted": "0",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.osd_id": "2",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.type": "block",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:                "ceph.vdo": "0"
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            },
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "type": "block",
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:            "vg_name": "ceph_vg2"
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:        }
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]:    ]
Oct  1 12:33:10 np0005464891 intelligent_wright[241515]: }
Oct  1 12:33:10 np0005464891 systemd[1]: libpod-3a71f0378c2e3956fa2be53568319dd811006b4ba40ae7c04b2f598a1f77c00c.scope: Deactivated successfully.
Oct  1 12:33:10 np0005464891 podman[241470]: 2025-10-01 16:33:10.948424135 +0000 UTC m=+0.942580071 container died 3a71f0378c2e3956fa2be53568319dd811006b4ba40ae7c04b2f598a1f77c00c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wright, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 12:33:10 np0005464891 systemd[1]: var-lib-containers-storage-overlay-d390073d7a9cf13049a9f90d88eb161f92fa41cc1dd4443c162206fa713fa633-merged.mount: Deactivated successfully.
Oct  1 12:33:11 np0005464891 podman[241470]: 2025-10-01 16:33:11.022961069 +0000 UTC m=+1.017116935 container remove 3a71f0378c2e3956fa2be53568319dd811006b4ba40ae7c04b2f598a1f77c00c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 12:33:11 np0005464891 systemd[1]: libpod-conmon-3a71f0378c2e3956fa2be53568319dd811006b4ba40ae7c04b2f598a1f77c00c.scope: Deactivated successfully.
Oct  1 12:33:11 np0005464891 podman[241916]: 2025-10-01 16:33:11.700592886 +0000 UTC m=+0.089164155 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 12:33:11 np0005464891 python3.9[241973]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  1 12:33:11 np0005464891 podman[242002]: 2025-10-01 16:33:11.923018438 +0000 UTC m=+0.070572195 container create a93a58233af7c3b86409bc070ffce6d81136d33ec412016a6a60e7f60736416c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:33:11 np0005464891 systemd[1]: Started libpod-conmon-a93a58233af7c3b86409bc070ffce6d81136d33ec412016a6a60e7f60736416c.scope.
Oct  1 12:33:11 np0005464891 podman[242002]: 2025-10-01 16:33:11.890744206 +0000 UTC m=+0.038298043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:33:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:33:11
Oct  1 12:33:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:33:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:33:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'images', '.rgw.root', 'vms', '.mgr', 'default.rgw.meta', 'volumes', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct  1 12:33:11 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:33:12 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:33:12 np0005464891 podman[242002]: 2025-10-01 16:33:12.018496789 +0000 UTC m=+0.166050566 container init a93a58233af7c3b86409bc070ffce6d81136d33ec412016a6a60e7f60736416c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 12:33:12 np0005464891 podman[242002]: 2025-10-01 16:33:12.029961811 +0000 UTC m=+0.177515588 container start a93a58233af7c3b86409bc070ffce6d81136d33ec412016a6a60e7f60736416c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct  1 12:33:12 np0005464891 podman[242002]: 2025-10-01 16:33:12.034464246 +0000 UTC m=+0.182018013 container attach a93a58233af7c3b86409bc070ffce6d81136d33ec412016a6a60e7f60736416c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kirch, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:33:12 np0005464891 zealous_kirch[242042]: 167 167
Oct  1 12:33:12 np0005464891 systemd[1]: libpod-a93a58233af7c3b86409bc070ffce6d81136d33ec412016a6a60e7f60736416c.scope: Deactivated successfully.
Oct  1 12:33:12 np0005464891 podman[242002]: 2025-10-01 16:33:12.038003405 +0000 UTC m=+0.185557222 container died a93a58233af7c3b86409bc070ffce6d81136d33ec412016a6a60e7f60736416c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:33:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:33:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:33:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:33:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:33:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:33:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:33:12 np0005464891 systemd[1]: var-lib-containers-storage-overlay-93b7cc7747c00b31b4acc68f4022214d9bd50e565a37bc9b9031df119af24747-merged.mount: Deactivated successfully.
Oct  1 12:33:12 np0005464891 podman[242002]: 2025-10-01 16:33:12.088840028 +0000 UTC m=+0.236393805 container remove a93a58233af7c3b86409bc070ffce6d81136d33ec412016a6a60e7f60736416c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kirch, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 12:33:12 np0005464891 systemd[1]: libpod-conmon-a93a58233af7c3b86409bc070ffce6d81136d33ec412016a6a60e7f60736416c.scope: Deactivated successfully.
Oct  1 12:33:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:33:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:33:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:33:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:33:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:33:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:33:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:33:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:33:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:33:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:33:12 np0005464891 podman[242119]: 2025-10-01 16:33:12.322059942 +0000 UTC m=+0.072753917 container create d79aa3d0963a71e63adb7c59def370bb29eab05cc89517ba7241b3ca66033c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:33:12 np0005464891 systemd[1]: Started libpod-conmon-d79aa3d0963a71e63adb7c59def370bb29eab05cc89517ba7241b3ca66033c50.scope.
Oct  1 12:33:12 np0005464891 podman[242119]: 2025-10-01 16:33:12.291594989 +0000 UTC m=+0.042288974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:33:12 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:33:12 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e524a72902bc78b4a3b4400ce706ef364c9979bb2ffc3ccc33d6d5241ce9ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:33:12 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e524a72902bc78b4a3b4400ce706ef364c9979bb2ffc3ccc33d6d5241ce9ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:33:12 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e524a72902bc78b4a3b4400ce706ef364c9979bb2ffc3ccc33d6d5241ce9ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:33:12 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e524a72902bc78b4a3b4400ce706ef364c9979bb2ffc3ccc33d6d5241ce9ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:33:12 np0005464891 podman[242119]: 2025-10-01 16:33:12.410503695 +0000 UTC m=+0.161197670 container init d79aa3d0963a71e63adb7c59def370bb29eab05cc89517ba7241b3ca66033c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  1 12:33:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:12 np0005464891 podman[242119]: 2025-10-01 16:33:12.426796691 +0000 UTC m=+0.177490636 container start d79aa3d0963a71e63adb7c59def370bb29eab05cc89517ba7241b3ca66033c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:33:12 np0005464891 podman[242119]: 2025-10-01 16:33:12.42997204 +0000 UTC m=+0.180666015 container attach d79aa3d0963a71e63adb7c59def370bb29eab05cc89517ba7241b3ca66033c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mccarthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 12:33:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:33:12.429 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:33:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:33:12.431 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:33:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:33:12.431 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:33:12 np0005464891 python3.9[242215]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct  1 12:33:12 np0005464891 kernel: Key type psk registered
Oct  1 12:33:13 np0005464891 peaceful_mccarthy[242182]: {
Oct  1 12:33:13 np0005464891 peaceful_mccarthy[242182]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:33:13 np0005464891 peaceful_mccarthy[242182]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:33:13 np0005464891 peaceful_mccarthy[242182]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:33:13 np0005464891 peaceful_mccarthy[242182]:        "osd_id": 2,
Oct  1 12:33:13 np0005464891 peaceful_mccarthy[242182]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:33:13 np0005464891 peaceful_mccarthy[242182]:        "type": "bluestore"
Oct  1 12:33:13 np0005464891 peaceful_mccarthy[242182]:    },
Oct  1 12:33:13 np0005464891 peaceful_mccarthy[242182]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:33:13 np0005464891 peaceful_mccarthy[242182]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:33:13 np0005464891 peaceful_mccarthy[242182]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:33:13 np0005464891 peaceful_mccarthy[242182]:        "osd_id": 0,
Oct  1 12:33:13 np0005464891 peaceful_mccarthy[242182]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:33:13 np0005464891 peaceful_mccarthy[242182]:        "type": "bluestore"
Oct  1 12:33:13 np0005464891 peaceful_mccarthy[242182]:    },
Oct  1 12:33:13 np0005464891 peaceful_mccarthy[242182]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:33:13 np0005464891 peaceful_mccarthy[242182]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:33:13 np0005464891 peaceful_mccarthy[242182]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:33:13 np0005464891 peaceful_mccarthy[242182]:        "osd_id": 1,
Oct  1 12:33:13 np0005464891 peaceful_mccarthy[242182]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:33:13 np0005464891 peaceful_mccarthy[242182]:        "type": "bluestore"
Oct  1 12:33:13 np0005464891 peaceful_mccarthy[242182]:    }
Oct  1 12:33:13 np0005464891 peaceful_mccarthy[242182]: }
Oct  1 12:33:13 np0005464891 systemd[1]: libpod-d79aa3d0963a71e63adb7c59def370bb29eab05cc89517ba7241b3ca66033c50.scope: Deactivated successfully.
Oct  1 12:33:13 np0005464891 podman[242119]: 2025-10-01 16:33:13.485784277 +0000 UTC m=+1.236478262 container died d79aa3d0963a71e63adb7c59def370bb29eab05cc89517ba7241b3ca66033c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  1 12:33:13 np0005464891 systemd[1]: libpod-d79aa3d0963a71e63adb7c59def370bb29eab05cc89517ba7241b3ca66033c50.scope: Consumed 1.056s CPU time.
Oct  1 12:33:13 np0005464891 systemd[1]: var-lib-containers-storage-overlay-e7e524a72902bc78b4a3b4400ce706ef364c9979bb2ffc3ccc33d6d5241ce9ff-merged.mount: Deactivated successfully.
Oct  1 12:33:13 np0005464891 podman[242119]: 2025-10-01 16:33:13.552549205 +0000 UTC m=+1.303243160 container remove d79aa3d0963a71e63adb7c59def370bb29eab05cc89517ba7241b3ca66033c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 12:33:13 np0005464891 python3.9[242398]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:33:13 np0005464891 systemd[1]: libpod-conmon-d79aa3d0963a71e63adb7c59def370bb29eab05cc89517ba7241b3ca66033c50.scope: Deactivated successfully.
Oct  1 12:33:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:33:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:33:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:33:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:33:13 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev b58adc18-beca-4069-932b-534b0c42b64b does not exist
Oct  1 12:33:13 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev a0743b5f-a921-49ea-b0eb-467e46add5c7 does not exist
Oct  1 12:33:13 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:33:13 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:33:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:33:14 np0005464891 python3.9[242589]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759336392.9766006-918-51564863729012/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:33:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:14 np0005464891 python3.9[242741]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:33:15 np0005464891 python3.9[242893]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 12:33:15 np0005464891 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  1 12:33:15 np0005464891 systemd[1]: Stopped Load Kernel Modules.
Oct  1 12:33:15 np0005464891 systemd[1]: Stopping Load Kernel Modules...
Oct  1 12:33:16 np0005464891 systemd[1]: Starting Load Kernel Modules...
Oct  1 12:33:16 np0005464891 systemd[1]: Finished Load Kernel Modules.
Oct  1 12:33:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:17 np0005464891 python3.9[243049]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 12:33:17 np0005464891 podman[243105]: 2025-10-01 16:33:17.95197711 +0000 UTC m=+0.124538436 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  1 12:33:18 np0005464891 python3.9[243149]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 12:33:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:33:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:33:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:33:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:33:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:33:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:33:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:33:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:33:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:33:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:33:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:33:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:33:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:33:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:33:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:33:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:33:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:33:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:33:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:33:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:33:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:33:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:33:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:33:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:33:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:33:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:24 np0005464891 systemd[1]: Reloading.
Oct  1 12:33:24 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:33:24 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:33:24 np0005464891 systemd[1]: Reloading.
Oct  1 12:33:24 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:33:24 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:33:25 np0005464891 systemd-logind[801]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  1 12:33:25 np0005464891 systemd-logind[801]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  1 12:33:25 np0005464891 lvm[243265]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  1 12:33:25 np0005464891 lvm[243265]: VG ceph_vg2 finished
Oct  1 12:33:25 np0005464891 lvm[243266]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  1 12:33:25 np0005464891 lvm[243266]: VG ceph_vg0 finished
Oct  1 12:33:25 np0005464891 lvm[243268]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  1 12:33:25 np0005464891 lvm[243268]: VG ceph_vg1 finished
Oct  1 12:33:25 np0005464891 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  1 12:33:25 np0005464891 systemd[1]: Starting man-db-cache-update.service...
Oct  1 12:33:25 np0005464891 systemd[1]: Reloading.
Oct  1 12:33:25 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:33:25 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:33:25 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 12:33:25 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3268 writes, 14K keys, 3268 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3268 writes, 3268 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1268 writes, 5522 keys, 1268 commit groups, 1.0 writes per commit group, ingest: 8.43 MB, 0.01 MB/s#012Interval WAL: 1268 writes, 1268 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     29.0      0.49              0.04         6    0.081       0      0       0.0       0.0#012  L6      1/0    7.27 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.4     31.4     26.2      1.32              0.14         5    0.263     19K   2177       0.0       0.0#012 Sum      1/0    7.27 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.4     22.9     26.9      1.80              0.19        11    0.164     19K   2177       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6     16.2     16.6      1.61              0.10         6    0.269     12K   1448       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     31.4     26.2      1.32              0.14         5    0.263     19K   2177       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     29.1      0.48              0.04         5    0.096       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.6      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.014, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.05 GB write, 0.04 MB/s write, 0.04 GB read, 0.03 MB/s read, 1.8 seconds#012Interval compaction: 0.03 GB write, 0.04 MB/s write, 0.03 GB read, 0.04 MB/s read, 1.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55bddc5951f0#2 capacity: 308.00 MB usage: 1.42 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(84,1.23 MB,0.399275%) FilterBlock(12,63.48 KB,0.0201287%) IndexBlock(12,130.39 KB,0.0413424%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  1 12:33:25 np0005464891 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  1 12:33:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:28 np0005464891 python3.9[244604]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:33:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:33:29 np0005464891 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  1 12:33:29 np0005464891 systemd[1]: Finished man-db-cache-update.service.
Oct  1 12:33:29 np0005464891 systemd[1]: man-db-cache-update.service: Consumed 1.802s CPU time.
Oct  1 12:33:29 np0005464891 systemd[1]: run-rd2f5df9649714329a0bb3ba68ac3e5ea.service: Deactivated successfully.
Oct  1 12:33:29 np0005464891 python3.9[244757]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 12:33:30 np0005464891 python3.9[244914]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:33:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:31 np0005464891 python3.9[245066]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  1 12:33:31 np0005464891 systemd[1]: Reloading.
Oct  1 12:33:31 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:33:31 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:33:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:32 np0005464891 python3.9[245251]: ansible-ansible.builtin.service_facts Invoked
Oct  1 12:33:32 np0005464891 network[245268]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  1 12:33:32 np0005464891 network[245269]: 'network-scripts' will be removed from distribution in near future.
Oct  1 12:33:32 np0005464891 network[245270]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:33:33.897696) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336413897770, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1352, "num_deletes": 506, "total_data_size": 1656284, "memory_usage": 1694352, "flush_reason": "Manual Compaction"}
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336413913729, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1629958, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13563, "largest_seqno": 14914, "table_properties": {"data_size": 1623971, "index_size": 2806, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 14897, "raw_average_key_size": 18, "raw_value_size": 1610102, "raw_average_value_size": 1946, "num_data_blocks": 129, "num_entries": 827, "num_filter_entries": 827, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336304, "oldest_key_time": 1759336304, "file_creation_time": 1759336413, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 16097 microseconds, and 5893 cpu microseconds.
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:33:33.913801) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1629958 bytes OK
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:33:33.913827) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:33:33.915563) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:33:33.915590) EVENT_LOG_v1 {"time_micros": 1759336413915581, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:33:33.915615) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1649170, prev total WAL file size 1649170, number of live WAL files 2.
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:33:33.917496) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1591KB)], [32(7439KB)]
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336413917551, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9248329, "oldest_snapshot_seqno": -1}
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3827 keys, 7261027 bytes, temperature: kUnknown
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336413970000, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7261027, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7233566, "index_size": 16791, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9605, "raw_key_size": 93860, "raw_average_key_size": 24, "raw_value_size": 7162368, "raw_average_value_size": 1871, "num_data_blocks": 710, "num_entries": 3827, "num_filter_entries": 3827, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759336413, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:33:33.970287) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7261027 bytes
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:33:33.971716) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.0 rd, 138.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.3 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(10.1) write-amplify(4.5) OK, records in: 4852, records dropped: 1025 output_compression: NoCompression
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:33:33.971740) EVENT_LOG_v1 {"time_micros": 1759336413971729, "job": 14, "event": "compaction_finished", "compaction_time_micros": 52551, "compaction_time_cpu_micros": 19703, "output_level": 6, "num_output_files": 1, "total_output_size": 7261027, "num_input_records": 4852, "num_output_records": 3827, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336413972211, "job": 14, "event": "table_file_deletion", "file_number": 34}
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336413973781, "job": 14, "event": "table_file_deletion", "file_number": 32}
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:33:33.916420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:33:33.973817) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:33:33.973821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:33:33.973824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:33:33.973825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:33:33 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:33:33.973828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:33:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:35 np0005464891 podman[245340]: 2025-10-01 16:33:35.601604994 +0000 UTC m=+0.135702949 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  1 12:33:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:33:40 np0005464891 python3.9[245574]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:33:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:40 np0005464891 podman[245699]: 2025-10-01 16:33:40.538321821 +0000 UTC m=+0.068737384 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:33:40 np0005464891 python3.9[245746]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:33:41 np0005464891 python3.9[245899]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:33:41 np0005464891 podman[245974]: 2025-10-01 16:33:41.940533551 +0000 UTC m=+0.058149659 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  1 12:33:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:33:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:33:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:33:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:33:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:33:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:33:42 np0005464891 python3.9[246072]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:33:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:43 np0005464891 python3.9[246225]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:33:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:33:44 np0005464891 python3.9[246378]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:33:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:45 np0005464891 python3.9[246531]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:33:46 np0005464891 python3.9[246684]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:33:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:47 np0005464891 python3.9[246837]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:33:47 np0005464891 python3.9[246989]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:33:48 np0005464891 podman[247113]: 2025-10-01 16:33:48.320539957 +0000 UTC m=+0.072099499 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  1 12:33:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:48 np0005464891 python3.9[247160]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:33:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:33:49 np0005464891 python3.9[247313]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:33:50 np0005464891 python3.9[247465]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:33:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:50 np0005464891 python3.9[247617]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:33:51 np0005464891 python3.9[247769]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:33:52 np0005464891 python3.9[247921]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:33:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:52 np0005464891 python3.9[248073]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:33:53 np0005464891 python3.9[248225]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:33:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:33:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:54 np0005464891 python3.9[248377]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:33:55 np0005464891 python3.9[248529]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:33:55 np0005464891 python3.9[248681]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:33:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:56 np0005464891 python3.9[248833]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:33:57 np0005464891 python3.9[248985]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:33:58 np0005464891 python3.9[249137]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:33:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:33:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:33:58 np0005464891 python3.9[249289]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:33:59 np0005464891 python3.9[249441]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  1 12:34:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:00 np0005464891 python3.9[249593]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  1 12:34:00 np0005464891 systemd[1]: Reloading.
Oct  1 12:34:00 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:34:00 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:34:01 np0005464891 python3.9[249780]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:34:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:02 np0005464891 python3.9[249933]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:34:03 np0005464891 python3.9[250086]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:34:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:34:04 np0005464891 python3.9[250239]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:34:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:04 np0005464891 python3.9[250392]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:34:05 np0005464891 python3.9[250545]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:34:06 np0005464891 podman[250646]: 2025-10-01 16:34:06.036297609 +0000 UTC m=+0.134792152 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  1 12:34:06 np0005464891 python3.9[250725]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:34:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:08 np0005464891 python3.9[250878]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 12:34:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:34:09 np0005464891 python3.9[251031]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:34:10 np0005464891 python3.9[251183]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:34:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:10 np0005464891 podman[251307]: 2025-10-01 16:34:10.77012445 +0000 UTC m=+0.079696802 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:34:10 np0005464891 python3.9[251356]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:34:11 np0005464891 python3.9[251509]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:34:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:34:11
Oct  1 12:34:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:34:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:34:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'images', '.mgr', 'backups']
Oct  1 12:34:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:34:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:34:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:34:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:34:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:34:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:34:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:34:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:34:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:34:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:34:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:34:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:34:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:34:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:34:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:34:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:34:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:34:12 np0005464891 podman[251633]: 2025-10-01 16:34:12.249266452 +0000 UTC m=+0.056666127 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  1 12:34:12 np0005464891 python3.9[251681]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:34:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:34:12.431 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:34:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:34:12.431 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:34:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:34:12.432 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:34:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:13 np0005464891 python3.9[251833]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:34:13 np0005464891 python3.9[251985]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:34:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:34:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:34:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:34:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:34:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:34:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:34:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:34:14 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev f7f60670-ed9d-42ea-b552-9d1d6afcf3c8 does not exist
Oct  1 12:34:14 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 951dcd2f-f80e-472a-99a5-21ddd122f268 does not exist
Oct  1 12:34:14 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 5436c401-136c-4322-97c5-a484634d05b2 does not exist
Oct  1 12:34:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:34:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:34:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:34:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:34:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:34:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:34:14 np0005464891 python3.9[252255]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:34:14 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:34:14 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:34:14 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:34:15 np0005464891 python3.9[252552]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:34:15 np0005464891 podman[252561]: 2025-10-01 16:34:15.248931954 +0000 UTC m=+0.039195568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:34:15 np0005464891 podman[252561]: 2025-10-01 16:34:15.429684072 +0000 UTC m=+0.219947696 container create 6a2751c09f6903a353df752019205c7acdd3efbc520c00844aaa836e1111c3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_noether, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 12:34:15 np0005464891 systemd[1]: Started libpod-conmon-6a2751c09f6903a353df752019205c7acdd3efbc520c00844aaa836e1111c3e9.scope.
Oct  1 12:34:15 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:34:15 np0005464891 podman[252561]: 2025-10-01 16:34:15.704848972 +0000 UTC m=+0.495112586 container init 6a2751c09f6903a353df752019205c7acdd3efbc520c00844aaa836e1111c3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 12:34:15 np0005464891 podman[252561]: 2025-10-01 16:34:15.712901257 +0000 UTC m=+0.503164851 container start 6a2751c09f6903a353df752019205c7acdd3efbc520c00844aaa836e1111c3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_noether, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  1 12:34:15 np0005464891 eager_noether[252654]: 167 167
Oct  1 12:34:15 np0005464891 systemd[1]: libpod-6a2751c09f6903a353df752019205c7acdd3efbc520c00844aaa836e1111c3e9.scope: Deactivated successfully.
Oct  1 12:34:15 np0005464891 podman[252561]: 2025-10-01 16:34:15.823809721 +0000 UTC m=+0.614073315 container attach 6a2751c09f6903a353df752019205c7acdd3efbc520c00844aaa836e1111c3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_noether, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:34:15 np0005464891 podman[252561]: 2025-10-01 16:34:15.824154491 +0000 UTC m=+0.614418095 container died 6a2751c09f6903a353df752019205c7acdd3efbc520c00844aaa836e1111c3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_noether, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:34:15 np0005464891 systemd[1]: var-lib-containers-storage-overlay-0a1178742a09bb0a2069129720681e7bd3c3751872a4ad70972ed1853b4e6ff6-merged.mount: Deactivated successfully.
Oct  1 12:34:15 np0005464891 podman[252561]: 2025-10-01 16:34:15.894202091 +0000 UTC m=+0.684465695 container remove 6a2751c09f6903a353df752019205c7acdd3efbc520c00844aaa836e1111c3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_noether, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:34:15 np0005464891 systemd[1]: libpod-conmon-6a2751c09f6903a353df752019205c7acdd3efbc520c00844aaa836e1111c3e9.scope: Deactivated successfully.
Oct  1 12:34:16 np0005464891 python3.9[252744]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:34:16 np0005464891 podman[252755]: 2025-10-01 16:34:16.100748581 +0000 UTC m=+0.050070342 container create c426cc846d45ba06299e80ca65221f22ed87a610aa616c6caa63a31e79f5a243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swanson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:34:16 np0005464891 systemd[1]: Started libpod-conmon-c426cc846d45ba06299e80ca65221f22ed87a610aa616c6caa63a31e79f5a243.scope.
Oct  1 12:34:16 np0005464891 podman[252755]: 2025-10-01 16:34:16.078747685 +0000 UTC m=+0.028069546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:34:16 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:34:16 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d7310681f67e9e6196b91b53bb867da98ddb7eb41560aa930da0d63b2e32e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:34:16 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d7310681f67e9e6196b91b53bb867da98ddb7eb41560aa930da0d63b2e32e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:34:16 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d7310681f67e9e6196b91b53bb867da98ddb7eb41560aa930da0d63b2e32e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:34:16 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d7310681f67e9e6196b91b53bb867da98ddb7eb41560aa930da0d63b2e32e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:34:16 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d7310681f67e9e6196b91b53bb867da98ddb7eb41560aa930da0d63b2e32e3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:34:16 np0005464891 podman[252755]: 2025-10-01 16:34:16.195936065 +0000 UTC m=+0.145257836 container init c426cc846d45ba06299e80ca65221f22ed87a610aa616c6caa63a31e79f5a243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swanson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:34:16 np0005464891 podman[252755]: 2025-10-01 16:34:16.204304399 +0000 UTC m=+0.153626180 container start c426cc846d45ba06299e80ca65221f22ed87a610aa616c6caa63a31e79f5a243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  1 12:34:16 np0005464891 podman[252755]: 2025-10-01 16:34:16.208227078 +0000 UTC m=+0.157548859 container attach c426cc846d45ba06299e80ca65221f22ed87a610aa616c6caa63a31e79f5a243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swanson, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:34:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:16 np0005464891 python3.9[252927]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:34:17 np0005464891 unruffled_swanson[252792]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:34:17 np0005464891 unruffled_swanson[252792]: --> relative data size: 1.0
Oct  1 12:34:17 np0005464891 unruffled_swanson[252792]: --> All data devices are unavailable
Oct  1 12:34:17 np0005464891 systemd[1]: libpod-c426cc846d45ba06299e80ca65221f22ed87a610aa616c6caa63a31e79f5a243.scope: Deactivated successfully.
Oct  1 12:34:17 np0005464891 systemd[1]: libpod-c426cc846d45ba06299e80ca65221f22ed87a610aa616c6caa63a31e79f5a243.scope: Consumed 1.073s CPU time.
Oct  1 12:34:17 np0005464891 podman[252755]: 2025-10-01 16:34:17.350419102 +0000 UTC m=+1.299740903 container died c426cc846d45ba06299e80ca65221f22ed87a610aa616c6caa63a31e79f5a243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 12:34:17 np0005464891 systemd[1]: var-lib-containers-storage-overlay-25d7310681f67e9e6196b91b53bb867da98ddb7eb41560aa930da0d63b2e32e3-merged.mount: Deactivated successfully.
Oct  1 12:34:17 np0005464891 python3.9[253097]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:34:17 np0005464891 podman[252755]: 2025-10-01 16:34:17.424446133 +0000 UTC m=+1.373767934 container remove c426cc846d45ba06299e80ca65221f22ed87a610aa616c6caa63a31e79f5a243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 12:34:17 np0005464891 systemd[1]: libpod-conmon-c426cc846d45ba06299e80ca65221f22ed87a610aa616c6caa63a31e79f5a243.scope: Deactivated successfully.
Oct  1 12:34:18 np0005464891 podman[253280]: 2025-10-01 16:34:18.223768601 +0000 UTC m=+0.064232188 container create e6f33cadcbe386a1273d3821ac9b2aacce165fae1c6322ca2df534356cd89b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:34:18 np0005464891 systemd[1]: Started libpod-conmon-e6f33cadcbe386a1273d3821ac9b2aacce165fae1c6322ca2df534356cd89b6b.scope.
Oct  1 12:34:18 np0005464891 podman[253280]: 2025-10-01 16:34:18.200717896 +0000 UTC m=+0.041181573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:34:18 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:34:18 np0005464891 podman[253280]: 2025-10-01 16:34:18.319914692 +0000 UTC m=+0.160378349 container init e6f33cadcbe386a1273d3821ac9b2aacce165fae1c6322ca2df534356cd89b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_williams, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:34:18 np0005464891 podman[253280]: 2025-10-01 16:34:18.330713244 +0000 UTC m=+0.171176841 container start e6f33cadcbe386a1273d3821ac9b2aacce165fae1c6322ca2df534356cd89b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_williams, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  1 12:34:18 np0005464891 podman[253280]: 2025-10-01 16:34:18.334757867 +0000 UTC m=+0.175221464 container attach e6f33cadcbe386a1273d3821ac9b2aacce165fae1c6322ca2df534356cd89b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:34:18 np0005464891 gallant_williams[253295]: 167 167
Oct  1 12:34:18 np0005464891 systemd[1]: libpod-e6f33cadcbe386a1273d3821ac9b2aacce165fae1c6322ca2df534356cd89b6b.scope: Deactivated successfully.
Oct  1 12:34:18 np0005464891 podman[253280]: 2025-10-01 16:34:18.338847472 +0000 UTC m=+0.179311089 container died e6f33cadcbe386a1273d3821ac9b2aacce165fae1c6322ca2df534356cd89b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:34:18 np0005464891 systemd[1]: var-lib-containers-storage-overlay-429c2d87dd2d6b0acdcac6a2c32aa1d913e3061f6b0d1dcca558d89f8bf372b1-merged.mount: Deactivated successfully.
Oct  1 12:34:18 np0005464891 podman[253280]: 2025-10-01 16:34:18.405123356 +0000 UTC m=+0.245586943 container remove e6f33cadcbe386a1273d3821ac9b2aacce165fae1c6322ca2df534356cd89b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_williams, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 12:34:18 np0005464891 systemd[1]: libpod-conmon-e6f33cadcbe386a1273d3821ac9b2aacce165fae1c6322ca2df534356cd89b6b.scope: Deactivated successfully.
Oct  1 12:34:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:18 np0005464891 podman[253301]: 2025-10-01 16:34:18.464771175 +0000 UTC m=+0.085134404 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  1 12:34:18 np0005464891 podman[253341]: 2025-10-01 16:34:18.637247851 +0000 UTC m=+0.067397447 container create c42f69af1ecf4e19df1062d14dd88083d7907b2b45a9c54c01eae0b7ccb8d863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_darwin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  1 12:34:18 np0005464891 systemd[1]: Started libpod-conmon-c42f69af1ecf4e19df1062d14dd88083d7907b2b45a9c54c01eae0b7ccb8d863.scope.
Oct  1 12:34:18 np0005464891 podman[253341]: 2025-10-01 16:34:18.614185986 +0000 UTC m=+0.044335592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:34:18 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:34:18 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9806f35cc63719352ad3763a404b5fb0c8868e9f9c16fdfa6b3306298ee5bc05/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:34:18 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9806f35cc63719352ad3763a404b5fb0c8868e9f9c16fdfa6b3306298ee5bc05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:34:18 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9806f35cc63719352ad3763a404b5fb0c8868e9f9c16fdfa6b3306298ee5bc05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:34:18 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9806f35cc63719352ad3763a404b5fb0c8868e9f9c16fdfa6b3306298ee5bc05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:34:18 np0005464891 podman[253341]: 2025-10-01 16:34:18.752120106 +0000 UTC m=+0.182269712 container init c42f69af1ecf4e19df1062d14dd88083d7907b2b45a9c54c01eae0b7ccb8d863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:34:18 np0005464891 podman[253341]: 2025-10-01 16:34:18.758851075 +0000 UTC m=+0.189000641 container start c42f69af1ecf4e19df1062d14dd88083d7907b2b45a9c54c01eae0b7ccb8d863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_darwin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:34:18 np0005464891 podman[253341]: 2025-10-01 16:34:18.767732163 +0000 UTC m=+0.197881759 container attach c42f69af1ecf4e19df1062d14dd88083d7907b2b45a9c54c01eae0b7ccb8d863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_darwin, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 12:34:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]: {
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:    "0": [
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:        {
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "devices": [
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "/dev/loop3"
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            ],
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "lv_name": "ceph_lv0",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "lv_size": "21470642176",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "name": "ceph_lv0",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "tags": {
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.cluster_name": "ceph",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.crush_device_class": "",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.encrypted": "0",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.osd_id": "0",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.type": "block",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.vdo": "0"
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            },
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "type": "block",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "vg_name": "ceph_vg0"
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:        }
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:    ],
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:    "1": [
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:        {
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "devices": [
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "/dev/loop4"
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            ],
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "lv_name": "ceph_lv1",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "lv_size": "21470642176",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "name": "ceph_lv1",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "tags": {
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.cluster_name": "ceph",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.crush_device_class": "",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.encrypted": "0",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.osd_id": "1",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.type": "block",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.vdo": "0"
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            },
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "type": "block",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "vg_name": "ceph_vg1"
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:        }
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:    ],
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:    "2": [
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:        {
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "devices": [
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "/dev/loop5"
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            ],
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "lv_name": "ceph_lv2",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "lv_size": "21470642176",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "name": "ceph_lv2",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "tags": {
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.cluster_name": "ceph",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.crush_device_class": "",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.encrypted": "0",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.osd_id": "2",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.type": "block",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:                "ceph.vdo": "0"
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            },
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "type": "block",
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:            "vg_name": "ceph_vg2"
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:        }
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]:    ]
Oct  1 12:34:19 np0005464891 pensive_darwin[253358]: }
Oct  1 12:34:19 np0005464891 systemd[1]: libpod-c42f69af1ecf4e19df1062d14dd88083d7907b2b45a9c54c01eae0b7ccb8d863.scope: Deactivated successfully.
Oct  1 12:34:19 np0005464891 podman[253341]: 2025-10-01 16:34:19.532066552 +0000 UTC m=+0.962216148 container died c42f69af1ecf4e19df1062d14dd88083d7907b2b45a9c54c01eae0b7ccb8d863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_darwin, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  1 12:34:19 np0005464891 systemd[1]: var-lib-containers-storage-overlay-9806f35cc63719352ad3763a404b5fb0c8868e9f9c16fdfa6b3306298ee5bc05-merged.mount: Deactivated successfully.
Oct  1 12:34:19 np0005464891 podman[253341]: 2025-10-01 16:34:19.596936918 +0000 UTC m=+1.027086464 container remove c42f69af1ecf4e19df1062d14dd88083d7907b2b45a9c54c01eae0b7ccb8d863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 12:34:19 np0005464891 systemd[1]: libpod-conmon-c42f69af1ecf4e19df1062d14dd88083d7907b2b45a9c54c01eae0b7ccb8d863.scope: Deactivated successfully.
Oct  1 12:34:20 np0005464891 podman[253522]: 2025-10-01 16:34:20.378430797 +0000 UTC m=+0.068559960 container create 99f9110f215efddfc497ee2f83bb3211996295645c03f563e7abd0fe4d62a714 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:34:20 np0005464891 systemd[1]: Started libpod-conmon-99f9110f215efddfc497ee2f83bb3211996295645c03f563e7abd0fe4d62a714.scope.
Oct  1 12:34:20 np0005464891 podman[253522]: 2025-10-01 16:34:20.34892069 +0000 UTC m=+0.039049924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:34:20 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:34:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:20 np0005464891 podman[253522]: 2025-10-01 16:34:20.462720165 +0000 UTC m=+0.152849328 container init 99f9110f215efddfc497ee2f83bb3211996295645c03f563e7abd0fe4d62a714 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_stonebraker, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 12:34:20 np0005464891 podman[253522]: 2025-10-01 16:34:20.471370027 +0000 UTC m=+0.161499160 container start 99f9110f215efddfc497ee2f83bb3211996295645c03f563e7abd0fe4d62a714 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:34:20 np0005464891 podman[253522]: 2025-10-01 16:34:20.474627368 +0000 UTC m=+0.164756511 container attach 99f9110f215efddfc497ee2f83bb3211996295645c03f563e7abd0fe4d62a714 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  1 12:34:20 np0005464891 crazy_stonebraker[253539]: 167 167
Oct  1 12:34:20 np0005464891 systemd[1]: libpod-99f9110f215efddfc497ee2f83bb3211996295645c03f563e7abd0fe4d62a714.scope: Deactivated successfully.
Oct  1 12:34:20 np0005464891 podman[253522]: 2025-10-01 16:34:20.47790629 +0000 UTC m=+0.168035443 container died 99f9110f215efddfc497ee2f83bb3211996295645c03f563e7abd0fe4d62a714 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_stonebraker, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  1 12:34:20 np0005464891 systemd[1]: var-lib-containers-storage-overlay-3eddb453470d81406d5480e9b5ffd58ccd5435b5a8ed47ce31ee05355370da0f-merged.mount: Deactivated successfully.
Oct  1 12:34:20 np0005464891 podman[253522]: 2025-10-01 16:34:20.517796746 +0000 UTC m=+0.207925879 container remove 99f9110f215efddfc497ee2f83bb3211996295645c03f563e7abd0fe4d62a714 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:34:20 np0005464891 systemd[1]: libpod-conmon-99f9110f215efddfc497ee2f83bb3211996295645c03f563e7abd0fe4d62a714.scope: Deactivated successfully.
Oct  1 12:34:20 np0005464891 podman[253563]: 2025-10-01 16:34:20.740888079 +0000 UTC m=+0.069772593 container create 5490d5a1cda2edf2821f6ca8f72dbf1e6ec35b709dc01624943b1cebdc1c0618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:34:20 np0005464891 systemd[1]: Started libpod-conmon-5490d5a1cda2edf2821f6ca8f72dbf1e6ec35b709dc01624943b1cebdc1c0618.scope.
Oct  1 12:34:20 np0005464891 podman[253563]: 2025-10-01 16:34:20.713626756 +0000 UTC m=+0.042511310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:34:20 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:34:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd0eaa3c0034ead5c33eb831d5c639be08fc973ba6befc7be81536734fd6a2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:34:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd0eaa3c0034ead5c33eb831d5c639be08fc973ba6befc7be81536734fd6a2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:34:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd0eaa3c0034ead5c33eb831d5c639be08fc973ba6befc7be81536734fd6a2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:34:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd0eaa3c0034ead5c33eb831d5c639be08fc973ba6befc7be81536734fd6a2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:34:20 np0005464891 podman[253563]: 2025-10-01 16:34:20.850675041 +0000 UTC m=+0.179559615 container init 5490d5a1cda2edf2821f6ca8f72dbf1e6ec35b709dc01624943b1cebdc1c0618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  1 12:34:20 np0005464891 podman[253563]: 2025-10-01 16:34:20.867964305 +0000 UTC m=+0.196848809 container start 5490d5a1cda2edf2821f6ca8f72dbf1e6ec35b709dc01624943b1cebdc1c0618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_kare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:34:20 np0005464891 podman[253563]: 2025-10-01 16:34:20.872109721 +0000 UTC m=+0.200994225 container attach 5490d5a1cda2edf2821f6ca8f72dbf1e6ec35b709dc01624943b1cebdc1c0618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  1 12:34:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:34:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:34:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:34:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:34:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:34:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:34:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:34:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:34:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:34:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:34:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:34:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:34:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:34:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:34:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:34:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:34:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:34:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:34:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:34:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:34:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:34:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:34:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:34:21 np0005464891 angry_kare[253579]: {
Oct  1 12:34:21 np0005464891 angry_kare[253579]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:34:21 np0005464891 angry_kare[253579]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:34:21 np0005464891 angry_kare[253579]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:34:21 np0005464891 angry_kare[253579]:        "osd_id": 2,
Oct  1 12:34:21 np0005464891 angry_kare[253579]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:34:21 np0005464891 angry_kare[253579]:        "type": "bluestore"
Oct  1 12:34:21 np0005464891 angry_kare[253579]:    },
Oct  1 12:34:21 np0005464891 angry_kare[253579]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:34:21 np0005464891 angry_kare[253579]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:34:21 np0005464891 angry_kare[253579]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:34:21 np0005464891 angry_kare[253579]:        "osd_id": 0,
Oct  1 12:34:21 np0005464891 angry_kare[253579]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:34:21 np0005464891 angry_kare[253579]:        "type": "bluestore"
Oct  1 12:34:21 np0005464891 angry_kare[253579]:    },
Oct  1 12:34:21 np0005464891 angry_kare[253579]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:34:21 np0005464891 angry_kare[253579]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:34:21 np0005464891 angry_kare[253579]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:34:21 np0005464891 angry_kare[253579]:        "osd_id": 1,
Oct  1 12:34:21 np0005464891 angry_kare[253579]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:34:21 np0005464891 angry_kare[253579]:        "type": "bluestore"
Oct  1 12:34:21 np0005464891 angry_kare[253579]:    }
Oct  1 12:34:21 np0005464891 angry_kare[253579]: }
Oct  1 12:34:21 np0005464891 systemd[1]: libpod-5490d5a1cda2edf2821f6ca8f72dbf1e6ec35b709dc01624943b1cebdc1c0618.scope: Deactivated successfully.
Oct  1 12:34:21 np0005464891 systemd[1]: libpod-5490d5a1cda2edf2821f6ca8f72dbf1e6ec35b709dc01624943b1cebdc1c0618.scope: Consumed 1.098s CPU time.
Oct  1 12:34:21 np0005464891 podman[253563]: 2025-10-01 16:34:21.96088668 +0000 UTC m=+1.289771184 container died 5490d5a1cda2edf2821f6ca8f72dbf1e6ec35b709dc01624943b1cebdc1c0618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_kare, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:34:22 np0005464891 systemd[1]: var-lib-containers-storage-overlay-4fd0eaa3c0034ead5c33eb831d5c639be08fc973ba6befc7be81536734fd6a2f-merged.mount: Deactivated successfully.
Oct  1 12:34:22 np0005464891 podman[253563]: 2025-10-01 16:34:22.044632742 +0000 UTC m=+1.373517226 container remove 5490d5a1cda2edf2821f6ca8f72dbf1e6ec35b709dc01624943b1cebdc1c0618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:34:22 np0005464891 systemd[1]: libpod-conmon-5490d5a1cda2edf2821f6ca8f72dbf1e6ec35b709dc01624943b1cebdc1c0618.scope: Deactivated successfully.
Oct  1 12:34:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:34:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:34:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:34:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:34:22 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 01fa27fe-47f4-4f1c-ac6e-1873281a9f82 does not exist
Oct  1 12:34:22 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 4d4764c8-704e-4215-a5d4-a8ab672568c9 does not exist
Oct  1 12:34:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:23 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:34:23 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:34:23 np0005464891 python3.9[253801]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct  1 12:34:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:34:24 np0005464891 python3.9[253954]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  1 12:34:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:25 np0005464891 python3.9[254112]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  1 12:34:26 np0005464891 systemd-logind[801]: New session 53 of user zuul.
Oct  1 12:34:26 np0005464891 systemd[1]: Started Session 53 of User zuul.
Oct  1 12:34:26 np0005464891 systemd[1]: session-53.scope: Deactivated successfully.
Oct  1 12:34:26 np0005464891 systemd-logind[801]: Session 53 logged out. Waiting for processes to exit.
Oct  1 12:34:26 np0005464891 systemd-logind[801]: Removed session 53.
Oct  1 12:34:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:27 np0005464891 python3.9[254298]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:34:27 np0005464891 python3.9[254419]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759336466.5938723-1555-266281727891904/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:34:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:28 np0005464891 python3.9[254569]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:34:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:34:29 np0005464891 python3.9[254645]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:34:29 np0005464891 python3.9[254795]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:34:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:30 np0005464891 python3.9[254916]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759336469.2267463-1555-247827553494004/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:34:31 np0005464891 python3.9[255066]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:34:31 np0005464891 python3.9[255187]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759336470.6762393-1555-34008611271570/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:34:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:32 np0005464891 python3.9[255337]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:34:33 np0005464891 python3.9[255458]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759336472.051693-1555-29277297151285/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:34:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:34:34 np0005464891 python3.9[255610]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:34:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:35 np0005464891 python3.9[255762]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:34:35 np0005464891 python3.9[255914]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:34:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:36 np0005464891 podman[256014]: 2025-10-01 16:34:36.557786865 +0000 UTC m=+0.149220636 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  1 12:34:36 np0005464891 python3.9[256093]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:34:37 np0005464891 python3.9[256216]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1759336476.181858-1648-105189221659509/.source _original_basename=.jaxitboi follow=False checksum=35979b08536ddf8e5775bdd4664dc77a56ce18d8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct  1 12:34:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:38 np0005464891 python3.9[256368]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:34:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:34:39 np0005464891 python3.9[256520]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:34:39 np0005464891 python3.9[256641]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759336478.751477-1674-268804208804393/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=f022386746472553146d29f689b545df70fa8a60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:34:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:40 np0005464891 python3.9[256791]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 12:34:40 np0005464891 podman[256862]: 2025-10-01 16:34:40.972718031 +0000 UTC m=+0.080465653 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  1 12:34:41 np0005464891 python3.9[256931]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759336480.0749886-1689-9272288725920/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 12:34:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:34:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:34:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:34:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:34:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:34:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:34:42 np0005464891 python3.9[257083]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct  1 12:34:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:42 np0005464891 podman[257207]: 2025-10-01 16:34:42.734689277 +0000 UTC m=+0.102120889 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  1 12:34:42 np0005464891 python3.9[257251]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  1 12:34:43 np0005464891 python3[257406]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct  1 12:34:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:34:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:34:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:34:54 np0005464891 podman[257465]: 2025-10-01 16:34:54.364342577 +0000 UTC m=+5.468408127 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  1 12:34:54 np0005464891 podman[257419]: 2025-10-01 16:34:54.42447818 +0000 UTC m=+10.485291078 image pull cb7a9bebda1404fc92f1415580e7da04b5fcfd160582e38b9b99703a41ed1519 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct  1 12:34:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:54 np0005464891 podman[257524]: 2025-10-01 16:34:54.651646267 +0000 UTC m=+0.073047205 container create dc9e95cdcf507d39428c807d13e274fb2c0edf370080d67eb9e8e0a4fffaf676 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=nova_compute_init, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true)
Oct  1 12:34:54 np0005464891 podman[257524]: 2025-10-01 16:34:54.615396903 +0000 UTC m=+0.036797851 image pull cb7a9bebda1404fc92f1415580e7da04b5fcfd160582e38b9b99703a41ed1519 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct  1 12:34:54 np0005464891 python3[257406]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct  1 12:34:55 np0005464891 python3.9[257714]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:34:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:56 np0005464891 python3.9[257868]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct  1 12:34:58 np0005464891 python3.9[258020]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  1 12:34:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:34:59 np0005464891 python3[258172]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct  1 12:34:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:34:59 np0005464891 podman[258209]: 2025-10-01 16:34:59.432600197 +0000 UTC m=+0.072370717 container create 95bfbac6091687614f832a752b410d5cd94a73194023479bab0267ec68c91b43 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, config_id=edpm, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  1 12:34:59 np0005464891 podman[258209]: 2025-10-01 16:34:59.3916446 +0000 UTC m=+0.031415180 image pull cb7a9bebda1404fc92f1415580e7da04b5fcfd160582e38b9b99703a41ed1519 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct  1 12:34:59 np0005464891 python3[258172]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Oct  1 12:35:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:00 np0005464891 python3.9[258399]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:35:01 np0005464891 python3.9[258553]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:35:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:02 np0005464891 python3.9[258704]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759336501.7785602-1781-235508985534279/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 12:35:03 np0005464891 python3.9[258780]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  1 12:35:03 np0005464891 systemd[1]: Reloading.
Oct  1 12:35:03 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:35:03 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:35:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:35:04 np0005464891 python3.9[258891]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 12:35:04 np0005464891 systemd[1]: Reloading.
Oct  1 12:35:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:04 np0005464891 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 12:35:04 np0005464891 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 12:35:04 np0005464891 systemd[1]: Starting nova_compute container...
Oct  1 12:35:04 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:35:04 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82bed872b6e5155e3cbf5a28c694cf93a872670892c9fcc5808dab0c8de6a90/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:04 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82bed872b6e5155e3cbf5a28c694cf93a872670892c9fcc5808dab0c8de6a90/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:04 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82bed872b6e5155e3cbf5a28c694cf93a872670892c9fcc5808dab0c8de6a90/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:04 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82bed872b6e5155e3cbf5a28c694cf93a872670892c9fcc5808dab0c8de6a90/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:04 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82bed872b6e5155e3cbf5a28c694cf93a872670892c9fcc5808dab0c8de6a90/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:04 np0005464891 podman[258932]: 2025-10-01 16:35:04.909251244 +0000 UTC m=+0.145763310 container init 95bfbac6091687614f832a752b410d5cd94a73194023479bab0267ec68c91b43 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:35:04 np0005464891 podman[258932]: 2025-10-01 16:35:04.923107602 +0000 UTC m=+0.159619638 container start 95bfbac6091687614f832a752b410d5cd94a73194023479bab0267ec68c91b43 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20251001)
Oct  1 12:35:04 np0005464891 podman[258932]: nova_compute
Oct  1 12:35:04 np0005464891 nova_compute[258947]: + sudo -E kolla_set_configs
Oct  1 12:35:04 np0005464891 systemd[1]: Started nova_compute container.
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Validating config file
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Copying service configuration files
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Deleting /etc/ceph
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Creating directory /etc/ceph
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Setting permission for /etc/ceph
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Writing out command to execute
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  1 12:35:05 np0005464891 nova_compute[258947]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  1 12:35:05 np0005464891 nova_compute[258947]: ++ cat /run_command
Oct  1 12:35:05 np0005464891 nova_compute[258947]: + CMD=nova-compute
Oct  1 12:35:05 np0005464891 nova_compute[258947]: + ARGS=
Oct  1 12:35:05 np0005464891 nova_compute[258947]: + sudo kolla_copy_cacerts
Oct  1 12:35:05 np0005464891 nova_compute[258947]: + [[ ! -n '' ]]
Oct  1 12:35:05 np0005464891 nova_compute[258947]: + . kolla_extend_start
Oct  1 12:35:05 np0005464891 nova_compute[258947]: + echo 'Running command: '\''nova-compute'\'''
Oct  1 12:35:05 np0005464891 nova_compute[258947]: Running command: 'nova-compute'
Oct  1 12:35:05 np0005464891 nova_compute[258947]: + umask 0022
Oct  1 12:35:05 np0005464891 nova_compute[258947]: + exec nova-compute
Oct  1 12:35:06 np0005464891 python3.9[259108]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:35:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:06 np0005464891 podman[259233]: 2025-10-01 16:35:06.963935492 +0000 UTC m=+0.137028776 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  1 12:35:07 np0005464891 python3.9[259276]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:35:07 np0005464891 nova_compute[258947]: 2025-10-01 16:35:07.481 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  1 12:35:07 np0005464891 nova_compute[258947]: 2025-10-01 16:35:07.481 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  1 12:35:07 np0005464891 nova_compute[258947]: 2025-10-01 16:35:07.481 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  1 12:35:07 np0005464891 nova_compute[258947]: 2025-10-01 16:35:07.481 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct  1 12:35:07 np0005464891 nova_compute[258947]: 2025-10-01 16:35:07.644 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:35:07 np0005464891 nova_compute[258947]: 2025-10-01 16:35:07.677 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:35:07 np0005464891 python3.9[259439]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.263 2 INFO nova.virt.driver [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.394 2 INFO nova.compute.provider_config [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.409 2 DEBUG oslo_concurrency.lockutils [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.409 2 DEBUG oslo_concurrency.lockutils [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.409 2 DEBUG oslo_concurrency.lockutils [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.410 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.410 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.410 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.410 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.410 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.410 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.411 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.411 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.411 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.411 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.411 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.412 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.412 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.412 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.412 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.412 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.412 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.413 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.413 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.413 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.413 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.413 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.414 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.414 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.414 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.414 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.414 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.415 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.415 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.415 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.415 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.415 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.415 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.416 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.416 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.416 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.416 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.416 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.417 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.417 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.417 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.417 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.417 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.418 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.418 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.418 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.418 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.418 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.418 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.419 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.419 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.419 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.419 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.419 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.420 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.420 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.420 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.420 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.420 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.421 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.421 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.421 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.421 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.421 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.421 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.422 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.422 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.422 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.422 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.422 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.423 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.423 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.423 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.423 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.423 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.423 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.424 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.424 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.424 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.424 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.424 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.425 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.425 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.425 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.425 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.425 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.426 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.426 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.426 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.426 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.426 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.426 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.427 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.427 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.427 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.427 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.427 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.428 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.428 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.428 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.428 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.428 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.428 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.429 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.429 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.429 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.429 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.429 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.429 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.430 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.430 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.430 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.430 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.430 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.431 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.431 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.431 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.431 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.431 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.431 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.432 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.432 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.432 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.432 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.432 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.433 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.433 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.433 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.433 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.433 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.433 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.434 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.434 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.434 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.434 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.434 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.435 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.435 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.435 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.435 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.435 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.436 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.436 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.436 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.436 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.436 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.436 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.437 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.437 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.437 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.437 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.437 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.438 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.438 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.438 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.438 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.438 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.439 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.439 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.439 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.439 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.439 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.440 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.440 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.440 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.440 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.440 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.440 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.441 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.441 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.441 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.441 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.441 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.442 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.442 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.442 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.442 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.442 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.442 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.443 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.443 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.443 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.443 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.443 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.444 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.444 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.444 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.444 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.444 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.445 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.445 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.445 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.445 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.445 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.445 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.446 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.446 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.446 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.446 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.446 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.447 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.447 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.447 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.447 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.447 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.447 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.448 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.448 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.448 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.448 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.448 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.449 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.449 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.449 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.449 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.449 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.450 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.450 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.450 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.450 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.450 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.450 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.450 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.451 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.451 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.451 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.451 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.451 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.451 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.451 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.451 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.452 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.452 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.452 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.452 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.452 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.452 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.453 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.453 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.453 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.453 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.453 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.453 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.453 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.454 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.454 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.454 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.454 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.454 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.454 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.454 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.454 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.455 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.455 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.455 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.455 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.455 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.455 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.455 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.455 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.456 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.456 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.456 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.456 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.456 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.456 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.456 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.457 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.457 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.457 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.457 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.457 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.457 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.457 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.457 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.458 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.458 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.458 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.458 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.458 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.458 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.458 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.459 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.459 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.459 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.459 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.459 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.459 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.459 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.460 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.460 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.460 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.460 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.460 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.460 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.460 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.460 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.461 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.461 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.461 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.461 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.461 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.461 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.461 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.462 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.462 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.462 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.462 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.462 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.462 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.462 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.462 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.463 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.463 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.463 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.463 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.463 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.463 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.463 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.464 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.464 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.464 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.464 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.464 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.464 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.464 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.464 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.465 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.465 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.465 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.465 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.465 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.465 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.465 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.466 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.466 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.466 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.466 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.466 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.466 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.466 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.466 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.467 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.467 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.467 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.467 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.467 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.467 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.467 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.468 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.468 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.468 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.468 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.468 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.468 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.469 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.469 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.469 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.469 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.469 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.469 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.469 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.470 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.470 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.470 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.470 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.470 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.470 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.470 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.470 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.471 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.471 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.471 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.471 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.471 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.472 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.472 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.472 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.472 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.472 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.472 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.472 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.472 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.473 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.473 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.473 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.473 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.473 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.473 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.473 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.474 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.474 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.474 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.474 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.474 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.474 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.474 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.475 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.475 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.475 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.475 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.475 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.475 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.475 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.475 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.476 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.476 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.476 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.476 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.476 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.476 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.476 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.477 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.477 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.477 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.477 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.477 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.477 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.477 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.477 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.478 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.478 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.478 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.478 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.478 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.478 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.478 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.479 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.479 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.479 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.479 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.479 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.479 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.479 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.480 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.480 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.480 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.480 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.480 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.480 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.480 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.480 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.481 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.481 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.481 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.481 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.481 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.481 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.481 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.482 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.482 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.482 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.482 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.482 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.482 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.482 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.483 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.483 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.483 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.483 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.483 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.483 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.483 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.484 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.484 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.484 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.484 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.484 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.484 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.484 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.485 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.485 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.485 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.485 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.485 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.485 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.485 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.486 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.486 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.486 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.486 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.486 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.486 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.486 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.487 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.487 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.487 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.487 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.487 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.487 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.487 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.488 2 WARNING oslo_config.cfg [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct  1 12:35:08 np0005464891 nova_compute[258947]: live_migration_uri is deprecated for removal in favor of two other options that
Oct  1 12:35:08 np0005464891 nova_compute[258947]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct  1 12:35:08 np0005464891 nova_compute[258947]: and ``live_migration_inbound_addr`` respectively.
Oct  1 12:35:08 np0005464891 nova_compute[258947]: ).  Its value may be silently ignored in the future.#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.488 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.488 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.488 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.488 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.488 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.489 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.489 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.489 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.489 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.489 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.489 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.489 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.490 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.490 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.490 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.490 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.490 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.490 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.490 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.rbd_secret_uuid        = 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.491 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.491 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.491 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.491 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.491 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.491 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.491 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.492 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.492 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.492 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.492 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.492 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.492 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.492 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.493 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.493 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.493 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.493 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.493 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.493 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.494 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.494 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.494 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.494 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.494 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.494 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.494 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.494 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.495 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.495 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.495 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.495 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.495 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.495 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.495 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.496 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.496 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.496 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.496 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.496 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.496 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.496 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.497 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.497 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.497 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.497 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.497 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.497 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.497 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.497 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.498 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.498 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.498 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.498 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.498 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.498 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.498 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.498 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.499 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.499 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.499 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.499 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.499 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.499 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.499 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.500 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.500 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.500 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.500 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.500 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.500 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.500 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.501 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.501 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.501 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.501 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.501 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.501 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.501 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.501 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.502 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.502 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.502 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.502 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.502 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.502 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.502 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.502 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.503 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.503 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.503 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.503 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.503 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.503 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.503 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.504 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.504 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.504 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.504 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.504 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.504 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.504 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.504 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.505 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.505 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.505 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.505 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.505 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.505 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.505 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.506 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.506 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.506 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.506 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.506 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.506 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.506 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.507 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.507 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.507 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.507 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.507 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.507 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.508 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.508 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.508 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.508 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.508 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.508 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.508 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.509 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.509 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.509 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.509 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.509 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.509 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.509 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.510 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.510 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.510 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.510 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.510 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.510 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.511 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.511 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.511 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.511 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.511 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.511 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.511 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.511 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.512 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.512 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.512 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.512 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.512 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.512 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.512 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.513 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.513 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.513 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.513 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.513 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.513 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.513 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.514 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.514 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.514 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.514 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.514 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.514 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.514 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.515 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.515 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.515 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.515 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.515 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.515 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.515 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.516 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.516 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.516 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.516 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.516 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.516 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.516 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.516 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.517 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.517 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.517 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.517 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.517 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.517 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.518 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.518 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.518 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.518 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.518 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.518 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.518 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.519 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.519 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.519 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.519 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.519 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.519 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.519 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.520 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.520 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.520 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.520 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.520 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.520 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.520 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.520 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.521 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.521 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.521 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.521 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.521 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.521 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.521 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.522 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.522 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.522 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.522 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.522 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.522 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.522 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.523 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.523 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.523 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.523 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.523 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.523 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.524 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.524 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.524 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.524 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.524 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.524 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.524 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.524 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.525 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.525 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.525 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.525 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.525 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.525 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.525 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.526 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.526 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.526 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.526 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.526 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.526 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.526 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.526 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.527 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.527 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.527 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.527 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.527 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.527 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.527 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.528 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.528 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.528 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.528 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.528 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.528 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.528 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.529 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.529 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.529 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.529 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.529 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.529 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.529 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.530 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.530 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.530 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.530 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.530 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.530 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.530 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.530 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.531 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.531 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.531 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.531 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.531 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.531 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.531 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.532 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.532 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.532 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.532 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.532 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.532 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.532 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.533 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.533 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.533 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.533 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.533 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.533 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.533 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.533 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.534 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.534 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.534 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.534 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.534 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.534 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.534 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.535 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.535 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.535 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.535 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.535 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.535 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.535 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.536 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.536 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.536 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.536 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.536 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.536 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.536 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.536 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.537 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.537 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.537 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.537 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.537 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.537 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.537 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.537 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.538 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.538 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.538 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.538 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.538 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.538 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.538 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.539 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.539 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.539 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.539 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.539 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.539 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.539 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.539 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.540 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.540 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.540 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.540 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.540 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.540 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.540 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.540 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.541 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.541 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.541 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.541 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.541 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.541 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.541 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.542 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.542 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.542 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.542 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.542 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.542 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.542 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.542 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.543 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.543 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.543 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.543 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.543 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.543 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.543 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.544 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.544 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.544 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.544 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.544 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.544 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.544 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.544 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.545 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.545 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.545 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.545 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.545 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.545 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.545 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.546 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.546 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.546 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.546 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.546 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.546 2 DEBUG oslo_service.service [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.547 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.559 2 DEBUG nova.virt.libvirt.host [None req-26ed7133-8d76-450f-8acf-08e366365d88 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.560 2 DEBUG nova.virt.libvirt.host [None req-26ed7133-8d76-450f-8acf-08e366365d88 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.560 2 DEBUG nova.virt.libvirt.host [None req-26ed7133-8d76-450f-8acf-08e366365d88 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.560 2 DEBUG nova.virt.libvirt.host [None req-26ed7133-8d76-450f-8acf-08e366365d88 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct  1 12:35:08 np0005464891 systemd[1]: Starting libvirt QEMU daemon...
Oct  1 12:35:08 np0005464891 systemd[1]: Started libvirt QEMU daemon.
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.663 2 DEBUG nova.virt.libvirt.host [None req-26ed7133-8d76-450f-8acf-08e366365d88 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7ff2734aedc0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.665 2 DEBUG nova.virt.libvirt.host [None req-26ed7133-8d76-450f-8acf-08e366365d88 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7ff2734aedc0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.666 2 INFO nova.virt.libvirt.driver [None req-26ed7133-8d76-450f-8acf-08e366365d88 - - - - - -] Connection event '1' reason 'None'#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.696 2 WARNING nova.virt.libvirt.driver [None req-26ed7133-8d76-450f-8acf-08e366365d88 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Oct  1 12:35:08 np0005464891 nova_compute[258947]: 2025-10-01 16:35:08.696 2 DEBUG nova.virt.libvirt.volume.mount [None req-26ed7133-8d76-450f-8acf-08e366365d88 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct  1 12:35:08 np0005464891 python3.9[259592]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  1 12:35:08 np0005464891 rsyslogd[1011]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 12:35:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:35:09 np0005464891 nova_compute[258947]: 2025-10-01 16:35:09.701 2 INFO nova.virt.libvirt.host [None req-26ed7133-8d76-450f-8acf-08e366365d88 - - - - - -] Libvirt host capabilities <capabilities>
Oct  1 12:35:09 np0005464891 nova_compute[258947]: 
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <host>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <uuid>9659e747-1637-4bf9-8b69-aeb4fd4304e0</uuid>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <cpu>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <arch>x86_64</arch>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model>EPYC-Rome-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <vendor>AMD</vendor>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <microcode version='16777317'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <signature family='23' model='49' stepping='0'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <maxphysaddr mode='emulate' bits='40'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature name='x2apic'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature name='tsc-deadline'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature name='osxsave'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature name='hypervisor'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature name='tsc_adjust'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature name='spec-ctrl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature name='stibp'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature name='arch-capabilities'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature name='ssbd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature name='cmp_legacy'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature name='topoext'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature name='virt-ssbd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature name='lbrv'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature name='tsc-scale'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature name='vmcb-clean'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature name='pause-filter'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature name='pfthreshold'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature name='svme-addr-chk'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature name='rdctl-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature name='skip-l1dfl-vmentry'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature name='mds-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature name='pschange-mc-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <pages unit='KiB' size='4'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <pages unit='KiB' size='2048'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <pages unit='KiB' size='1048576'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </cpu>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <power_management>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <suspend_mem/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </power_management>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <iommu support='no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <migration_features>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <live/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <uri_transports>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <uri_transport>tcp</uri_transport>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <uri_transport>rdma</uri_transport>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </uri_transports>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </migration_features>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <topology>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <cells num='1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <cell id='0'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:          <memory unit='KiB'>7864116</memory>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:          <pages unit='KiB' size='4'>1966029</pages>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:          <pages unit='KiB' size='2048'>0</pages>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:          <pages unit='KiB' size='1048576'>0</pages>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:          <distances>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:            <sibling id='0' value='10'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:          </distances>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:          <cpus num='8'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:          </cpus>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        </cell>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </cells>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </topology>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <cache>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </cache>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <secmodel>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model>selinux</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <doi>0</doi>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </secmodel>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <secmodel>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model>dac</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <doi>0</doi>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </secmodel>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  </host>
Oct  1 12:35:09 np0005464891 nova_compute[258947]: 
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <guest>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <os_type>hvm</os_type>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <arch name='i686'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <wordsize>32</wordsize>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <domain type='qemu'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <domain type='kvm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </arch>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <features>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <pae/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <nonpae/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <acpi default='on' toggle='yes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <apic default='on' toggle='no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <cpuselection/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <deviceboot/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <disksnapshot default='on' toggle='no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <externalSnapshot/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </features>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  </guest>
Oct  1 12:35:09 np0005464891 nova_compute[258947]: 
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <guest>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <os_type>hvm</os_type>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <arch name='x86_64'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <wordsize>64</wordsize>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <domain type='qemu'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <domain type='kvm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </arch>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <features>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <acpi default='on' toggle='yes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <apic default='on' toggle='no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <cpuselection/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <deviceboot/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <disksnapshot default='on' toggle='no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <externalSnapshot/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </features>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  </guest>
Oct  1 12:35:09 np0005464891 nova_compute[258947]: 
Oct  1 12:35:09 np0005464891 nova_compute[258947]: </capabilities>
Oct  1 12:35:09 np0005464891 nova_compute[258947]: 
Oct  1 12:35:09 np0005464891 nova_compute[258947]: 2025-10-01 16:35:09.725 2 DEBUG nova.virt.libvirt.host [None req-26ed7133-8d76-450f-8acf-08e366365d88 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct  1 12:35:09 np0005464891 nova_compute[258947]: 2025-10-01 16:35:09.773 2 DEBUG nova.virt.libvirt.host [None req-26ed7133-8d76-450f-8acf-08e366365d88 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct  1 12:35:09 np0005464891 nova_compute[258947]: <domainCapabilities>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <path>/usr/libexec/qemu-kvm</path>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <domain>kvm</domain>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <arch>i686</arch>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <vcpu max='4096'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <iothreads supported='yes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <os supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <enum name='firmware'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <loader supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='type'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>rom</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>pflash</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='readonly'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>yes</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>no</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='secure'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>no</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </loader>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  </os>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <cpu>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <mode name='host-passthrough' supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='hostPassthroughMigratable'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>on</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>off</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </mode>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <mode name='maximum' supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='maximumMigratable'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>on</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>off</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </mode>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <mode name='host-model' supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <vendor>AMD</vendor>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='x2apic'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='tsc-deadline'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='hypervisor'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='tsc_adjust'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='spec-ctrl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='stibp'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='arch-capabilities'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='ssbd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='cmp_legacy'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='overflow-recov'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='succor'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='ibrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='amd-ssbd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='virt-ssbd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='lbrv'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='tsc-scale'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='vmcb-clean'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='flushbyasid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='pause-filter'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='pfthreshold'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='svme-addr-chk'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='rdctl-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='mds-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='pschange-mc-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='gds-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='rfds-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='disable' name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </mode>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <mode name='custom' supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Broadwell'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-IBRS'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-noTSX'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-v4'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server-v4'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server-v5'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cooperlake'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cooperlake-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cooperlake-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Denverton'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='mpx'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Denverton-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='mpx'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Denverton-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Denverton-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Dhyana-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Genoa'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amd-psfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='auto-ibrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='no-nested-data-bp'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='null-sel-clr-base'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='stibp-always-on'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Genoa-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amd-psfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='auto-ibrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='no-nested-data-bp'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='null-sel-clr-base'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='stibp-always-on'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Milan'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Milan-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Milan-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amd-psfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='no-nested-data-bp'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='null-sel-clr-base'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='stibp-always-on'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Rome'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Rome-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Rome-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Rome-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-v4'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='GraniteRapids'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='mcdt-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pbrsb-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='prefetchiti'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='GraniteRapids-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='mcdt-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pbrsb-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='prefetchiti'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='GraniteRapids-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx10'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx10-128'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx10-256'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx10-512'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='mcdt-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pbrsb-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='prefetchiti'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Haswell'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Haswell-IBRS'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Haswell-noTSX'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Haswell-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Haswell-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Haswell-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Haswell-v4'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-noTSX'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v4'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v5'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v6'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v7'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='IvyBridge'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='IvyBridge-IBRS'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='IvyBridge-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='IvyBridge-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='KnightsMill'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-4fmaps'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-4vnniw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512er'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512pf'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='KnightsMill-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-4fmaps'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-4vnniw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512er'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512pf'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Opteron_G4'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fma4'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xop'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Opteron_G4-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fma4'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xop'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Opteron_G5'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fma4'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='tbm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xop'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Opteron_G5-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fma4'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='tbm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xop'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='SapphireRapids'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='SapphireRapids-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='SapphireRapids-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='SapphireRapids-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='SierraForest'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-ne-convert'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni-int8'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='cmpccxadd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='mcdt-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pbrsb-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='SierraForest-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-ne-convert'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni-int8'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='cmpccxadd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='mcdt-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pbrsb-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client-IBRS'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client-v4'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-IBRS'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-v4'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-v5'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Snowridge'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='core-capability'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='mpx'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='split-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Snowridge-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='core-capability'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='mpx'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='split-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Snowridge-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='core-capability'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='split-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Snowridge-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='core-capability'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='split-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Snowridge-v4'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='athlon'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='3dnow'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='3dnowext'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='athlon-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='3dnow'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='3dnowext'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='core2duo'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='core2duo-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='coreduo'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='coreduo-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='n270'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='n270-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='phenom'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='3dnow'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='3dnowext'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='phenom-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='3dnow'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='3dnowext'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </mode>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  </cpu>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <memoryBacking supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <enum name='sourceType'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <value>file</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <value>anonymous</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <value>memfd</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  </memoryBacking>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <devices>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <disk supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='diskDevice'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>disk</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>cdrom</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>floppy</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>lun</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='bus'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>fdc</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>scsi</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>virtio</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>usb</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>sata</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='model'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>virtio</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>virtio-transitional</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>virtio-non-transitional</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </disk>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <graphics supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='type'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>vnc</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>egl-headless</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>dbus</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </graphics>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <video supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='modelType'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>vga</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>cirrus</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>virtio</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>none</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>bochs</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>ramfb</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </video>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <hostdev supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='mode'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>subsystem</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='startupPolicy'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>default</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>mandatory</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>requisite</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>optional</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='subsysType'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>usb</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>pci</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>scsi</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='capsType'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='pciBackend'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </hostdev>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <rng supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='model'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>virtio</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>virtio-transitional</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>virtio-non-transitional</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='backendModel'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>random</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>egd</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>builtin</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </rng>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <filesystem supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='driverType'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>path</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>handle</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>virtiofs</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </filesystem>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <tpm supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='model'>
Oct  1 12:35:09 np0005464891 python3.9[259827]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>tpm-tis</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>tpm-crb</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='backendModel'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>emulator</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>external</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='backendVersion'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>2.0</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </tpm>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <redirdev supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='bus'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>usb</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </redirdev>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <channel supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='type'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>pty</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>unix</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </channel>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <crypto supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='model'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='type'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>qemu</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='backendModel'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>builtin</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </crypto>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <interface supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='backendType'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>default</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>passt</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </interface>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <panic supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='model'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>isa</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>hyperv</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </panic>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  </devices>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <features>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <gic supported='no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <vmcoreinfo supported='yes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <genid supported='yes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <backingStoreInput supported='yes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <backup supported='yes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <async-teardown supported='yes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <ps2 supported='yes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <sev supported='no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <sgx supported='no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <hyperv supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='features'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>relaxed</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>vapic</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>spinlocks</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>vpindex</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>runtime</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>synic</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>stimer</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>reset</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>vendor_id</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>frequencies</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>reenlightenment</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>tlbflush</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>ipi</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>avic</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>emsr_bitmap</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>xmm_input</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </hyperv>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <launchSecurity supported='no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  </features>
Oct  1 12:35:09 np0005464891 nova_compute[258947]: </domainCapabilities>
Oct  1 12:35:09 np0005464891 nova_compute[258947]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  1 12:35:09 np0005464891 nova_compute[258947]: 2025-10-01 16:35:09.784 2 DEBUG nova.virt.libvirt.host [None req-26ed7133-8d76-450f-8acf-08e366365d88 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct  1 12:35:09 np0005464891 nova_compute[258947]: <domainCapabilities>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <path>/usr/libexec/qemu-kvm</path>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <domain>kvm</domain>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <arch>i686</arch>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <vcpu max='240'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <iothreads supported='yes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <os supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <enum name='firmware'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <loader supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='type'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>rom</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>pflash</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='readonly'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>yes</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>no</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='secure'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>no</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </loader>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  </os>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <cpu>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <mode name='host-passthrough' supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='hostPassthroughMigratable'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>on</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>off</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </mode>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <mode name='maximum' supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='maximumMigratable'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>on</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>off</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </mode>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <mode name='host-model' supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <vendor>AMD</vendor>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='x2apic'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='tsc-deadline'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='hypervisor'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='tsc_adjust'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='spec-ctrl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='stibp'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='arch-capabilities'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='ssbd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='cmp_legacy'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='overflow-recov'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='succor'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='ibrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='amd-ssbd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='virt-ssbd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='lbrv'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='tsc-scale'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='vmcb-clean'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='flushbyasid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='pause-filter'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='pfthreshold'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='svme-addr-chk'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='rdctl-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='mds-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='pschange-mc-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='gds-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='rfds-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='disable' name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </mode>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <mode name='custom' supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Broadwell'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-IBRS'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-noTSX'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-v4'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server-v4'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server-v5'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cooperlake'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cooperlake-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cooperlake-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Denverton'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='mpx'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Denverton-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='mpx'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Denverton-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Denverton-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Dhyana-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Genoa'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amd-psfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='auto-ibrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='no-nested-data-bp'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='null-sel-clr-base'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='stibp-always-on'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Genoa-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amd-psfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='auto-ibrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='no-nested-data-bp'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='null-sel-clr-base'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='stibp-always-on'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Milan'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Milan-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Milan-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amd-psfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='no-nested-data-bp'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='null-sel-clr-base'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='stibp-always-on'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Rome'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Rome-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Rome-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Rome-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-v4'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='GraniteRapids'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='mcdt-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pbrsb-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='prefetchiti'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='GraniteRapids-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='mcdt-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pbrsb-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='prefetchiti'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='GraniteRapids-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx10'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx10-128'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx10-256'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx10-512'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='mcdt-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pbrsb-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='prefetchiti'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Haswell'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Haswell-IBRS'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Haswell-noTSX'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Haswell-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Haswell-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Haswell-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Haswell-v4'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-noTSX'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v4'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v5'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v6'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v7'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='IvyBridge'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='IvyBridge-IBRS'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='IvyBridge-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='IvyBridge-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='KnightsMill'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-4fmaps'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-4vnniw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512er'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512pf'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='KnightsMill-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-4fmaps'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-4vnniw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512er'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512pf'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Opteron_G4'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fma4'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xop'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Opteron_G4-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fma4'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xop'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Opteron_G5'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fma4'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='tbm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xop'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Opteron_G5-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fma4'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='tbm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xop'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='SapphireRapids'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='SapphireRapids-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='SapphireRapids-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='SapphireRapids-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='SierraForest'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-ne-convert'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni-int8'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='cmpccxadd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='mcdt-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pbrsb-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='SierraForest-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-ne-convert'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni-int8'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='cmpccxadd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='mcdt-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pbrsb-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client-IBRS'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client-v4'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-IBRS'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-v4'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-v5'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Snowridge'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='core-capability'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='mpx'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='split-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Snowridge-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='core-capability'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='mpx'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='split-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Snowridge-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='core-capability'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='split-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Snowridge-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:09 np0005464891 systemd[1]: Stopping nova_compute container...
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='core-capability'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='split-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Snowridge-v4'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='athlon'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='3dnow'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='3dnowext'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='athlon-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='3dnow'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='3dnowext'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='core2duo'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='core2duo-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='coreduo'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='coreduo-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='n270'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='n270-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='phenom'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='3dnow'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='3dnowext'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='phenom-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='3dnow'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='3dnowext'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </mode>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  </cpu>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <memoryBacking supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <enum name='sourceType'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <value>file</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <value>anonymous</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <value>memfd</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  </memoryBacking>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <devices>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <disk supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='diskDevice'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>disk</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>cdrom</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>floppy</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>lun</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='bus'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>ide</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>fdc</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>scsi</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>virtio</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>usb</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>sata</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='model'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>virtio</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>virtio-transitional</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>virtio-non-transitional</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </disk>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <graphics supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='type'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>vnc</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>egl-headless</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>dbus</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </graphics>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <video supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='modelType'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>vga</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>cirrus</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>virtio</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>none</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>bochs</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>ramfb</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </video>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <hostdev supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='mode'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>subsystem</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='startupPolicy'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>default</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>mandatory</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>requisite</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>optional</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='subsysType'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>usb</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>pci</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>scsi</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='capsType'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='pciBackend'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </hostdev>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <rng supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='model'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>virtio</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>virtio-transitional</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>virtio-non-transitional</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='backendModel'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>random</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>egd</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>builtin</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </rng>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <filesystem supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='driverType'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>path</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>handle</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>virtiofs</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </filesystem>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <tpm supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='model'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>tpm-tis</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>tpm-crb</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='backendModel'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>emulator</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>external</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='backendVersion'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>2.0</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </tpm>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <redirdev supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='bus'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>usb</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </redirdev>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <channel supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='type'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>pty</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>unix</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </channel>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <crypto supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='model'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='type'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>qemu</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='backendModel'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>builtin</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </crypto>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <interface supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='backendType'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>default</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>passt</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </interface>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <panic supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='model'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>isa</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>hyperv</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </panic>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  </devices>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <features>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <gic supported='no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <vmcoreinfo supported='yes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <genid supported='yes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <backingStoreInput supported='yes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <backup supported='yes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <async-teardown supported='yes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <ps2 supported='yes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <sev supported='no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <sgx supported='no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <hyperv supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='features'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>relaxed</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>vapic</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>spinlocks</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>vpindex</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>runtime</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>synic</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>stimer</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>reset</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>vendor_id</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>frequencies</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>reenlightenment</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>tlbflush</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>ipi</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>avic</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>emsr_bitmap</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>xmm_input</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </hyperv>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <launchSecurity supported='no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  </features>
Oct  1 12:35:09 np0005464891 nova_compute[258947]: </domainCapabilities>
Oct  1 12:35:09 np0005464891 nova_compute[258947]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  1 12:35:09 np0005464891 nova_compute[258947]: 2025-10-01 16:35:09.878 2 DEBUG nova.virt.libvirt.host [None req-26ed7133-8d76-450f-8acf-08e366365d88 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct  1 12:35:09 np0005464891 nova_compute[258947]: 2025-10-01 16:35:09.883 2 DEBUG nova.virt.libvirt.host [None req-26ed7133-8d76-450f-8acf-08e366365d88 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct  1 12:35:09 np0005464891 nova_compute[258947]: <domainCapabilities>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <path>/usr/libexec/qemu-kvm</path>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <domain>kvm</domain>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <arch>x86_64</arch>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <vcpu max='4096'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <iothreads supported='yes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <os supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <enum name='firmware'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <value>efi</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <loader supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='type'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>rom</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>pflash</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='readonly'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>yes</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>no</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='secure'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>yes</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>no</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </loader>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  </os>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:  <cpu>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <mode name='host-passthrough' supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='hostPassthroughMigratable'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>on</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>off</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </mode>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <mode name='maximum' supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <enum name='maximumMigratable'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>on</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <value>off</value>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </mode>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <mode name='host-model' supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <vendor>AMD</vendor>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='x2apic'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='tsc-deadline'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='hypervisor'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='tsc_adjust'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='spec-ctrl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='stibp'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='arch-capabilities'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='ssbd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='cmp_legacy'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='overflow-recov'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='succor'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='ibrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='amd-ssbd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='virt-ssbd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='lbrv'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='tsc-scale'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='vmcb-clean'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='flushbyasid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='pause-filter'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='pfthreshold'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='svme-addr-chk'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='rdctl-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='mds-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='pschange-mc-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='gds-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='require' name='rfds-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <feature policy='disable' name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    </mode>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:    <mode name='custom' supported='yes'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Broadwell'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-IBRS'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-noTSX'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-v4'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server-v4'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server-v5'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cooperlake'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cooperlake-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Cooperlake-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Denverton'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='mpx'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Denverton-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='mpx'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Denverton-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Denverton-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Dhyana-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Genoa'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amd-psfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='auto-ibrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='no-nested-data-bp'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='null-sel-clr-base'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='stibp-always-on'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Genoa-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amd-psfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='auto-ibrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='no-nested-data-bp'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='null-sel-clr-base'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='stibp-always-on'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Milan'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Milan-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Milan-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amd-psfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='no-nested-data-bp'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='null-sel-clr-base'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='stibp-always-on'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Rome'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Rome-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Rome-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Rome-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='EPYC-v4'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='GraniteRapids'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='mcdt-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pbrsb-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='prefetchiti'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='GraniteRapids-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='mcdt-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pbrsb-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='prefetchiti'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='GraniteRapids-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx10'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx10-128'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx10-256'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx10-512'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='mcdt-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pbrsb-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='prefetchiti'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Haswell'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Haswell-IBRS'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Haswell-noTSX'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Haswell-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Haswell-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Haswell-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Haswell-v4'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-noTSX'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v1'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v2'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v3'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v4'>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:09 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v5'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v6'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v7'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='IvyBridge'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='IvyBridge-IBRS'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='IvyBridge-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='IvyBridge-v2'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='KnightsMill'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-4fmaps'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-4vnniw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512er'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512pf'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='KnightsMill-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-4fmaps'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-4vnniw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512er'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512pf'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Opteron_G4'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fma4'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xop'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Opteron_G4-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fma4'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xop'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Opteron_G5'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fma4'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='tbm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xop'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Opteron_G5-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fma4'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='tbm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xop'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='SapphireRapids'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='SapphireRapids-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='SapphireRapids-v2'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='SapphireRapids-v3'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='SierraForest'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-ne-convert'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-vnni-int8'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='cmpccxadd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='mcdt-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pbrsb-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='SierraForest-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-ne-convert'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-vnni-int8'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='cmpccxadd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='mcdt-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pbrsb-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client-IBRS'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client-v2'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client-v3'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client-v4'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-IBRS'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-v2'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-v3'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-v4'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-v5'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Snowridge'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='core-capability'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='mpx'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='split-lock-detect'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Snowridge-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='core-capability'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='mpx'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='split-lock-detect'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Snowridge-v2'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='core-capability'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='split-lock-detect'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Snowridge-v3'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='core-capability'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='split-lock-detect'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Snowridge-v4'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='athlon'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='3dnow'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='3dnowext'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='athlon-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='3dnow'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='3dnowext'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='core2duo'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='core2duo-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='coreduo'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='coreduo-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='n270'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='n270-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='phenom'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='3dnow'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='3dnowext'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='phenom-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='3dnow'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='3dnowext'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </mode>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:  </cpu>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:  <memoryBacking supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <enum name='sourceType'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <value>file</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <value>anonymous</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <value>memfd</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:  </memoryBacking>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:  <devices>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <disk supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='diskDevice'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>disk</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>cdrom</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>floppy</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>lun</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='bus'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>fdc</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>scsi</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>virtio</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>usb</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>sata</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='model'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>virtio</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>virtio-transitional</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>virtio-non-transitional</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </disk>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <graphics supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='type'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>vnc</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>egl-headless</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>dbus</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </graphics>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <video supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='modelType'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>vga</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>cirrus</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>virtio</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>none</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>bochs</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>ramfb</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </video>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <hostdev supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='mode'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>subsystem</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='startupPolicy'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>default</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>mandatory</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>requisite</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>optional</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='subsysType'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>usb</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>pci</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>scsi</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='capsType'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='pciBackend'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </hostdev>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <rng supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='model'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>virtio</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>virtio-transitional</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>virtio-non-transitional</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='backendModel'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>random</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>egd</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>builtin</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </rng>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <filesystem supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='driverType'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>path</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>handle</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>virtiofs</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </filesystem>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <tpm supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='model'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>tpm-tis</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>tpm-crb</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='backendModel'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>emulator</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>external</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='backendVersion'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>2.0</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </tpm>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <redirdev supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='bus'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>usb</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </redirdev>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <channel supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='type'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>pty</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>unix</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </channel>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <crypto supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='model'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='type'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>qemu</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='backendModel'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>builtin</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </crypto>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <interface supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='backendType'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>default</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>passt</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </interface>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <panic supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='model'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>isa</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>hyperv</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </panic>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:  </devices>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:  <features>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <gic supported='no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <vmcoreinfo supported='yes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <genid supported='yes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <backingStoreInput supported='yes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <backup supported='yes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <async-teardown supported='yes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <ps2 supported='yes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <sev supported='no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <sgx supported='no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <hyperv supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='features'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>relaxed</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>vapic</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>spinlocks</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>vpindex</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>runtime</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>synic</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>stimer</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>reset</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>vendor_id</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>frequencies</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>reenlightenment</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>tlbflush</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>ipi</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>avic</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>emsr_bitmap</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>xmm_input</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </hyperv>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <launchSecurity supported='no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:  </features>
Oct  1 12:35:10 np0005464891 nova_compute[258947]: </domainCapabilities>
Oct  1 12:35:10 np0005464891 nova_compute[258947]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  1 12:35:10 np0005464891 nova_compute[258947]: 2025-10-01 16:35:09.945 2 DEBUG nova.virt.libvirt.host [None req-26ed7133-8d76-450f-8acf-08e366365d88 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct  1 12:35:10 np0005464891 nova_compute[258947]: <domainCapabilities>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:  <path>/usr/libexec/qemu-kvm</path>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:  <domain>kvm</domain>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:  <arch>x86_64</arch>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:  <vcpu max='240'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:  <iothreads supported='yes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:  <os supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <enum name='firmware'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <loader supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='type'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>rom</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>pflash</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='readonly'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>yes</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>no</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='secure'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>no</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </loader>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:  </os>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:  <cpu>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <mode name='host-passthrough' supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='hostPassthroughMigratable'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>on</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>off</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </mode>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <mode name='maximum' supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='maximumMigratable'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>on</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>off</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </mode>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <mode name='host-model' supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <vendor>AMD</vendor>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='x2apic'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='tsc-deadline'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='hypervisor'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='tsc_adjust'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='spec-ctrl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='stibp'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='arch-capabilities'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='ssbd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='cmp_legacy'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='overflow-recov'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='succor'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='ibrs'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='amd-ssbd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='virt-ssbd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='lbrv'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='tsc-scale'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='vmcb-clean'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='flushbyasid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='pause-filter'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='pfthreshold'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='svme-addr-chk'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='rdctl-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='mds-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='pschange-mc-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='gds-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='require' name='rfds-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <feature policy='disable' name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </mode>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <mode name='custom' supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Broadwell'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-IBRS'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-noTSX'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-v2'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-v3'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Broadwell-v4'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server-v2'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server-v3'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server-v4'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Cascadelake-Server-v5'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Cooperlake'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Cooperlake-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Cooperlake-v2'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Denverton'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='mpx'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Denverton-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='mpx'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Denverton-v2'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Denverton-v3'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Dhyana-v2'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Genoa'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amd-psfd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='auto-ibrs'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='no-nested-data-bp'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='null-sel-clr-base'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='stibp-always-on'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Genoa-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amd-psfd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='auto-ibrs'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='no-nested-data-bp'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='null-sel-clr-base'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='stibp-always-on'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Milan'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Milan-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Milan-v2'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amd-psfd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='no-nested-data-bp'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='null-sel-clr-base'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='stibp-always-on'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Rome'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Rome-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Rome-v2'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='EPYC-Rome-v3'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='EPYC-v3'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='EPYC-v4'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='GraniteRapids'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-fp16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='mcdt-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pbrsb-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='prefetchiti'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='GraniteRapids-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-fp16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='mcdt-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pbrsb-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='prefetchiti'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='GraniteRapids-v2'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-fp16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx10'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx10-128'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx10-256'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx10-512'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='mcdt-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pbrsb-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='prefetchiti'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Haswell'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Haswell-IBRS'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Haswell-noTSX'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Haswell-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Haswell-v2'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Haswell-v3'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Haswell-v4'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-noTSX'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v2'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v3'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v4'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v5'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v6'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Icelake-Server-v7'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='IvyBridge'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='IvyBridge-IBRS'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='IvyBridge-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='IvyBridge-v2'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='KnightsMill'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-4fmaps'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-4vnniw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512er'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512pf'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='KnightsMill-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-4fmaps'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-4vnniw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512er'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512pf'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Opteron_G4'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fma4'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xop'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Opteron_G4-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fma4'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xop'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Opteron_G5'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fma4'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='tbm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xop'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Opteron_G5-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fma4'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='tbm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xop'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='SapphireRapids'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='SapphireRapids-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='SapphireRapids-v2'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='SapphireRapids-v3'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-int8'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='amx-tile'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-bf16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-fp16'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bitalg'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrc'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fzrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='la57'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='taa-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xfd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='SierraForest'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-ne-convert'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-vnni-int8'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='cmpccxadd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='mcdt-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pbrsb-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='SierraForest-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-ifma'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-ne-convert'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-vnni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx-vnni-int8'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='cmpccxadd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fbsdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='fsrs'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ibrs-all'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='mcdt-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pbrsb-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='psdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='serialize'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vaes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client-IBRS'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client-v2'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client-v3'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Client-v4'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-IBRS'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-v2'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='hle'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='rtm'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-v3'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-v4'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Skylake-Server-v5'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512bw'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512cd'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512dq'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512f'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='avx512vl'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='invpcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pcid'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='pku'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Snowridge'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='core-capability'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='mpx'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='split-lock-detect'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Snowridge-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='core-capability'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='mpx'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='split-lock-detect'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Snowridge-v2'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='core-capability'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='split-lock-detect'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Snowridge-v3'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='core-capability'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='split-lock-detect'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='Snowridge-v4'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='cldemote'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='erms'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='gfni'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdir64b'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='movdiri'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='xsaves'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='athlon'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='3dnow'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='3dnowext'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='athlon-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='3dnow'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='3dnowext'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='core2duo'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='core2duo-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='coreduo'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='coreduo-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='n270'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='n270-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='ss'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='phenom'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='3dnow'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='3dnowext'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <blockers model='phenom-v1'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='3dnow'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <feature name='3dnowext'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </blockers>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </mode>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:  </cpu>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:  <memoryBacking supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <enum name='sourceType'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <value>file</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <value>anonymous</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <value>memfd</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:  </memoryBacking>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:  <devices>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <disk supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='diskDevice'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>disk</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>cdrom</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>floppy</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>lun</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='bus'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>ide</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>fdc</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>scsi</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>virtio</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>usb</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>sata</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='model'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>virtio</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>virtio-transitional</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>virtio-non-transitional</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </disk>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <graphics supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='type'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>vnc</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>egl-headless</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>dbus</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </graphics>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <video supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='modelType'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>vga</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>cirrus</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>virtio</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>none</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>bochs</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>ramfb</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </video>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <hostdev supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='mode'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>subsystem</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='startupPolicy'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>default</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>mandatory</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>requisite</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>optional</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='subsysType'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>usb</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>pci</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>scsi</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='capsType'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='pciBackend'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </hostdev>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <rng supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='model'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>virtio</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>virtio-transitional</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>virtio-non-transitional</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='backendModel'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>random</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>egd</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>builtin</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </rng>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <filesystem supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='driverType'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>path</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>handle</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>virtiofs</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </filesystem>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <tpm supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='model'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>tpm-tis</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>tpm-crb</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='backendModel'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>emulator</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>external</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='backendVersion'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>2.0</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </tpm>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <redirdev supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='bus'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>usb</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </redirdev>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <channel supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='type'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>pty</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>unix</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </channel>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <crypto supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='model'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='type'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>qemu</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='backendModel'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>builtin</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </crypto>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <interface supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='backendType'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>default</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>passt</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </interface>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <panic supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='model'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>isa</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>hyperv</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </panic>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:  </devices>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:  <features>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <gic supported='no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <vmcoreinfo supported='yes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <genid supported='yes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <backingStoreInput supported='yes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <backup supported='yes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <async-teardown supported='yes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <ps2 supported='yes'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <sev supported='no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <sgx supported='no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <hyperv supported='yes'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      <enum name='features'>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>relaxed</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>vapic</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>spinlocks</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>vpindex</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>runtime</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>synic</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>stimer</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>reset</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>vendor_id</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>frequencies</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>reenlightenment</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>tlbflush</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>ipi</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>avic</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>emsr_bitmap</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:        <value>xmm_input</value>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:      </enum>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    </hyperv>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:    <launchSecurity supported='no'/>
Oct  1 12:35:10 np0005464891 nova_compute[258947]:  </features>
Oct  1 12:35:10 np0005464891 nova_compute[258947]: </domainCapabilities>
Oct  1 12:35:10 np0005464891 nova_compute[258947]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  1 12:35:10 np0005464891 nova_compute[258947]: 2025-10-01 16:35:10.000 2 DEBUG nova.virt.libvirt.host [None req-26ed7133-8d76-450f-8acf-08e366365d88 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct  1 12:35:10 np0005464891 nova_compute[258947]: 2025-10-01 16:35:10.000 2 INFO nova.virt.libvirt.host [None req-26ed7133-8d76-450f-8acf-08e366365d88 - - - - - -] Secure Boot support detected#033[00m
Oct  1 12:35:10 np0005464891 nova_compute[258947]: 2025-10-01 16:35:10.002 2 INFO nova.virt.libvirt.driver [None req-26ed7133-8d76-450f-8acf-08e366365d88 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct  1 12:35:10 np0005464891 nova_compute[258947]: 2025-10-01 16:35:10.002 2 INFO nova.virt.libvirt.driver [None req-26ed7133-8d76-450f-8acf-08e366365d88 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct  1 12:35:10 np0005464891 nova_compute[258947]: 2025-10-01 16:35:10.005 2 DEBUG oslo_concurrency.lockutils [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:35:10 np0005464891 nova_compute[258947]: 2025-10-01 16:35:10.006 2 DEBUG oslo_concurrency.lockutils [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:35:10 np0005464891 nova_compute[258947]: 2025-10-01 16:35:10.006 2 DEBUG oslo_concurrency.lockutils [None req-a6694a9e-f29c-4b9d-bf3f-7591a64bc3e7 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:35:10 np0005464891 virtqemud[259614]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct  1 12:35:10 np0005464891 systemd[1]: libpod-95bfbac6091687614f832a752b410d5cd94a73194023479bab0267ec68c91b43.scope: Deactivated successfully.
Oct  1 12:35:10 np0005464891 conmon[258947]: conmon 95bfbac6091687614f83 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-95bfbac6091687614f832a752b410d5cd94a73194023479bab0267ec68c91b43.scope/container/memory.events
Oct  1 12:35:10 np0005464891 virtqemud[259614]: hostname: compute-0
Oct  1 12:35:10 np0005464891 virtqemud[259614]: End of file while reading data: Input/output error
Oct  1 12:35:10 np0005464891 systemd[1]: libpod-95bfbac6091687614f832a752b410d5cd94a73194023479bab0267ec68c91b43.scope: Consumed 3.070s CPU time.
Oct  1 12:35:10 np0005464891 podman[259835]: 2025-10-01 16:35:10.38729022 +0000 UTC m=+0.427378820 container stop 95bfbac6091687614f832a752b410d5cd94a73194023479bab0267ec68c91b43 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 12:35:10 np0005464891 podman[259835]: 2025-10-01 16:35:10.421449216 +0000 UTC m=+0.461543436 container died 95bfbac6091687614f832a752b410d5cd94a73194023479bab0267ec68c91b43 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3)
Oct  1 12:35:10 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-95bfbac6091687614f832a752b410d5cd94a73194023479bab0267ec68c91b43-userdata-shm.mount: Deactivated successfully.
Oct  1 12:35:10 np0005464891 systemd[1]: var-lib-containers-storage-overlay-b82bed872b6e5155e3cbf5a28c694cf93a872670892c9fcc5808dab0c8de6a90-merged.mount: Deactivated successfully.
Oct  1 12:35:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:11 np0005464891 podman[259835]: 2025-10-01 16:35:11.271031831 +0000 UTC m=+1.311120421 container cleanup 95bfbac6091687614f832a752b410d5cd94a73194023479bab0267ec68c91b43 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=nova_compute)
Oct  1 12:35:11 np0005464891 podman[259835]: nova_compute
Oct  1 12:35:11 np0005464891 podman[259861]: nova_compute
Oct  1 12:35:11 np0005464891 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct  1 12:35:11 np0005464891 systemd[1]: Stopped nova_compute container.
Oct  1 12:35:11 np0005464891 systemd[1]: Starting nova_compute container...
Oct  1 12:35:11 np0005464891 podman[259862]: 2025-10-01 16:35:11.401410549 +0000 UTC m=+0.088485697 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:35:11 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:35:11 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82bed872b6e5155e3cbf5a28c694cf93a872670892c9fcc5808dab0c8de6a90/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:11 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82bed872b6e5155e3cbf5a28c694cf93a872670892c9fcc5808dab0c8de6a90/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:11 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82bed872b6e5155e3cbf5a28c694cf93a872670892c9fcc5808dab0c8de6a90/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:11 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82bed872b6e5155e3cbf5a28c694cf93a872670892c9fcc5808dab0c8de6a90/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:11 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82bed872b6e5155e3cbf5a28c694cf93a872670892c9fcc5808dab0c8de6a90/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:11 np0005464891 podman[259891]: 2025-10-01 16:35:11.48828764 +0000 UTC m=+0.099252778 container init 95bfbac6091687614f832a752b410d5cd94a73194023479bab0267ec68c91b43 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible)
Oct  1 12:35:11 np0005464891 podman[259891]: 2025-10-01 16:35:11.50008865 +0000 UTC m=+0.111053768 container start 95bfbac6091687614f832a752b410d5cd94a73194023479bab0267ec68c91b43 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 12:35:11 np0005464891 podman[259891]: nova_compute
Oct  1 12:35:11 np0005464891 nova_compute[259907]: + sudo -E kolla_set_configs
Oct  1 12:35:11 np0005464891 systemd[1]: Started nova_compute container.
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Validating config file
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Copying service configuration files
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Deleting /etc/ceph
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Creating directory /etc/ceph
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Setting permission for /etc/ceph
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Writing out command to execute
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  1 12:35:11 np0005464891 nova_compute[259907]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  1 12:35:11 np0005464891 nova_compute[259907]: ++ cat /run_command
Oct  1 12:35:11 np0005464891 nova_compute[259907]: + CMD=nova-compute
Oct  1 12:35:11 np0005464891 nova_compute[259907]: + ARGS=
Oct  1 12:35:11 np0005464891 nova_compute[259907]: + sudo kolla_copy_cacerts
Oct  1 12:35:11 np0005464891 nova_compute[259907]: + [[ ! -n '' ]]
Oct  1 12:35:11 np0005464891 nova_compute[259907]: + . kolla_extend_start
Oct  1 12:35:11 np0005464891 nova_compute[259907]: Running command: 'nova-compute'
Oct  1 12:35:11 np0005464891 nova_compute[259907]: + echo 'Running command: '\''nova-compute'\'''
Oct  1 12:35:11 np0005464891 nova_compute[259907]: + umask 0022
Oct  1 12:35:11 np0005464891 nova_compute[259907]: + exec nova-compute
Oct  1 12:35:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:35:12
Oct  1 12:35:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:35:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:35:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['backups', '.rgw.root', 'cephfs.cephfs.data', 'images', '.mgr', 'default.rgw.log', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes']
Oct  1 12:35:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:35:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:35:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:35:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:35:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:35:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:35:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:35:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:35:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:35:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:35:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:35:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:35:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:35:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:35:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:35:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:35:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:35:12 np0005464891 python3.9[260070]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  1 12:35:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:35:12.432 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:35:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:35:12.432 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:35:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:35:12.432 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:35:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:12 np0005464891 systemd[1]: Started libpod-conmon-dc9e95cdcf507d39428c807d13e274fb2c0edf370080d67eb9e8e0a4fffaf676.scope.
Oct  1 12:35:12 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:35:12 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb6453b077f200c623eced5a128c4a8cdf132bd0598d186474730d8ab4c091de/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:12 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb6453b077f200c623eced5a128c4a8cdf132bd0598d186474730d8ab4c091de/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:12 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb6453b077f200c623eced5a128c4a8cdf132bd0598d186474730d8ab4c091de/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:12 np0005464891 podman[260096]: 2025-10-01 16:35:12.72723071 +0000 UTC m=+0.161841929 container init dc9e95cdcf507d39428c807d13e274fb2c0edf370080d67eb9e8e0a4fffaf676 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init)
Oct  1 12:35:12 np0005464891 podman[260096]: 2025-10-01 16:35:12.738139955 +0000 UTC m=+0.172751174 container start dc9e95cdcf507d39428c807d13e274fb2c0edf370080d67eb9e8e0a4fffaf676 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:35:12 np0005464891 python3.9[260070]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct  1 12:35:12 np0005464891 nova_compute_init[260118]: INFO:nova_statedir:Applying nova statedir ownership
Oct  1 12:35:12 np0005464891 nova_compute_init[260118]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct  1 12:35:12 np0005464891 nova_compute_init[260118]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct  1 12:35:12 np0005464891 nova_compute_init[260118]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct  1 12:35:12 np0005464891 nova_compute_init[260118]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct  1 12:35:12 np0005464891 nova_compute_init[260118]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct  1 12:35:12 np0005464891 nova_compute_init[260118]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct  1 12:35:12 np0005464891 nova_compute_init[260118]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct  1 12:35:12 np0005464891 nova_compute_init[260118]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct  1 12:35:12 np0005464891 nova_compute_init[260118]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct  1 12:35:12 np0005464891 nova_compute_init[260118]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct  1 12:35:12 np0005464891 nova_compute_init[260118]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct  1 12:35:12 np0005464891 nova_compute_init[260118]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct  1 12:35:12 np0005464891 nova_compute_init[260118]: INFO:nova_statedir:Nova statedir ownership complete
Oct  1 12:35:12 np0005464891 systemd[1]: libpod-dc9e95cdcf507d39428c807d13e274fb2c0edf370080d67eb9e8e0a4fffaf676.scope: Deactivated successfully.
Oct  1 12:35:12 np0005464891 podman[260119]: 2025-10-01 16:35:12.83406747 +0000 UTC m=+0.053932921 container died dc9e95cdcf507d39428c807d13e274fb2c0edf370080d67eb9e8e0a4fffaf676 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:35:12 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dc9e95cdcf507d39428c807d13e274fb2c0edf370080d67eb9e8e0a4fffaf676-userdata-shm.mount: Deactivated successfully.
Oct  1 12:35:12 np0005464891 systemd[1]: var-lib-containers-storage-overlay-bb6453b077f200c623eced5a128c4a8cdf132bd0598d186474730d8ab4c091de-merged.mount: Deactivated successfully.
Oct  1 12:35:12 np0005464891 podman[260132]: 2025-10-01 16:35:12.943160212 +0000 UTC m=+0.098424595 container cleanup dc9e95cdcf507d39428c807d13e274fb2c0edf370080d67eb9e8e0a4fffaf676 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:35:12 np0005464891 systemd[1]: libpod-conmon-dc9e95cdcf507d39428c807d13e274fb2c0edf370080d67eb9e8e0a4fffaf676.scope: Deactivated successfully.
Oct  1 12:35:12 np0005464891 podman[260133]: 2025-10-01 16:35:12.964833789 +0000 UTC m=+0.101191383 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:35:13 np0005464891 systemd[1]: session-51.scope: Deactivated successfully.
Oct  1 12:35:13 np0005464891 systemd[1]: session-51.scope: Consumed 3min 5.725s CPU time.
Oct  1 12:35:13 np0005464891 systemd-logind[801]: Session 51 logged out. Waiting for processes to exit.
Oct  1 12:35:13 np0005464891 systemd-logind[801]: Removed session 51.
Oct  1 12:35:13 np0005464891 nova_compute[259907]: 2025-10-01 16:35:13.640 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  1 12:35:13 np0005464891 nova_compute[259907]: 2025-10-01 16:35:13.640 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  1 12:35:13 np0005464891 nova_compute[259907]: 2025-10-01 16:35:13.640 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  1 12:35:13 np0005464891 nova_compute[259907]: 2025-10-01 16:35:13.641 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct  1 12:35:13 np0005464891 nova_compute[259907]: 2025-10-01 16:35:13.769 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:35:13 np0005464891 nova_compute[259907]: 2025-10-01 16:35:13.794 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.269 2 INFO nova.virt.driver [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct  1 12:35:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.395 2 INFO nova.compute.provider_config [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.432 2 DEBUG oslo_concurrency.lockutils [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.432 2 DEBUG oslo_concurrency.lockutils [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.433 2 DEBUG oslo_concurrency.lockutils [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.433 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.434 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.434 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.435 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.436 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.436 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.436 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.437 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.437 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.437 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.438 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.438 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.438 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.439 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.439 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.439 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.440 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.440 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.441 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.441 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.442 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.442 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.443 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.443 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.443 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.444 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.444 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.444 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.445 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.445 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.445 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.445 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.446 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.446 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.446 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.447 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.447 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.447 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.448 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.448 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.448 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.448 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.449 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.449 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.449 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.450 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.450 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.450 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.451 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.451 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.451 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.452 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.452 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.452 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.452 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.453 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.453 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.453 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.454 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.454 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.454 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.454 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.455 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.455 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.455 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.456 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.456 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.456 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.456 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.457 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.457 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.457 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.458 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.458 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.458 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.458 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.459 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.459 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.459 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.460 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.460 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.460 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.460 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.461 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.461 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.461 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.462 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.462 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.462 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.463 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.463 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.463 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.463 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.464 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.464 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.464 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.465 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.465 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.465 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.465 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.466 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.466 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.466 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.467 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.467 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.467 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.467 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.468 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.468 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.468 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.469 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.469 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.469 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.469 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.470 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.470 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.470 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.471 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.471 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.471 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.471 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.472 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.472 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.472 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.473 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.473 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.473 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.473 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.473 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.474 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.474 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.474 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.474 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.474 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.475 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.475 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.475 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.475 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.475 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.476 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.476 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.476 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.476 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.476 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.476 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.477 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.477 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.477 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.477 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.478 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.478 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.478 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.478 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.478 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.479 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.479 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.479 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.479 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.479 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.480 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.480 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.480 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.480 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.480 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.481 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.481 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.481 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.481 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.481 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.482 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.482 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.482 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.482 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.483 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.483 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.483 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.483 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.483 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.484 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.484 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.484 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.484 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.484 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.485 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.485 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.485 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.485 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.485 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.486 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.486 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.486 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.486 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.486 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.486 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.487 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.487 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.487 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.487 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.487 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.488 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.488 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.488 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.488 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.488 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.489 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.489 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.489 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.489 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.489 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.490 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.490 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.490 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.490 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.491 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.491 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.491 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.491 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.491 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.492 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.492 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.492 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.492 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.492 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.493 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.493 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.493 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.493 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.493 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.494 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.494 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.494 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.494 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.494 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.495 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.495 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.495 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.495 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.495 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.496 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.496 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.496 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.496 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.496 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.497 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.497 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.497 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.497 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.497 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.498 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.498 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.498 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.498 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.498 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.499 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.499 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.499 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.499 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.499 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.500 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.500 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.500 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.500 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.500 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.500 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.501 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.501 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.501 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.501 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.501 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.502 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.502 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.502 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.502 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.502 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.503 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.503 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.503 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.503 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.503 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.504 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.504 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.504 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.504 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.504 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.504 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.505 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.505 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.505 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.505 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.505 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.506 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.506 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.506 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.506 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.506 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.507 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.507 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.507 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.507 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.507 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.507 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.508 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.508 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.508 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.508 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.508 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.509 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.509 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.509 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.509 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.509 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.510 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.510 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.510 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.510 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.510 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.511 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.511 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.511 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.511 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.511 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.511 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.512 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.512 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.512 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.512 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.512 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.513 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.513 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.513 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.513 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.513 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.514 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.514 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.514 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.514 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.514 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.514 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.515 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.515 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.515 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.515 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.515 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.516 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.516 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.516 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.516 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.516 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.517 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.517 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.517 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.517 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.517 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.517 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.518 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.518 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.518 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.518 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.519 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.519 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.519 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.519 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.519 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.520 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.520 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.520 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.520 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.520 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.520 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.521 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.521 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.521 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.521 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.521 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.522 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.522 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.522 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.522 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.522 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.522 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.523 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.523 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.523 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.523 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.523 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.524 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.524 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.524 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.524 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.524 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.525 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.525 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.525 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.525 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.525 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.526 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.526 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.526 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.526 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.526 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.526 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.527 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.527 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.527 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.527 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.527 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.527 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.527 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.528 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.528 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.528 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.528 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.528 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.528 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.528 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.529 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.529 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.529 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.529 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.529 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.529 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.529 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.530 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.530 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.530 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.530 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.530 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.530 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.531 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.531 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.531 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.531 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.531 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.531 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.531 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.532 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.532 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.532 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.532 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.532 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.532 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.532 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.533 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.533 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.533 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.533 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.533 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.533 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.533 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.534 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.534 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.534 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.534 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.534 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.534 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.534 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.535 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.535 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.535 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.535 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.535 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.535 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.535 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.536 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.536 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.536 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.536 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.536 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.536 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.536 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.537 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.537 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.537 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.537 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.537 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.537 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.537 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.538 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.538 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.538 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.538 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.538 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.538 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.538 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.539 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.539 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.539 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.539 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.539 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.539 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.540 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.540 2 WARNING oslo_config.cfg [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct  1 12:35:14 np0005464891 nova_compute[259907]: live_migration_uri is deprecated for removal in favor of two other options that
Oct  1 12:35:14 np0005464891 nova_compute[259907]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct  1 12:35:14 np0005464891 nova_compute[259907]: and ``live_migration_inbound_addr`` respectively.
Oct  1 12:35:14 np0005464891 nova_compute[259907]: ).  Its value may be silently ignored in the future.#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.540 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.540 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.540 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.540 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.541 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.541 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.541 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.541 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.541 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.541 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.541 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.542 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.542 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.542 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.542 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.542 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.542 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.542 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.543 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.rbd_secret_uuid        = 6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.543 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.543 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.543 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.543 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.543 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.543 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.544 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.544 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.544 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.544 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.544 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.544 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.544 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.545 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.545 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.545 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.545 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.545 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.545 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.545 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.546 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.546 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.546 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.546 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.546 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.546 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.546 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.547 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.547 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.547 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.547 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.547 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.547 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.547 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.548 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.548 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.548 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.548 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.548 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.548 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.548 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.549 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.549 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.549 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.549 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.549 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.549 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.549 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.550 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.550 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.550 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.550 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.550 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.550 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.550 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.551 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.551 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.551 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.551 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.551 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.551 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.551 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.552 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.552 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.552 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.552 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.552 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.552 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.552 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.553 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.553 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.553 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.553 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.553 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.553 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.553 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.554 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.554 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.554 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.554 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.554 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.554 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.554 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.554 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.555 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.555 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.555 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.555 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.555 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.555 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.555 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.556 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.556 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.556 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.556 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.556 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.556 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.557 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.557 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.557 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.557 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.557 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.557 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.557 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.558 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.558 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.558 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.558 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.558 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.558 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.559 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.559 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.559 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.559 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.559 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.559 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.559 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.560 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.560 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.560 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.560 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.560 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.560 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.561 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.561 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.561 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.561 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.561 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.561 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.561 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.562 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.562 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.562 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.562 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.562 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.562 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.563 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.563 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.563 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.563 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.563 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.563 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.563 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.564 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.564 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.564 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.564 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.564 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.564 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.564 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.565 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.565 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.565 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.565 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.565 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.565 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.565 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.566 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.566 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.566 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.566 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.566 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.566 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.566 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.567 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.567 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.567 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.567 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.567 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.567 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.567 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.568 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.568 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.568 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.568 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.568 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.568 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.569 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.569 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.569 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.569 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.569 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.569 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.569 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.570 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.570 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.570 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.570 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.570 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.570 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.570 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.570 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.571 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.571 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.571 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.571 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.571 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.571 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.571 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.571 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.572 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.572 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.572 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.572 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.572 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.572 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.572 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.573 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.573 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.573 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.573 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.573 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.573 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.573 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.573 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.574 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.574 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.574 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.574 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.574 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.574 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.574 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.575 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.575 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.575 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.575 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.575 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.575 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.575 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.576 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.576 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.576 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.576 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.576 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.576 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.576 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.577 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.577 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.577 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.577 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.577 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.577 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.577 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.578 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.578 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.578 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.578 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.578 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.578 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.578 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.579 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.579 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.579 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.579 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.579 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.579 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.579 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.579 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.580 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.580 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.580 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.580 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.580 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.580 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.580 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.581 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.581 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.581 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.581 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.581 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.581 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.581 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.582 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.582 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.582 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.582 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.582 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.582 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.582 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.583 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.583 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.583 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.583 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.583 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.583 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.583 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.584 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.584 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.584 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.584 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.584 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.584 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.584 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.585 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.585 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.585 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.585 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.585 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.585 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.585 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.586 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.586 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.586 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.586 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.586 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.586 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.586 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.587 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.587 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.587 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.587 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.587 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.587 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.587 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.587 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.588 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.588 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.588 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.588 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.588 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.588 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.588 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.589 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.589 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.589 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.589 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.589 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.589 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.589 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.590 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.590 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.590 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.590 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.590 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.590 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.590 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.590 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.591 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.591 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.591 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.591 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.591 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.591 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.591 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.591 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.592 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.592 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.592 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.592 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.592 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.592 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.592 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.593 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.593 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.593 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.593 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.593 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.593 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.593 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.593 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.594 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.594 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.594 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.594 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.594 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.594 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.594 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.595 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.595 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.595 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.595 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.595 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.595 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.595 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.595 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.596 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.596 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.596 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.596 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.596 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.596 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.596 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.597 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.597 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.597 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.597 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.597 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.597 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.597 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.597 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.598 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.598 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.598 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.598 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.598 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.598 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.599 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.599 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.599 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.599 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.599 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.599 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.600 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.600 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.600 2 DEBUG oslo_service.service [None req-d1fc4b75-926f-4d9d-a2fb-a2502d879137 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.601 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.618 2 DEBUG nova.virt.libvirt.host [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.618 2 DEBUG nova.virt.libvirt.host [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.619 2 DEBUG nova.virt.libvirt.host [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.619 2 DEBUG nova.virt.libvirt.host [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.635 2 DEBUG nova.virt.libvirt.host [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fadb3802eb0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.639 2 DEBUG nova.virt.libvirt.host [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fadb3802eb0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.640 2 INFO nova.virt.libvirt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Connection event '1' reason 'None'#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.648 2 INFO nova.virt.libvirt.host [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Libvirt host capabilities <capabilities>
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <host>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <uuid>9659e747-1637-4bf9-8b69-aeb4fd4304e0</uuid>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <cpu>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <arch>x86_64</arch>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model>EPYC-Rome-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <vendor>AMD</vendor>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <microcode version='16777317'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <signature family='23' model='49' stepping='0'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <maxphysaddr mode='emulate' bits='40'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature name='x2apic'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature name='tsc-deadline'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature name='osxsave'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature name='hypervisor'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature name='tsc_adjust'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature name='spec-ctrl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature name='stibp'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature name='arch-capabilities'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature name='ssbd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature name='cmp_legacy'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature name='topoext'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature name='virt-ssbd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature name='lbrv'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature name='tsc-scale'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature name='vmcb-clean'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature name='pause-filter'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature name='pfthreshold'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature name='svme-addr-chk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature name='rdctl-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature name='skip-l1dfl-vmentry'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature name='mds-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature name='pschange-mc-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <pages unit='KiB' size='4'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <pages unit='KiB' size='2048'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <pages unit='KiB' size='1048576'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </cpu>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <power_management>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <suspend_mem/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </power_management>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <iommu support='no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <migration_features>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <live/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <uri_transports>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <uri_transport>tcp</uri_transport>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <uri_transport>rdma</uri_transport>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </uri_transports>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </migration_features>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <topology>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <cells num='1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <cell id='0'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:          <memory unit='KiB'>7864116</memory>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:          <pages unit='KiB' size='4'>1966029</pages>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:          <pages unit='KiB' size='2048'>0</pages>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:          <pages unit='KiB' size='1048576'>0</pages>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:          <distances>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:            <sibling id='0' value='10'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:          </distances>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:          <cpus num='8'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:          </cpus>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        </cell>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </cells>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </topology>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <cache>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </cache>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <secmodel>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model>selinux</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <doi>0</doi>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </secmodel>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <secmodel>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model>dac</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <doi>0</doi>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </secmodel>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  </host>
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <guest>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <os_type>hvm</os_type>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <arch name='i686'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <wordsize>32</wordsize>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <domain type='qemu'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <domain type='kvm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </arch>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <features>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <pae/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <nonpae/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <acpi default='on' toggle='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <apic default='on' toggle='no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <cpuselection/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <deviceboot/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <disksnapshot default='on' toggle='no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <externalSnapshot/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </features>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  </guest>
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <guest>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <os_type>hvm</os_type>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <arch name='x86_64'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <wordsize>64</wordsize>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <domain type='qemu'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <domain type='kvm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </arch>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <features>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <acpi default='on' toggle='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <apic default='on' toggle='no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <cpuselection/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <deviceboot/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <disksnapshot default='on' toggle='no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <externalSnapshot/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </features>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  </guest>
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 
Oct  1 12:35:14 np0005464891 nova_compute[259907]: </capabilities>
Oct  1 12:35:14 np0005464891 nova_compute[259907]: #033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.657 2 DEBUG nova.virt.libvirt.host [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.663 2 DEBUG nova.virt.libvirt.host [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct  1 12:35:14 np0005464891 nova_compute[259907]: <domainCapabilities>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <path>/usr/libexec/qemu-kvm</path>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <domain>kvm</domain>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <arch>i686</arch>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <vcpu max='4096'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <iothreads supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <os supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <enum name='firmware'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <loader supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='type'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>rom</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>pflash</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='readonly'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>yes</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>no</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='secure'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>no</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </loader>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <cpu>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <mode name='host-passthrough' supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='hostPassthroughMigratable'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>on</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>off</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </mode>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <mode name='maximum' supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='maximumMigratable'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>on</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>off</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </mode>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <mode name='host-model' supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <vendor>AMD</vendor>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='x2apic'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='tsc-deadline'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='hypervisor'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='tsc_adjust'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='spec-ctrl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='stibp'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='arch-capabilities'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='ssbd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='cmp_legacy'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='overflow-recov'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='succor'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='ibrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='amd-ssbd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='virt-ssbd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='lbrv'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='tsc-scale'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='vmcb-clean'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='flushbyasid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='pause-filter'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='pfthreshold'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='svme-addr-chk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='rdctl-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='mds-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='pschange-mc-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='gds-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='rfds-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='disable' name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </mode>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <mode name='custom' supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-noTSX'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server-v5'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cooperlake'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cooperlake-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cooperlake-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Denverton'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mpx'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Denverton-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mpx'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Denverton-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Denverton-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Dhyana-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Genoa'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amd-psfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='auto-ibrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='no-nested-data-bp'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='null-sel-clr-base'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='stibp-always-on'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Genoa-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amd-psfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='auto-ibrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='no-nested-data-bp'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='null-sel-clr-base'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='stibp-always-on'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Milan'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Milan-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Milan-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amd-psfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='no-nested-data-bp'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='null-sel-clr-base'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='stibp-always-on'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Rome'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Rome-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Rome-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Rome-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='GraniteRapids'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mcdt-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pbrsb-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='prefetchiti'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='GraniteRapids-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mcdt-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pbrsb-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='prefetchiti'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='GraniteRapids-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx10'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx10-128'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx10-256'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx10-512'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mcdt-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pbrsb-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='prefetchiti'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-noTSX'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-noTSX'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v5'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v6'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v7'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='IvyBridge'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='IvyBridge-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='IvyBridge-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='IvyBridge-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='KnightsMill'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-4fmaps'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-4vnniw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512er'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512pf'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='KnightsMill-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-4fmaps'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-4vnniw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512er'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512pf'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Opteron_G4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fma4'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xop'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Opteron_G4-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fma4'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xop'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Opteron_G5'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fma4'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tbm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xop'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Opteron_G5-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fma4'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tbm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xop'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='SapphireRapids'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='SapphireRapids-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='SapphireRapids-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='SapphireRapids-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='SierraForest'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-ne-convert'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cmpccxadd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mcdt-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pbrsb-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='SierraForest-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-ne-convert'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cmpccxadd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mcdt-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pbrsb-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-v5'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Snowridge'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='core-capability'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mpx'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='split-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Snowridge-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='core-capability'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mpx'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='split-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Snowridge-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='core-capability'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='split-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Snowridge-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='core-capability'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='split-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Snowridge-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='athlon'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnow'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnowext'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='athlon-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnow'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnowext'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='core2duo'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='core2duo-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='coreduo'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='coreduo-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='n270'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='n270-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='phenom'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnow'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnowext'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='phenom-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnow'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnowext'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </mode>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <memoryBacking supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <enum name='sourceType'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <value>file</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <value>anonymous</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <value>memfd</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  </memoryBacking>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <disk supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='diskDevice'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>disk</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>cdrom</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>floppy</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>lun</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='bus'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>fdc</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>scsi</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>usb</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>sata</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='model'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio-transitional</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio-non-transitional</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <graphics supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='type'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>vnc</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>egl-headless</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>dbus</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </graphics>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <video supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='modelType'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>vga</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>cirrus</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>none</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>bochs</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>ramfb</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <hostdev supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='mode'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>subsystem</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='startupPolicy'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>default</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>mandatory</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>requisite</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>optional</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='subsysType'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>usb</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>pci</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>scsi</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='capsType'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='pciBackend'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </hostdev>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <rng supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='model'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio-transitional</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio-non-transitional</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='backendModel'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>random</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>egd</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>builtin</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <filesystem supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='driverType'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>path</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>handle</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtiofs</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </filesystem>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <tpm supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='model'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>tpm-tis</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>tpm-crb</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='backendModel'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>emulator</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>external</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='backendVersion'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>2.0</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </tpm>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <redirdev supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='bus'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>usb</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </redirdev>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <channel supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='type'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>pty</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>unix</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </channel>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <crypto supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='model'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='type'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>qemu</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='backendModel'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>builtin</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </crypto>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <interface supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='backendType'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>default</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>passt</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <panic supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='model'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>isa</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>hyperv</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </panic>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <gic supported='no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <vmcoreinfo supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <genid supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <backingStoreInput supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <backup supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <async-teardown supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <ps2 supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <sev supported='no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <sgx supported='no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <hyperv supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='features'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>relaxed</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>vapic</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>spinlocks</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>vpindex</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>runtime</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>synic</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>stimer</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>reset</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>vendor_id</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>frequencies</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>reenlightenment</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>tlbflush</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>ipi</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>avic</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>emsr_bitmap</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>xmm_input</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </hyperv>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <launchSecurity supported='no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:35:14 np0005464891 nova_compute[259907]: </domainCapabilities>
Oct  1 12:35:14 np0005464891 nova_compute[259907]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.672 2 DEBUG nova.virt.libvirt.host [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct  1 12:35:14 np0005464891 nova_compute[259907]: <domainCapabilities>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <path>/usr/libexec/qemu-kvm</path>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <domain>kvm</domain>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <arch>i686</arch>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <vcpu max='240'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <iothreads supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <os supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <enum name='firmware'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <loader supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='type'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>rom</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>pflash</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='readonly'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>yes</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>no</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='secure'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>no</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </loader>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <cpu>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <mode name='host-passthrough' supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='hostPassthroughMigratable'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>on</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>off</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </mode>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <mode name='maximum' supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='maximumMigratable'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>on</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>off</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </mode>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <mode name='host-model' supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <vendor>AMD</vendor>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='x2apic'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='tsc-deadline'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='hypervisor'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='tsc_adjust'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='spec-ctrl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='stibp'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='arch-capabilities'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='ssbd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='cmp_legacy'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='overflow-recov'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='succor'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='ibrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='amd-ssbd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='virt-ssbd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='lbrv'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='tsc-scale'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='vmcb-clean'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='flushbyasid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='pause-filter'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='pfthreshold'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='svme-addr-chk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='rdctl-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='mds-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='pschange-mc-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='gds-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='rfds-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='disable' name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </mode>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <mode name='custom' supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-noTSX'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server-v5'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cooperlake'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cooperlake-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cooperlake-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Denverton'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mpx'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Denverton-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mpx'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Denverton-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Denverton-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Dhyana-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Genoa'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amd-psfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='auto-ibrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='no-nested-data-bp'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='null-sel-clr-base'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='stibp-always-on'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Genoa-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amd-psfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='auto-ibrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='no-nested-data-bp'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='null-sel-clr-base'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='stibp-always-on'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Milan'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Milan-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Milan-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amd-psfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='no-nested-data-bp'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='null-sel-clr-base'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='stibp-always-on'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Rome'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Rome-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Rome-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Rome-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='GraniteRapids'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mcdt-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pbrsb-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='prefetchiti'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='GraniteRapids-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mcdt-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pbrsb-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='prefetchiti'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='GraniteRapids-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx10'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx10-128'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx10-256'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx10-512'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mcdt-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pbrsb-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='prefetchiti'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-noTSX'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-noTSX'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v5'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v6'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v7'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='IvyBridge'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='IvyBridge-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='IvyBridge-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='IvyBridge-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='KnightsMill'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-4fmaps'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-4vnniw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512er'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512pf'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='KnightsMill-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-4fmaps'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-4vnniw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512er'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512pf'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Opteron_G4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fma4'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xop'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Opteron_G4-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fma4'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xop'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Opteron_G5'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fma4'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tbm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xop'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Opteron_G5-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fma4'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tbm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xop'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='SapphireRapids'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='SapphireRapids-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='SapphireRapids-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='SapphireRapids-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='SierraForest'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-ne-convert'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cmpccxadd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mcdt-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pbrsb-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='SierraForest-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-ne-convert'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cmpccxadd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mcdt-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pbrsb-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-v5'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Snowridge'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='core-capability'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mpx'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='split-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Snowridge-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='core-capability'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mpx'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='split-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Snowridge-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='core-capability'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='split-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Snowridge-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='core-capability'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='split-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Snowridge-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='athlon'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnow'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnowext'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='athlon-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnow'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnowext'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='core2duo'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='core2duo-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='coreduo'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='coreduo-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='n270'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='n270-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='phenom'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnow'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnowext'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='phenom-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnow'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnowext'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </mode>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <memoryBacking supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <enum name='sourceType'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <value>file</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <value>anonymous</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <value>memfd</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  </memoryBacking>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <disk supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='diskDevice'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>disk</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>cdrom</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>floppy</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>lun</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='bus'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>ide</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>fdc</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>scsi</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>usb</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>sata</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='model'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio-transitional</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio-non-transitional</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <graphics supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='type'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>vnc</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>egl-headless</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>dbus</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </graphics>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <video supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='modelType'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>vga</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>cirrus</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>none</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>bochs</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>ramfb</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <hostdev supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='mode'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>subsystem</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='startupPolicy'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>default</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>mandatory</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>requisite</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>optional</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='subsysType'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>usb</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>pci</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>scsi</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='capsType'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='pciBackend'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </hostdev>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <rng supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='model'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio-transitional</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio-non-transitional</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='backendModel'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>random</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>egd</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>builtin</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <filesystem supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='driverType'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>path</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>handle</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtiofs</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </filesystem>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <tpm supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='model'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>tpm-tis</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>tpm-crb</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='backendModel'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>emulator</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>external</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='backendVersion'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>2.0</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </tpm>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <redirdev supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='bus'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>usb</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </redirdev>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <channel supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='type'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>pty</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>unix</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </channel>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <crypto supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='model'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='type'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>qemu</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='backendModel'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>builtin</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </crypto>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <interface supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='backendType'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>default</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>passt</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <panic supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='model'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>isa</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>hyperv</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </panic>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <gic supported='no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <vmcoreinfo supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <genid supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <backingStoreInput supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <backup supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <async-teardown supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <ps2 supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <sev supported='no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <sgx supported='no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <hyperv supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='features'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>relaxed</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>vapic</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>spinlocks</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>vpindex</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>runtime</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>synic</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>stimer</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>reset</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>vendor_id</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>frequencies</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>reenlightenment</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>tlbflush</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>ipi</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>avic</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>emsr_bitmap</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>xmm_input</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </hyperv>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <launchSecurity supported='no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:35:14 np0005464891 nova_compute[259907]: </domainCapabilities>
Oct  1 12:35:14 np0005464891 nova_compute[259907]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.704 2 DEBUG nova.virt.libvirt.host [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.708 2 DEBUG nova.virt.libvirt.host [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct  1 12:35:14 np0005464891 nova_compute[259907]: <domainCapabilities>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <path>/usr/libexec/qemu-kvm</path>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <domain>kvm</domain>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <arch>x86_64</arch>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <vcpu max='4096'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <iothreads supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <os supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <enum name='firmware'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <value>efi</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <loader supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='type'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>rom</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>pflash</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='readonly'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>yes</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>no</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='secure'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>yes</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>no</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </loader>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <cpu>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <mode name='host-passthrough' supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='hostPassthroughMigratable'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>on</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>off</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </mode>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <mode name='maximum' supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='maximumMigratable'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>on</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>off</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </mode>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <mode name='host-model' supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <vendor>AMD</vendor>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='x2apic'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='tsc-deadline'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='hypervisor'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='tsc_adjust'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='spec-ctrl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='stibp'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='arch-capabilities'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='ssbd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='cmp_legacy'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='overflow-recov'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='succor'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='ibrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='amd-ssbd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='virt-ssbd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='lbrv'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='tsc-scale'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='vmcb-clean'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='flushbyasid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='pause-filter'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='pfthreshold'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='svme-addr-chk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='rdctl-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='mds-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='pschange-mc-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='gds-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='rfds-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='disable' name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </mode>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <mode name='custom' supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-noTSX'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server-v5'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cooperlake'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cooperlake-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cooperlake-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Denverton'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mpx'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Denverton-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mpx'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Denverton-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Denverton-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Dhyana-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Genoa'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amd-psfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='auto-ibrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='no-nested-data-bp'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='null-sel-clr-base'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='stibp-always-on'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Genoa-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amd-psfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='auto-ibrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='no-nested-data-bp'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='null-sel-clr-base'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='stibp-always-on'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Milan'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Milan-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Milan-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amd-psfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='no-nested-data-bp'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='null-sel-clr-base'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='stibp-always-on'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Rome'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Rome-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Rome-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Rome-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='GraniteRapids'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mcdt-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pbrsb-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='prefetchiti'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='GraniteRapids-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mcdt-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pbrsb-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='prefetchiti'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='GraniteRapids-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx10'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx10-128'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx10-256'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx10-512'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mcdt-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pbrsb-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='prefetchiti'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-noTSX'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-noTSX'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v5'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v6'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v7'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='IvyBridge'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='IvyBridge-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='IvyBridge-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='IvyBridge-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='KnightsMill'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-4fmaps'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-4vnniw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512er'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512pf'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='KnightsMill-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-4fmaps'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-4vnniw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512er'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512pf'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Opteron_G4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fma4'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xop'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Opteron_G4-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fma4'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xop'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Opteron_G5'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fma4'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tbm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xop'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Opteron_G5-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fma4'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tbm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xop'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='SapphireRapids'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='SapphireRapids-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='SapphireRapids-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='SapphireRapids-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='SierraForest'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-ne-convert'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cmpccxadd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mcdt-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pbrsb-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='SierraForest-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-ne-convert'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cmpccxadd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mcdt-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pbrsb-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-v5'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Snowridge'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='core-capability'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mpx'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='split-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Snowridge-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='core-capability'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mpx'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='split-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Snowridge-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='core-capability'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='split-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Snowridge-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='core-capability'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='split-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Snowridge-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='athlon'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnow'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnowext'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='athlon-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnow'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnowext'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='core2duo'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='core2duo-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='coreduo'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='coreduo-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='n270'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='n270-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='phenom'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnow'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnowext'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='phenom-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnow'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnowext'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </mode>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <memoryBacking supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <enum name='sourceType'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <value>file</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <value>anonymous</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <value>memfd</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  </memoryBacking>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <disk supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='diskDevice'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>disk</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>cdrom</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>floppy</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>lun</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='bus'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>fdc</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>scsi</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>usb</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>sata</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='model'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio-transitional</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio-non-transitional</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <graphics supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='type'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>vnc</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>egl-headless</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>dbus</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </graphics>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <video supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='modelType'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>vga</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>cirrus</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>none</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>bochs</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>ramfb</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <hostdev supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='mode'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>subsystem</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='startupPolicy'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>default</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>mandatory</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>requisite</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>optional</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='subsysType'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>usb</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>pci</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>scsi</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='capsType'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='pciBackend'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </hostdev>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <rng supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='model'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio-transitional</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio-non-transitional</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='backendModel'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>random</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>egd</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>builtin</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <filesystem supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='driverType'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>path</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>handle</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtiofs</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </filesystem>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <tpm supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='model'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>tpm-tis</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>tpm-crb</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='backendModel'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>emulator</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>external</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='backendVersion'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>2.0</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </tpm>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <redirdev supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='bus'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>usb</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </redirdev>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <channel supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='type'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>pty</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>unix</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </channel>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <crypto supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='model'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='type'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>qemu</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='backendModel'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>builtin</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </crypto>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <interface supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='backendType'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>default</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>passt</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <panic supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='model'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>isa</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>hyperv</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </panic>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <gic supported='no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <vmcoreinfo supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <genid supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <backingStoreInput supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <backup supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <async-teardown supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <ps2 supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <sev supported='no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <sgx supported='no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <hyperv supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='features'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>relaxed</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>vapic</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>spinlocks</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>vpindex</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>runtime</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>synic</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>stimer</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>reset</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>vendor_id</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>frequencies</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>reenlightenment</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>tlbflush</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>ipi</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>avic</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>emsr_bitmap</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>xmm_input</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </hyperv>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <launchSecurity supported='no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:35:14 np0005464891 nova_compute[259907]: </domainCapabilities>
Oct  1 12:35:14 np0005464891 nova_compute[259907]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
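[editor's note] The dump above is libvirt's `<domainCapabilities>` reply that nova-compute logs from `_get_domain_capabilities`. The structure of interest is the custom CPU mode: each `<model>` carries a `usable` flag, and a sibling `<blockers model='...'>` element lists the host-missing features that make it unusable. As a minimal sketch (not nova's actual code), this relationship can be extracted with the standard library; the XML below is an abridged sample of the log output above:

```python
# Hypothetical sketch: parse a libvirt <domainCapabilities> fragment and map
# each custom-mode CPU model to its usability and any blocking features.
# CAPS_XML is abridged from the logged capabilities dump above.
import xml.etree.ElementTree as ET

CAPS_XML = """
<domainCapabilities>
  <cpu>
    <mode name='custom' supported='yes'>
      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
      <model usable='no' vendor='Intel'>Snowridge-v4</model>
      <blockers model='Snowridge-v4'>
        <feature name='cldemote'/>
        <feature name='movdir64b'/>
      </blockers>
    </mode>
  </cpu>
</domainCapabilities>
"""

def cpu_model_report(caps_xml: str) -> dict:
    """Map each custom-mode CPU model name to its usability and blockers."""
    root = ET.fromstring(caps_xml)
    mode = root.find(".//cpu/mode[@name='custom']")
    # Index <blockers> elements by the model they describe.
    blockers = {
        b.get("model"): [f.get("name") for f in b.findall("feature")]
        for b in mode.findall("blockers")
    }
    return {
        m.text: {
            "usable": m.get("usable") == "yes",
            "blockers": blockers.get(m.text, []),
        }
        for m in mode.findall("model")
    }

report = cpu_model_report(CAPS_XML)
print(report["Westmere"])      # usable, no blockers
print(report["Snowridge-v4"])  # blocked by features missing on this host
```

In the live system this XML comes from `virConnect.getDomainCapabilities()`; nova uses it to decide which `cpu_model` values in `nova.conf` are valid for this EPYC-Rome host, which is why models like Snowridge-v4 above are reported `usable='no'` with Intel-only features as blockers.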
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.785 2 WARNING nova.virt.libvirt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.793 2 DEBUG nova.virt.libvirt.volume.mount [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.798 2 DEBUG nova.virt.libvirt.host [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct  1 12:35:14 np0005464891 nova_compute[259907]: <domainCapabilities>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <path>/usr/libexec/qemu-kvm</path>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <domain>kvm</domain>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <arch>x86_64</arch>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <vcpu max='240'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <iothreads supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <os supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <enum name='firmware'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <loader supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='type'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>rom</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>pflash</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='readonly'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>yes</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>no</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='secure'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>no</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </loader>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <cpu>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <mode name='host-passthrough' supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='hostPassthroughMigratable'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>on</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>off</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </mode>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <mode name='maximum' supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='maximumMigratable'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>on</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>off</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </mode>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <mode name='host-model' supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <vendor>AMD</vendor>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='x2apic'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='tsc-deadline'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='hypervisor'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='tsc_adjust'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='spec-ctrl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='stibp'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='arch-capabilities'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='ssbd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='cmp_legacy'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='overflow-recov'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='succor'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='ibrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='amd-ssbd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='virt-ssbd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='lbrv'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='tsc-scale'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='vmcb-clean'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='flushbyasid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='pause-filter'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='pfthreshold'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='svme-addr-chk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='rdctl-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='mds-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='pschange-mc-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='gds-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='require' name='rfds-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <feature policy='disable' name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </mode>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <mode name='custom' supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-noTSX'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Broadwell-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cascadelake-Server-v5'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cooperlake'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cooperlake-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Cooperlake-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Denverton'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mpx'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Denverton-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mpx'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Denverton-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Denverton-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Dhyana-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Genoa'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amd-psfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='auto-ibrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='no-nested-data-bp'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='null-sel-clr-base'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='stibp-always-on'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Genoa-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amd-psfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='auto-ibrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='no-nested-data-bp'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='null-sel-clr-base'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='stibp-always-on'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Milan'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Milan-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Milan-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amd-psfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='no-nested-data-bp'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='null-sel-clr-base'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='stibp-always-on'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Rome'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Rome-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Rome-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-Rome-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='EPYC-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='GraniteRapids'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mcdt-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pbrsb-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='prefetchiti'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='GraniteRapids-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mcdt-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pbrsb-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='prefetchiti'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='GraniteRapids-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx10'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx10-128'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx10-256'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx10-512'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mcdt-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pbrsb-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='prefetchiti'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-noTSX'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Haswell-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-noTSX'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v5'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v6'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Icelake-Server-v7'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='IvyBridge'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='IvyBridge-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='IvyBridge-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='IvyBridge-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='KnightsMill'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-4fmaps'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-4vnniw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512er'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512pf'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='KnightsMill-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-4fmaps'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-4vnniw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512er'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512pf'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Opteron_G4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fma4'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xop'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Opteron_G4-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fma4'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xop'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Opteron_G5'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fma4'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tbm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xop'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Opteron_G5-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fma4'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tbm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xop'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='SapphireRapids'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='SapphireRapids-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='SapphireRapids-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='SapphireRapids-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='amx-tile'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-bf16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-fp16'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512-vpopcntdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bitalg'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vbmi2'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrc'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fzrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='la57'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='taa-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='tsx-ldtrk'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xfd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='SierraForest'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-ne-convert'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cmpccxadd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mcdt-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pbrsb-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='SierraForest-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-ifma'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-ne-convert'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx-vnni-int8'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='bus-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cmpccxadd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fbsdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='fsrs'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ibrs-all'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mcdt-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pbrsb-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='psdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='sbdr-ssdp-no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='serialize'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vaes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='vpclmulqdq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Client-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='hle'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='rtm'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Skylake-Server-v5'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512bw'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512cd'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512dq'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512f'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='avx512vl'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='invpcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pcid'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='pku'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Snowridge'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='core-capability'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mpx'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='split-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Snowridge-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='core-capability'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='mpx'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='split-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Snowridge-v2'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='core-capability'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='split-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Snowridge-v3'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='core-capability'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='split-lock-detect'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='Snowridge-v4'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='cldemote'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='erms'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='gfni'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdir64b'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='movdiri'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='xsaves'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='athlon'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnow'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnowext'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='athlon-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnow'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnowext'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='core2duo'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='core2duo-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='coreduo'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='coreduo-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='n270'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='n270-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='ss'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='phenom'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnow'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnowext'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <blockers model='phenom-v1'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnow'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <feature name='3dnowext'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </blockers>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </mode>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <memoryBacking supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <enum name='sourceType'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <value>file</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <value>anonymous</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <value>memfd</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  </memoryBacking>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <disk supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='diskDevice'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>disk</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>cdrom</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>floppy</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>lun</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='bus'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>ide</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>fdc</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>scsi</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>usb</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>sata</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='model'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio-transitional</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio-non-transitional</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <graphics supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='type'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>vnc</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>egl-headless</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>dbus</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </graphics>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <video supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='modelType'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>vga</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>cirrus</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>none</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>bochs</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>ramfb</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <hostdev supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='mode'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>subsystem</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='startupPolicy'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>default</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>mandatory</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>requisite</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>optional</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='subsysType'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>usb</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>pci</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>scsi</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='capsType'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='pciBackend'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </hostdev>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <rng supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='model'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio-transitional</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtio-non-transitional</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='backendModel'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>random</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>egd</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>builtin</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <filesystem supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='driverType'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>path</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>handle</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>virtiofs</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </filesystem>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <tpm supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='model'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>tpm-tis</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>tpm-crb</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='backendModel'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>emulator</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>external</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='backendVersion'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>2.0</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </tpm>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <redirdev supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='bus'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>usb</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </redirdev>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <channel supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='type'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>pty</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>unix</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </channel>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <crypto supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='model'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='type'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>qemu</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='backendModel'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>builtin</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </crypto>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <interface supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='backendType'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>default</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>passt</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <panic supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='model'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>isa</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>hyperv</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </panic>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <gic supported='no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <vmcoreinfo supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <genid supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <backingStoreInput supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <backup supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <async-teardown supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <ps2 supported='yes'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <sev supported='no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <sgx supported='no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <hyperv supported='yes'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      <enum name='features'>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>relaxed</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>vapic</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>spinlocks</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>vpindex</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>runtime</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>synic</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>stimer</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>reset</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>vendor_id</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>frequencies</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>reenlightenment</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>tlbflush</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>ipi</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>avic</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>emsr_bitmap</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:        <value>xmm_input</value>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:      </enum>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    </hyperv>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:    <launchSecurity supported='no'/>
Oct  1 12:35:14 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:35:14 np0005464891 nova_compute[259907]: </domainCapabilities>
Oct  1 12:35:14 np0005464891 nova_compute[259907]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.858 2 DEBUG nova.virt.libvirt.host [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.866 2 INFO nova.virt.libvirt.host [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Secure Boot support detected
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.868 2 INFO nova.virt.libvirt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.869 2 INFO nova.virt.libvirt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.878 2 DEBUG nova.virt.libvirt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.921 2 INFO nova.virt.node [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Determined node identity bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 from /var/lib/nova/compute_id
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.945 2 WARNING nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Compute nodes ['bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct  1 12:35:14 np0005464891 nova_compute[259907]: 2025-10-01 16:35:14.989 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct  1 12:35:15 np0005464891 nova_compute[259907]: 2025-10-01 16:35:15.020 2 WARNING nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct  1 12:35:15 np0005464891 nova_compute[259907]: 2025-10-01 16:35:15.021 2 DEBUG oslo_concurrency.lockutils [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:35:15 np0005464891 nova_compute[259907]: 2025-10-01 16:35:15.021 2 DEBUG oslo_concurrency.lockutils [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:35:15 np0005464891 nova_compute[259907]: 2025-10-01 16:35:15.021 2 DEBUG oslo_concurrency.lockutils [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:35:15 np0005464891 nova_compute[259907]: 2025-10-01 16:35:15.021 2 DEBUG nova.compute.resource_tracker [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  1 12:35:15 np0005464891 nova_compute[259907]: 2025-10-01 16:35:15.022 2 DEBUG oslo_concurrency.processutils [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:35:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:35:15 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2754486632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:35:15 np0005464891 nova_compute[259907]: 2025-10-01 16:35:15.441 2 DEBUG oslo_concurrency.processutils [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:35:15 np0005464891 systemd[1]: Starting libvirt nodedev daemon...
Oct  1 12:35:15 np0005464891 systemd[1]: Started libvirt nodedev daemon.
Oct  1 12:35:15 np0005464891 nova_compute[259907]: 2025-10-01 16:35:15.770 2 WARNING nova.virt.libvirt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:35:15 np0005464891 nova_compute[259907]: 2025-10-01 16:35:15.771 2 DEBUG nova.compute.resource_tracker [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5176MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 12:35:15 np0005464891 nova_compute[259907]: 2025-10-01 16:35:15.771 2 DEBUG oslo_concurrency.lockutils [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:35:15 np0005464891 nova_compute[259907]: 2025-10-01 16:35:15.771 2 DEBUG oslo_concurrency.lockutils [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:35:15 np0005464891 nova_compute[259907]: 2025-10-01 16:35:15.794 2 WARNING nova.compute.resource_tracker [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] No compute node record for compute-0.ctlplane.example.com:bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 could not be found.#033[00m
Oct  1 12:35:15 np0005464891 nova_compute[259907]: 2025-10-01 16:35:15.812 2 INFO nova.compute.resource_tracker [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8#033[00m
Oct  1 12:35:15 np0005464891 nova_compute[259907]: 2025-10-01 16:35:15.879 2 DEBUG nova.compute.resource_tracker [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 12:35:15 np0005464891 nova_compute[259907]: 2025-10-01 16:35:15.879 2 DEBUG nova.compute.resource_tracker [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 12:35:16 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 12:35:16 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5657 writes, 23K keys, 5657 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5657 writes, 879 syncs, 6.44 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s#012Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5605ab5131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5605ab5131f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct  1 12:35:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:16 np0005464891 nova_compute[259907]: 2025-10-01 16:35:16.734 2 INFO nova.scheduler.client.report [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [req-70881778-4034-40a1-9174-5381d9060f16] Created resource provider record via placement API for resource provider with UUID bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 and name compute-0.ctlplane.example.com.#033[00m
Oct  1 12:35:17 np0005464891 nova_compute[259907]: 2025-10-01 16:35:17.132 2 DEBUG oslo_concurrency.processutils [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:35:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:35:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3274732542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:35:17 np0005464891 nova_compute[259907]: 2025-10-01 16:35:17.634 2 DEBUG oslo_concurrency.processutils [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:35:17 np0005464891 nova_compute[259907]: 2025-10-01 16:35:17.641 2 DEBUG nova.virt.libvirt.host [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct  1 12:35:17 np0005464891 nova_compute[259907]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Oct  1 12:35:17 np0005464891 nova_compute[259907]: 2025-10-01 16:35:17.641 2 INFO nova.virt.libvirt.host [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] kernel doesn't support AMD SEV#033[00m
Oct  1 12:35:17 np0005464891 nova_compute[259907]: 2025-10-01 16:35:17.643 2 DEBUG nova.compute.provider_tree [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Updating inventory in ProviderTree for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  1 12:35:17 np0005464891 nova_compute[259907]: 2025-10-01 16:35:17.644 2 DEBUG nova.virt.libvirt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 12:35:17 np0005464891 nova_compute[259907]: 2025-10-01 16:35:17.728 2 DEBUG nova.scheduler.client.report [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Updated inventory for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct  1 12:35:17 np0005464891 nova_compute[259907]: 2025-10-01 16:35:17.729 2 DEBUG nova.compute.provider_tree [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Updating resource provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  1 12:35:17 np0005464891 nova_compute[259907]: 2025-10-01 16:35:17.729 2 DEBUG nova.compute.provider_tree [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Updating inventory in ProviderTree for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  1 12:35:17 np0005464891 nova_compute[259907]: 2025-10-01 16:35:17.879 2 DEBUG nova.compute.provider_tree [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Updating resource provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  1 12:35:17 np0005464891 nova_compute[259907]: 2025-10-01 16:35:17.924 2 DEBUG nova.compute.resource_tracker [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 12:35:17 np0005464891 nova_compute[259907]: 2025-10-01 16:35:17.924 2 DEBUG oslo_concurrency.lockutils [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:35:17 np0005464891 nova_compute[259907]: 2025-10-01 16:35:17.924 2 DEBUG nova.service [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Oct  1 12:35:18 np0005464891 nova_compute[259907]: 2025-10-01 16:35:18.060 2 DEBUG nova.service [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Oct  1 12:35:18 np0005464891 nova_compute[259907]: 2025-10-01 16:35:18.061 2 DEBUG nova.servicegroup.drivers.db [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Oct  1 12:35:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:35:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:21 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 12:35:21 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 6705 writes, 27K keys, 6705 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 6705 writes, 1226 syncs, 5.47 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a66b2091f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55a66b2091f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct  1 12:35:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:35:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:35:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:35:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:35:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:35:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:35:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:35:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:35:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:35:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:35:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:35:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:35:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:35:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:35:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:35:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:35:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:35:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:35:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:35:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:35:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:35:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:35:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:35:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:35:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:35:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:35:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:35:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:35:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:35:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:35:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:35:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:35:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:35:24 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 33c62061-9238-4612-8530-403e2076076d does not exist
Oct  1 12:35:24 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 0e0ae409-6a34-46dd-af4a-e21e8f5ccf4e does not exist
Oct  1 12:35:24 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 153c9a9a-84a1-4e37-934f-2808b1393517 does not exist
Oct  1 12:35:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:35:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:35:24 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:35:24 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:35:24 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:35:24 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:35:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:35:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:35:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:35:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:35:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:35:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:24 np0005464891 podman[260680]: 2025-10-01 16:35:24.716281739 +0000 UTC m=+0.035925227 container create cbea02737e5e0ffaa28eabe771a95d7e10eff0bc3eec9f685fbae5586dfb6c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:35:24 np0005464891 systemd[1]: Started libpod-conmon-cbea02737e5e0ffaa28eabe771a95d7e10eff0bc3eec9f685fbae5586dfb6c6c.scope.
Oct  1 12:35:24 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:35:24 np0005464891 podman[260680]: 2025-10-01 16:35:24.789081936 +0000 UTC m=+0.108725504 container init cbea02737e5e0ffaa28eabe771a95d7e10eff0bc3eec9f685fbae5586dfb6c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 12:35:24 np0005464891 podman[260680]: 2025-10-01 16:35:24.700017823 +0000 UTC m=+0.019661331 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:35:24 np0005464891 podman[260680]: 2025-10-01 16:35:24.796705479 +0000 UTC m=+0.116348967 container start cbea02737e5e0ffaa28eabe771a95d7e10eff0bc3eec9f685fbae5586dfb6c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:35:24 np0005464891 podman[260680]: 2025-10-01 16:35:24.800279319 +0000 UTC m=+0.119922897 container attach cbea02737e5e0ffaa28eabe771a95d7e10eff0bc3eec9f685fbae5586dfb6c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 12:35:24 np0005464891 pedantic_driscoll[260697]: 167 167
Oct  1 12:35:24 np0005464891 systemd[1]: libpod-cbea02737e5e0ffaa28eabe771a95d7e10eff0bc3eec9f685fbae5586dfb6c6c.scope: Deactivated successfully.
Oct  1 12:35:24 np0005464891 conmon[260697]: conmon cbea02737e5e0ffaa28e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cbea02737e5e0ffaa28eabe771a95d7e10eff0bc3eec9f685fbae5586dfb6c6c.scope/container/memory.events
Oct  1 12:35:24 np0005464891 podman[260680]: 2025-10-01 16:35:24.802176173 +0000 UTC m=+0.121819671 container died cbea02737e5e0ffaa28eabe771a95d7e10eff0bc3eec9f685fbae5586dfb6c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:35:24 np0005464891 systemd[1]: var-lib-containers-storage-overlay-72e998bc96c81b13baf47a1724c837b5e7988920cee7e2ba61141ccf1acda43f-merged.mount: Deactivated successfully.
Oct  1 12:35:24 np0005464891 podman[260680]: 2025-10-01 16:35:24.841980536 +0000 UTC m=+0.161624064 container remove cbea02737e5e0ffaa28eabe771a95d7e10eff0bc3eec9f685fbae5586dfb6c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_driscoll, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:35:24 np0005464891 systemd[1]: libpod-conmon-cbea02737e5e0ffaa28eabe771a95d7e10eff0bc3eec9f685fbae5586dfb6c6c.scope: Deactivated successfully.
Oct  1 12:35:24 np0005464891 podman[260694]: 2025-10-01 16:35:24.860225546 +0000 UTC m=+0.100660917 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  1 12:35:25 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:35:25 np0005464891 podman[260739]: 2025-10-01 16:35:25.081286453 +0000 UTC m=+0.062111229 container create 4c209ffa0e41edfa36226675c7cb173eca393632db34fa8228dd6f5e0a88543e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hertz, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 12:35:25 np0005464891 systemd[1]: Started libpod-conmon-4c209ffa0e41edfa36226675c7cb173eca393632db34fa8228dd6f5e0a88543e.scope.
Oct  1 12:35:25 np0005464891 podman[260739]: 2025-10-01 16:35:25.052091686 +0000 UTC m=+0.032916522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:35:25 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:35:25 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f1512ff20dd96da8293abd9db998cddeecc1bc908edd151bbf5d4c623d9218e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:25 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f1512ff20dd96da8293abd9db998cddeecc1bc908edd151bbf5d4c623d9218e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:25 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f1512ff20dd96da8293abd9db998cddeecc1bc908edd151bbf5d4c623d9218e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:25 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f1512ff20dd96da8293abd9db998cddeecc1bc908edd151bbf5d4c623d9218e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:25 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f1512ff20dd96da8293abd9db998cddeecc1bc908edd151bbf5d4c623d9218e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:25 np0005464891 podman[260739]: 2025-10-01 16:35:25.201025404 +0000 UTC m=+0.181850260 container init 4c209ffa0e41edfa36226675c7cb173eca393632db34fa8228dd6f5e0a88543e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:35:25 np0005464891 podman[260739]: 2025-10-01 16:35:25.21195723 +0000 UTC m=+0.192782006 container start 4c209ffa0e41edfa36226675c7cb173eca393632db34fa8228dd6f5e0a88543e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:35:25 np0005464891 podman[260739]: 2025-10-01 16:35:25.216546618 +0000 UTC m=+0.197371464 container attach 4c209ffa0e41edfa36226675c7cb173eca393632db34fa8228dd6f5e0a88543e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hertz, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 12:35:26 np0005464891 pensive_hertz[260755]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:35:26 np0005464891 pensive_hertz[260755]: --> relative data size: 1.0
Oct  1 12:35:26 np0005464891 pensive_hertz[260755]: --> All data devices are unavailable
Oct  1 12:35:26 np0005464891 systemd[1]: libpod-4c209ffa0e41edfa36226675c7cb173eca393632db34fa8228dd6f5e0a88543e.scope: Deactivated successfully.
Oct  1 12:35:26 np0005464891 podman[260739]: 2025-10-01 16:35:26.401735044 +0000 UTC m=+1.382559850 container died 4c209ffa0e41edfa36226675c7cb173eca393632db34fa8228dd6f5e0a88543e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hertz, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:35:26 np0005464891 systemd[1]: libpod-4c209ffa0e41edfa36226675c7cb173eca393632db34fa8228dd6f5e0a88543e.scope: Consumed 1.145s CPU time.
Oct  1 12:35:26 np0005464891 systemd[1]: var-lib-containers-storage-overlay-6f1512ff20dd96da8293abd9db998cddeecc1bc908edd151bbf5d4c623d9218e-merged.mount: Deactivated successfully.
Oct  1 12:35:26 np0005464891 podman[260739]: 2025-10-01 16:35:26.475717634 +0000 UTC m=+1.456542370 container remove 4c209ffa0e41edfa36226675c7cb173eca393632db34fa8228dd6f5e0a88543e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 12:35:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:26 np0005464891 systemd[1]: libpod-conmon-4c209ffa0e41edfa36226675c7cb173eca393632db34fa8228dd6f5e0a88543e.scope: Deactivated successfully.
Oct  1 12:35:27 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 12:35:27 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.3 total, 600.0 interval
Cumulative writes: 5660 writes, 23K keys, 5660 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5660 writes, 869 syncs, 6.51 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56404cdaf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56404cdaf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
Oct  1 12:35:27 np0005464891 podman[260938]: 2025-10-01 16:35:27.312610314 +0000 UTC m=+0.048010024 container create c287dae213a63b72ba654f9329ee78fbbc384a7aa4a93450bd86fc931059b7b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 12:35:27 np0005464891 systemd[1]: Started libpod-conmon-c287dae213a63b72ba654f9329ee78fbbc384a7aa4a93450bd86fc931059b7b5.scope.
Oct  1 12:35:27 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:35:27 np0005464891 podman[260938]: 2025-10-01 16:35:27.291588465 +0000 UTC m=+0.026988315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:35:27 np0005464891 podman[260938]: 2025-10-01 16:35:27.396016268 +0000 UTC m=+0.131415998 container init c287dae213a63b72ba654f9329ee78fbbc384a7aa4a93450bd86fc931059b7b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  1 12:35:27 np0005464891 podman[260938]: 2025-10-01 16:35:27.407956422 +0000 UTC m=+0.143356142 container start c287dae213a63b72ba654f9329ee78fbbc384a7aa4a93450bd86fc931059b7b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 12:35:27 np0005464891 podman[260938]: 2025-10-01 16:35:27.411729617 +0000 UTC m=+0.147129357 container attach c287dae213a63b72ba654f9329ee78fbbc384a7aa4a93450bd86fc931059b7b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pare, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:35:27 np0005464891 sad_pare[260954]: 167 167
Oct  1 12:35:27 np0005464891 systemd[1]: libpod-c287dae213a63b72ba654f9329ee78fbbc384a7aa4a93450bd86fc931059b7b5.scope: Deactivated successfully.
Oct  1 12:35:27 np0005464891 podman[260938]: 2025-10-01 16:35:27.414105594 +0000 UTC m=+0.149505344 container died c287dae213a63b72ba654f9329ee78fbbc384a7aa4a93450bd86fc931059b7b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 12:35:27 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ad9f95081f905cf322e63e357fb1ae2159da324c6f5a4d92f346acd1a1db8c15-merged.mount: Deactivated successfully.
Oct  1 12:35:27 np0005464891 podman[260938]: 2025-10-01 16:35:27.461202492 +0000 UTC m=+0.196602202 container remove c287dae213a63b72ba654f9329ee78fbbc384a7aa4a93450bd86fc931059b7b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:35:27 np0005464891 systemd[1]: libpod-conmon-c287dae213a63b72ba654f9329ee78fbbc384a7aa4a93450bd86fc931059b7b5.scope: Deactivated successfully.
Oct  1 12:35:27 np0005464891 podman[260976]: 2025-10-01 16:35:27.650743026 +0000 UTC m=+0.054840347 container create 1d18ba7edc5e5eb78cdb9990ac0523e1672b580c9ca926c2ea31c232b0102a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:35:27 np0005464891 systemd[1]: Started libpod-conmon-1d18ba7edc5e5eb78cdb9990ac0523e1672b580c9ca926c2ea31c232b0102a36.scope.
Oct  1 12:35:27 np0005464891 podman[260976]: 2025-10-01 16:35:27.619428419 +0000 UTC m=+0.023525819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:35:27 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:35:27 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aed2ede68f4dfa52191533f92a58715f3995cb86984705d25060768d60c5743/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:27 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aed2ede68f4dfa52191533f92a58715f3995cb86984705d25060768d60c5743/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:27 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aed2ede68f4dfa52191533f92a58715f3995cb86984705d25060768d60c5743/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:27 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aed2ede68f4dfa52191533f92a58715f3995cb86984705d25060768d60c5743/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:27 np0005464891 podman[260976]: 2025-10-01 16:35:27.77024037 +0000 UTC m=+0.174337730 container init 1d18ba7edc5e5eb78cdb9990ac0523e1672b580c9ca926c2ea31c232b0102a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galileo, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:35:27 np0005464891 podman[260976]: 2025-10-01 16:35:27.777558115 +0000 UTC m=+0.181655445 container start 1d18ba7edc5e5eb78cdb9990ac0523e1672b580c9ca926c2ea31c232b0102a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galileo, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 12:35:27 np0005464891 podman[260976]: 2025-10-01 16:35:27.788594884 +0000 UTC m=+0.192692234 container attach 1d18ba7edc5e5eb78cdb9990ac0523e1672b580c9ca926c2ea31c232b0102a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galileo, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  1 12:35:28 np0005464891 ceph-mgr[74592]: [devicehealth INFO root] Check health
Oct  1 12:35:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:28 np0005464891 musing_galileo[260992]: {
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:    "0": [
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:        {
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "devices": [
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "/dev/loop3"
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            ],
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "lv_name": "ceph_lv0",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "lv_size": "21470642176",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "name": "ceph_lv0",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "tags": {
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.cluster_name": "ceph",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.crush_device_class": "",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.encrypted": "0",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.osd_id": "0",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.type": "block",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.vdo": "0"
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            },
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "type": "block",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "vg_name": "ceph_vg0"
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:        }
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:    ],
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:    "1": [
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:        {
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "devices": [
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "/dev/loop4"
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            ],
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "lv_name": "ceph_lv1",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "lv_size": "21470642176",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "name": "ceph_lv1",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "tags": {
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.cluster_name": "ceph",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.crush_device_class": "",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.encrypted": "0",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.osd_id": "1",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.type": "block",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.vdo": "0"
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            },
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "type": "block",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "vg_name": "ceph_vg1"
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:        }
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:    ],
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:    "2": [
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:        {
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "devices": [
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "/dev/loop5"
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            ],
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "lv_name": "ceph_lv2",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "lv_size": "21470642176",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "name": "ceph_lv2",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "tags": {
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.cluster_name": "ceph",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.crush_device_class": "",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.encrypted": "0",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.osd_id": "2",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.type": "block",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:                "ceph.vdo": "0"
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            },
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "type": "block",
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:            "vg_name": "ceph_vg2"
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:        }
Oct  1 12:35:28 np0005464891 musing_galileo[260992]:    ]
Oct  1 12:35:28 np0005464891 musing_galileo[260992]: }
Oct  1 12:35:28 np0005464891 systemd[1]: libpod-1d18ba7edc5e5eb78cdb9990ac0523e1672b580c9ca926c2ea31c232b0102a36.scope: Deactivated successfully.
Oct  1 12:35:28 np0005464891 podman[260976]: 2025-10-01 16:35:28.598923335 +0000 UTC m=+1.003020765 container died 1d18ba7edc5e5eb78cdb9990ac0523e1672b580c9ca926c2ea31c232b0102a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct  1 12:35:28 np0005464891 systemd[1]: var-lib-containers-storage-overlay-8aed2ede68f4dfa52191533f92a58715f3995cb86984705d25060768d60c5743-merged.mount: Deactivated successfully.
Oct  1 12:35:28 np0005464891 podman[260976]: 2025-10-01 16:35:28.682687391 +0000 UTC m=+1.086784711 container remove 1d18ba7edc5e5eb78cdb9990ac0523e1672b580c9ca926c2ea31c232b0102a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galileo, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:35:28 np0005464891 systemd[1]: libpod-conmon-1d18ba7edc5e5eb78cdb9990ac0523e1672b580c9ca926c2ea31c232b0102a36.scope: Deactivated successfully.
Oct  1 12:35:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:35:29 np0005464891 podman[261155]: 2025-10-01 16:35:29.509847397 +0000 UTC m=+0.070468279 container create ceb6c317037775657e1cb648b8d59adbde254a182453bd16eb0bd553b7fe344f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_greider, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 12:35:29 np0005464891 systemd[1]: Started libpod-conmon-ceb6c317037775657e1cb648b8d59adbde254a182453bd16eb0bd553b7fe344f.scope.
Oct  1 12:35:29 np0005464891 podman[261155]: 2025-10-01 16:35:29.479180825 +0000 UTC m=+0.039801767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:35:29 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:35:29 np0005464891 podman[261155]: 2025-10-01 16:35:29.614519185 +0000 UTC m=+0.175140077 container init ceb6c317037775657e1cb648b8d59adbde254a182453bd16eb0bd553b7fe344f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_greider, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:35:29 np0005464891 podman[261155]: 2025-10-01 16:35:29.624904733 +0000 UTC m=+0.185525605 container start ceb6c317037775657e1cb648b8d59adbde254a182453bd16eb0bd553b7fe344f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_greider, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:35:29 np0005464891 podman[261155]: 2025-10-01 16:35:29.629326526 +0000 UTC m=+0.189947448 container attach ceb6c317037775657e1cb648b8d59adbde254a182453bd16eb0bd553b7fe344f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_greider, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:35:29 np0005464891 bold_greider[261171]: 167 167
Oct  1 12:35:29 np0005464891 systemd[1]: libpod-ceb6c317037775657e1cb648b8d59adbde254a182453bd16eb0bd553b7fe344f.scope: Deactivated successfully.
Oct  1 12:35:29 np0005464891 podman[261155]: 2025-10-01 16:35:29.631968019 +0000 UTC m=+0.192589261 container died ceb6c317037775657e1cb648b8d59adbde254a182453bd16eb0bd553b7fe344f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_greider, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  1 12:35:29 np0005464891 systemd[1]: var-lib-containers-storage-overlay-c5e5404cf23be0971384de6c78e231c410402e17c595e4f9e4e6bcf443768390-merged.mount: Deactivated successfully.
Oct  1 12:35:29 np0005464891 podman[261155]: 2025-10-01 16:35:29.686571525 +0000 UTC m=+0.247192397 container remove ceb6c317037775657e1cb648b8d59adbde254a182453bd16eb0bd553b7fe344f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_greider, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 12:35:29 np0005464891 systemd[1]: libpod-conmon-ceb6c317037775657e1cb648b8d59adbde254a182453bd16eb0bd553b7fe344f.scope: Deactivated successfully.
Oct  1 12:35:29 np0005464891 podman[261194]: 2025-10-01 16:35:29.908193361 +0000 UTC m=+0.058188846 container create 7f4e024e7a76793e760fad7f8207d381c7a498bb7259375bf9e8f416da33c793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  1 12:35:29 np0005464891 systemd[1]: Started libpod-conmon-7f4e024e7a76793e760fad7f8207d381c7a498bb7259375bf9e8f416da33c793.scope.
Oct  1 12:35:29 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:35:29 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb02e3b0b85c0d103234d1fe170289aa5346fa98af2894a416b53fd27289f775/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:29 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb02e3b0b85c0d103234d1fe170289aa5346fa98af2894a416b53fd27289f775/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:29 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb02e3b0b85c0d103234d1fe170289aa5346fa98af2894a416b53fd27289f775/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:29 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb02e3b0b85c0d103234d1fe170289aa5346fa98af2894a416b53fd27289f775/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:35:29 np0005464891 podman[261194]: 2025-10-01 16:35:29.886813738 +0000 UTC m=+0.036809303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:35:29 np0005464891 podman[261194]: 2025-10-01 16:35:29.988903113 +0000 UTC m=+0.138898688 container init 7f4e024e7a76793e760fad7f8207d381c7a498bb7259375bf9e8f416da33c793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 12:35:30 np0005464891 podman[261194]: 2025-10-01 16:35:30.001179345 +0000 UTC m=+0.151174850 container start 7f4e024e7a76793e760fad7f8207d381c7a498bb7259375bf9e8f416da33c793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:35:30 np0005464891 podman[261194]: 2025-10-01 16:35:30.005041922 +0000 UTC m=+0.155037437 container attach 7f4e024e7a76793e760fad7f8207d381c7a498bb7259375bf9e8f416da33c793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 12:35:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:31 np0005464891 competent_chatterjee[261211]: {
Oct  1 12:35:31 np0005464891 competent_chatterjee[261211]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:35:31 np0005464891 competent_chatterjee[261211]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:35:31 np0005464891 competent_chatterjee[261211]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:35:31 np0005464891 competent_chatterjee[261211]:        "osd_id": 2,
Oct  1 12:35:31 np0005464891 competent_chatterjee[261211]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:35:31 np0005464891 competent_chatterjee[261211]:        "type": "bluestore"
Oct  1 12:35:31 np0005464891 competent_chatterjee[261211]:    },
Oct  1 12:35:31 np0005464891 competent_chatterjee[261211]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:35:31 np0005464891 competent_chatterjee[261211]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:35:31 np0005464891 competent_chatterjee[261211]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:35:31 np0005464891 competent_chatterjee[261211]:        "osd_id": 0,
Oct  1 12:35:31 np0005464891 competent_chatterjee[261211]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:35:31 np0005464891 competent_chatterjee[261211]:        "type": "bluestore"
Oct  1 12:35:31 np0005464891 competent_chatterjee[261211]:    },
Oct  1 12:35:31 np0005464891 competent_chatterjee[261211]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:35:31 np0005464891 competent_chatterjee[261211]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:35:31 np0005464891 competent_chatterjee[261211]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:35:31 np0005464891 competent_chatterjee[261211]:        "osd_id": 1,
Oct  1 12:35:31 np0005464891 competent_chatterjee[261211]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:35:31 np0005464891 competent_chatterjee[261211]:        "type": "bluestore"
Oct  1 12:35:31 np0005464891 competent_chatterjee[261211]:    }
Oct  1 12:35:31 np0005464891 competent_chatterjee[261211]: }
Oct  1 12:35:31 np0005464891 systemd[1]: libpod-7f4e024e7a76793e760fad7f8207d381c7a498bb7259375bf9e8f416da33c793.scope: Deactivated successfully.
Oct  1 12:35:31 np0005464891 podman[261194]: 2025-10-01 16:35:31.046644914 +0000 UTC m=+1.196640429 container died 7f4e024e7a76793e760fad7f8207d381c7a498bb7259375bf9e8f416da33c793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 12:35:31 np0005464891 systemd[1]: libpod-7f4e024e7a76793e760fad7f8207d381c7a498bb7259375bf9e8f416da33c793.scope: Consumed 1.051s CPU time.
Oct  1 12:35:31 np0005464891 systemd[1]: var-lib-containers-storage-overlay-eb02e3b0b85c0d103234d1fe170289aa5346fa98af2894a416b53fd27289f775-merged.mount: Deactivated successfully.
Oct  1 12:35:31 np0005464891 podman[261194]: 2025-10-01 16:35:31.105761556 +0000 UTC m=+1.255757031 container remove 7f4e024e7a76793e760fad7f8207d381c7a498bb7259375bf9e8f416da33c793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:35:31 np0005464891 systemd[1]: libpod-conmon-7f4e024e7a76793e760fad7f8207d381c7a498bb7259375bf9e8f416da33c793.scope: Deactivated successfully.
Oct  1 12:35:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:35:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:35:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:35:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:35:31 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 5b2df51b-d98f-43ae-b153-aba4eba28cf1 does not exist
Oct  1 12:35:31 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev ecd56fea-72b5-4a0b-bdb4-b8006a28e6b7 does not exist
Oct  1 12:35:32 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:35:32 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:35:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:35:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:35:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1874489839' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:35:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:35:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1874489839' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:35:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:35:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2181219862' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:35:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:35:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2181219862' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:35:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:35:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/774688912' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:35:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:35:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/774688912' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:35:38 np0005464891 podman[261306]: 2025-10-01 16:35:38.050249319 +0000 UTC m=+0.148379623 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct  1 12:35:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:35:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:41 np0005464891 podman[261332]: 2025-10-01 16:35:41.954526896 +0000 UTC m=+0.067379463 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:35:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:35:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:35:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:35:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:35:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:35:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:35:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:43 np0005464891 podman[261352]: 2025-10-01 16:35:43.992959667 +0000 UTC m=+0.095536204 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 12:35:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:35:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:49 np0005464891 nova_compute[259907]: 2025-10-01 16:35:49.063 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:35:49 np0005464891 nova_compute[259907]: 2025-10-01 16:35:49.090 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:35:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:35:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:35:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:55 np0005464891 podman[261373]: 2025-10-01 16:35:55.954841105 +0000 UTC m=+0.063928227 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct  1 12:35:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:35:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:36:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:36:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:36:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:36:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:36:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:36:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:36:08 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Oct  1 12:36:08 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:36:08.956062) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 12:36:08 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Oct  1 12:36:08 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336568956161, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1451, "num_deletes": 251, "total_data_size": 2338167, "memory_usage": 2366920, "flush_reason": "Manual Compaction"}
Oct  1 12:36:08 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Oct  1 12:36:08 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336568970235, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2295052, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14915, "largest_seqno": 16365, "table_properties": {"data_size": 2288235, "index_size": 3952, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13800, "raw_average_key_size": 19, "raw_value_size": 2274651, "raw_average_value_size": 3235, "num_data_blocks": 181, "num_entries": 703, "num_filter_entries": 703, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336414, "oldest_key_time": 1759336414, "file_creation_time": 1759336568, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:36:08 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 14190 microseconds, and 6338 cpu microseconds.
Oct  1 12:36:08 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:36:08 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:36:08.970274) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2295052 bytes OK
Oct  1 12:36:08 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:36:08.970292) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Oct  1 12:36:08 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:36:08.971284) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Oct  1 12:36:08 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:36:08.971296) EVENT_LOG_v1 {"time_micros": 1759336568971293, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 12:36:08 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:36:08.971311) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 12:36:08 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2331791, prev total WAL file size 2331791, number of live WAL files 2.
Oct  1 12:36:08 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:36:08 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:36:08.972073) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct  1 12:36:08 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 12:36:08 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2241KB)], [35(7090KB)]
Oct  1 12:36:08 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336568972152, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9556079, "oldest_snapshot_seqno": -1}
Oct  1 12:36:09 np0005464891 podman[261391]: 2025-10-01 16:36:09.009305242 +0000 UTC m=+0.123423940 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 12:36:09 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4016 keys, 7790915 bytes, temperature: kUnknown
Oct  1 12:36:09 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336569037661, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7790915, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7761696, "index_size": 18101, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 98233, "raw_average_key_size": 24, "raw_value_size": 7686578, "raw_average_value_size": 1913, "num_data_blocks": 765, "num_entries": 4016, "num_filter_entries": 4016, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759336568, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:36:09 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:36:09 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:36:09.037924) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7790915 bytes
Oct  1 12:36:09 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:36:09.039977) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.7 rd, 118.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 6.9 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(7.6) write-amplify(3.4) OK, records in: 4530, records dropped: 514 output_compression: NoCompression
Oct  1 12:36:09 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:36:09.040000) EVENT_LOG_v1 {"time_micros": 1759336569039989, "job": 16, "event": "compaction_finished", "compaction_time_micros": 65595, "compaction_time_cpu_micros": 19155, "output_level": 6, "num_output_files": 1, "total_output_size": 7790915, "num_input_records": 4530, "num_output_records": 4016, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 12:36:09 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:36:09 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336569040643, "job": 16, "event": "table_file_deletion", "file_number": 37}
Oct  1 12:36:09 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:36:09 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336569042211, "job": 16, "event": "table_file_deletion", "file_number": 35}
Oct  1 12:36:09 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:36:08.971922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:36:09 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:36:09.042279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:36:09 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:36:09.042287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:36:09 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:36:09.042291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:36:09 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:36:09.042294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:36:09 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:36:09.042297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:36:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:36:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:36:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:36:12
Oct  1 12:36:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:36:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:36:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'images', '.mgr', 'volumes', 'default.rgw.meta', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'backups']
Oct  1 12:36:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:36:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:36:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:36:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:36:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:36:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:36:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:36:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:36:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:36:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:36:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:36:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:36:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:36:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:36:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:36:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:36:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:36:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:36:12.433 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:36:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:36:12.434 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:36:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:36:12.434 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:36:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:36:12 np0005464891 podman[261417]: 2025-10-01 16:36:12.941712442 +0000 UTC m=+0.050550335 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  1 12:36:13 np0005464891 nova_compute[259907]: 2025-10-01 16:36:13.806 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:36:13 np0005464891 nova_compute[259907]: 2025-10-01 16:36:13.807 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:36:13 np0005464891 nova_compute[259907]: 2025-10-01 16:36:13.807 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 12:36:13 np0005464891 nova_compute[259907]: 2025-10-01 16:36:13.807 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 12:36:14 np0005464891 nova_compute[259907]: 2025-10-01 16:36:14.167 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 12:36:14 np0005464891 nova_compute[259907]: 2025-10-01 16:36:14.168 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:36:14 np0005464891 nova_compute[259907]: 2025-10-01 16:36:14.168 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:36:14 np0005464891 nova_compute[259907]: 2025-10-01 16:36:14.168 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:36:14 np0005464891 nova_compute[259907]: 2025-10-01 16:36:14.169 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:36:14 np0005464891 nova_compute[259907]: 2025-10-01 16:36:14.169 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:36:14 np0005464891 nova_compute[259907]: 2025-10-01 16:36:14.169 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:36:14 np0005464891 nova_compute[259907]: 2025-10-01 16:36:14.169 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 12:36:14 np0005464891 nova_compute[259907]: 2025-10-01 16:36:14.169 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:36:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:36:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:36:14 np0005464891 nova_compute[259907]: 2025-10-01 16:36:14.910 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:36:14 np0005464891 nova_compute[259907]: 2025-10-01 16:36:14.911 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:36:14 np0005464891 nova_compute[259907]: 2025-10-01 16:36:14.911 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:36:14 np0005464891 nova_compute[259907]: 2025-10-01 16:36:14.911 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 12:36:14 np0005464891 nova_compute[259907]: 2025-10-01 16:36:14.912 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:36:14 np0005464891 podman[261439]: 2025-10-01 16:36:14.978279149 +0000 UTC m=+0.075902349 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:36:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:36:15 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4139625951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:36:15 np0005464891 nova_compute[259907]: 2025-10-01 16:36:15.318 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:36:15 np0005464891 nova_compute[259907]: 2025-10-01 16:36:15.467 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:36:15 np0005464891 nova_compute[259907]: 2025-10-01 16:36:15.468 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5185MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 12:36:15 np0005464891 nova_compute[259907]: 2025-10-01 16:36:15.468 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:36:15 np0005464891 nova_compute[259907]: 2025-10-01 16:36:15.468 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:36:15 np0005464891 nova_compute[259907]: 2025-10-01 16:36:15.572 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 12:36:15 np0005464891 nova_compute[259907]: 2025-10-01 16:36:15.572 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 12:36:15 np0005464891 nova_compute[259907]: 2025-10-01 16:36:15.592 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:36:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:36:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/163619823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:36:16 np0005464891 nova_compute[259907]: 2025-10-01 16:36:16.039 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:36:16 np0005464891 nova_compute[259907]: 2025-10-01 16:36:16.048 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:36:16 np0005464891 nova_compute[259907]: 2025-10-01 16:36:16.088 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:36:16 np0005464891 nova_compute[259907]: 2025-10-01 16:36:16.091 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 12:36:16 np0005464891 nova_compute[259907]: 2025-10-01 16:36:16.092 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:36:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:36:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:36:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:36:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:36:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:36:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:36:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:36:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:36:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:36:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:36:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:36:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:36:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:36:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:36:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:36:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:36:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:36:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:36:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:36:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:36:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:36:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:36:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:36:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:36:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:36:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:36:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:36:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:36:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct  1 12:36:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1951543585' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct  1 12:36:22 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14345 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct  1 12:36:22 np0005464891 ceph-mgr[74592]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  1 12:36:22 np0005464891 ceph-mgr[74592]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  1 12:36:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:36:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:36:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:36:26 np0005464891 podman[261507]: 2025-10-01 16:36:26.963525767 +0000 UTC m=+0.072418473 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_managed=true)
Oct  1 12:36:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:36:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:36:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:36:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  1 12:36:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 12:36:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:36:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:36:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:36:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:36:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:36:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:36:32 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev f05ad7fd-b5bb-4b5a-86dd-99116779ffcc does not exist
Oct  1 12:36:32 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 2f271e62-8813-4bd2-8b25-e5b1e04875c6 does not exist
Oct  1 12:36:32 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 17504d48-237b-4be3-acae-c8d8cf09fb0b does not exist
Oct  1 12:36:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:36:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:36:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:36:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:36:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:36:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:36:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:36:33 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 12:36:33 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:36:33 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:36:33 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:36:33 np0005464891 podman[261802]: 2025-10-01 16:36:33.118349966 +0000 UTC m=+0.055273485 container create 57973819b1aea9c09589d858e924ac89c06919548d4277cc45f916dd527c963f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_einstein, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  1 12:36:33 np0005464891 systemd[1]: Started libpod-conmon-57973819b1aea9c09589d858e924ac89c06919548d4277cc45f916dd527c963f.scope.
Oct  1 12:36:33 np0005464891 podman[261802]: 2025-10-01 16:36:33.089235828 +0000 UTC m=+0.026159417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:36:33 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:36:33 np0005464891 podman[261802]: 2025-10-01 16:36:33.242336111 +0000 UTC m=+0.179259620 container init 57973819b1aea9c09589d858e924ac89c06919548d4277cc45f916dd527c963f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:36:33 np0005464891 podman[261802]: 2025-10-01 16:36:33.254877719 +0000 UTC m=+0.191801228 container start 57973819b1aea9c09589d858e924ac89c06919548d4277cc45f916dd527c963f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_einstein, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:36:33 np0005464891 podman[261802]: 2025-10-01 16:36:33.259645442 +0000 UTC m=+0.196568931 container attach 57973819b1aea9c09589d858e924ac89c06919548d4277cc45f916dd527c963f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_einstein, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:36:33 np0005464891 exciting_einstein[261818]: 167 167
Oct  1 12:36:33 np0005464891 systemd[1]: libpod-57973819b1aea9c09589d858e924ac89c06919548d4277cc45f916dd527c963f.scope: Deactivated successfully.
Oct  1 12:36:33 np0005464891 podman[261802]: 2025-10-01 16:36:33.265015421 +0000 UTC m=+0.201938940 container died 57973819b1aea9c09589d858e924ac89c06919548d4277cc45f916dd527c963f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_einstein, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:36:33 np0005464891 systemd[1]: var-lib-containers-storage-overlay-3bcfab94e265738ebddfa1058df7da060ae89fd7186a5280e6e163528281db33-merged.mount: Deactivated successfully.
Oct  1 12:36:33 np0005464891 podman[261802]: 2025-10-01 16:36:33.327117566 +0000 UTC m=+0.264041085 container remove 57973819b1aea9c09589d858e924ac89c06919548d4277cc45f916dd527c963f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_einstein, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:36:33 np0005464891 systemd[1]: libpod-conmon-57973819b1aea9c09589d858e924ac89c06919548d4277cc45f916dd527c963f.scope: Deactivated successfully.
Oct  1 12:36:33 np0005464891 podman[261841]: 2025-10-01 16:36:33.548039292 +0000 UTC m=+0.063521906 container create 227b1dee090a68d18aa24073c0d32dab993dd61904015cd8dc376259c2d7b31e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:36:33 np0005464891 systemd[1]: Started libpod-conmon-227b1dee090a68d18aa24073c0d32dab993dd61904015cd8dc376259c2d7b31e.scope.
Oct  1 12:36:33 np0005464891 podman[261841]: 2025-10-01 16:36:33.521529995 +0000 UTC m=+0.037012669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:36:33 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:36:33 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79c4ad3ba33376c82170c17f6d5ceea66c3c5f44ce1b48bf5d009d1e889db909/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:36:33 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79c4ad3ba33376c82170c17f6d5ceea66c3c5f44ce1b48bf5d009d1e889db909/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:36:33 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79c4ad3ba33376c82170c17f6d5ceea66c3c5f44ce1b48bf5d009d1e889db909/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:36:33 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79c4ad3ba33376c82170c17f6d5ceea66c3c5f44ce1b48bf5d009d1e889db909/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:36:33 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79c4ad3ba33376c82170c17f6d5ceea66c3c5f44ce1b48bf5d009d1e889db909/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:36:33 np0005464891 podman[261841]: 2025-10-01 16:36:33.640319685 +0000 UTC m=+0.155802299 container init 227b1dee090a68d18aa24073c0d32dab993dd61904015cd8dc376259c2d7b31e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shamir, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 12:36:33 np0005464891 podman[261841]: 2025-10-01 16:36:33.65383644 +0000 UTC m=+0.169319064 container start 227b1dee090a68d18aa24073c0d32dab993dd61904015cd8dc376259c2d7b31e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shamir, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:36:33 np0005464891 podman[261841]: 2025-10-01 16:36:33.658664404 +0000 UTC m=+0.174147028 container attach 227b1dee090a68d18aa24073c0d32dab993dd61904015cd8dc376259c2d7b31e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shamir, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 12:36:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:36:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:36:34 np0005464891 goofy_shamir[261858]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:36:34 np0005464891 goofy_shamir[261858]: --> relative data size: 1.0
Oct  1 12:36:34 np0005464891 goofy_shamir[261858]: --> All data devices are unavailable
Oct  1 12:36:34 np0005464891 systemd[1]: libpod-227b1dee090a68d18aa24073c0d32dab993dd61904015cd8dc376259c2d7b31e.scope: Deactivated successfully.
Oct  1 12:36:34 np0005464891 systemd[1]: libpod-227b1dee090a68d18aa24073c0d32dab993dd61904015cd8dc376259c2d7b31e.scope: Consumed 1.097s CPU time.
Oct  1 12:36:34 np0005464891 podman[261841]: 2025-10-01 16:36:34.788648242 +0000 UTC m=+1.304130896 container died 227b1dee090a68d18aa24073c0d32dab993dd61904015cd8dc376259c2d7b31e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 12:36:34 np0005464891 systemd[1]: var-lib-containers-storage-overlay-79c4ad3ba33376c82170c17f6d5ceea66c3c5f44ce1b48bf5d009d1e889db909-merged.mount: Deactivated successfully.
Oct  1 12:36:34 np0005464891 podman[261841]: 2025-10-01 16:36:34.855122468 +0000 UTC m=+1.370605052 container remove 227b1dee090a68d18aa24073c0d32dab993dd61904015cd8dc376259c2d7b31e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:36:34 np0005464891 systemd[1]: libpod-conmon-227b1dee090a68d18aa24073c0d32dab993dd61904015cd8dc376259c2d7b31e.scope: Deactivated successfully.
Oct  1 12:36:35 np0005464891 podman[262041]: 2025-10-01 16:36:35.742956789 +0000 UTC m=+0.064600416 container create 1c643159320a2d5491074dfde89ce36bdf37054ce3eaee5b0635ec0295e2c725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 12:36:35 np0005464891 systemd[1]: Started libpod-conmon-1c643159320a2d5491074dfde89ce36bdf37054ce3eaee5b0635ec0295e2c725.scope.
Oct  1 12:36:35 np0005464891 podman[262041]: 2025-10-01 16:36:35.713310256 +0000 UTC m=+0.034953943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:36:35 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:36:35 np0005464891 podman[262041]: 2025-10-01 16:36:35.853050126 +0000 UTC m=+0.174693763 container init 1c643159320a2d5491074dfde89ce36bdf37054ce3eaee5b0635ec0295e2c725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:36:35 np0005464891 podman[262041]: 2025-10-01 16:36:35.86468467 +0000 UTC m=+0.186328297 container start 1c643159320a2d5491074dfde89ce36bdf37054ce3eaee5b0635ec0295e2c725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kowalevski, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:36:35 np0005464891 podman[262041]: 2025-10-01 16:36:35.868763453 +0000 UTC m=+0.190407130 container attach 1c643159320a2d5491074dfde89ce36bdf37054ce3eaee5b0635ec0295e2c725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kowalevski, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  1 12:36:35 np0005464891 focused_kowalevski[262057]: 167 167
Oct  1 12:36:35 np0005464891 systemd[1]: libpod-1c643159320a2d5491074dfde89ce36bdf37054ce3eaee5b0635ec0295e2c725.scope: Deactivated successfully.
Oct  1 12:36:35 np0005464891 podman[262062]: 2025-10-01 16:36:35.937637077 +0000 UTC m=+0.044529138 container died 1c643159320a2d5491074dfde89ce36bdf37054ce3eaee5b0635ec0295e2c725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:36:35 np0005464891 systemd[1]: var-lib-containers-storage-overlay-70ee6d2e9cbdfb791883435a8de90dfd0f56cd9627bae147a8342d1ec0ddca35-merged.mount: Deactivated successfully.
Oct  1 12:36:35 np0005464891 podman[262062]: 2025-10-01 16:36:35.987668567 +0000 UTC m=+0.094560578 container remove 1c643159320a2d5491074dfde89ce36bdf37054ce3eaee5b0635ec0295e2c725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kowalevski, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:36:35 np0005464891 systemd[1]: libpod-conmon-1c643159320a2d5491074dfde89ce36bdf37054ce3eaee5b0635ec0295e2c725.scope: Deactivated successfully.
Oct  1 12:36:36 np0005464891 podman[262084]: 2025-10-01 16:36:36.247330528 +0000 UTC m=+0.065973653 container create 23cb7fcf8d32855e3d2a976bd96588e255f10b8d5e085a8920d9a0ab215523fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_heisenberg, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:36:36 np0005464891 systemd[1]: Started libpod-conmon-23cb7fcf8d32855e3d2a976bd96588e255f10b8d5e085a8920d9a0ab215523fb.scope.
Oct  1 12:36:36 np0005464891 podman[262084]: 2025-10-01 16:36:36.217197151 +0000 UTC m=+0.035840326 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:36:36 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:36:36 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9353e185977bf88a62ea0c7ebc2fbfa5f78570967f3b1675282e0809356267/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:36:36 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9353e185977bf88a62ea0c7ebc2fbfa5f78570967f3b1675282e0809356267/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:36:36 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9353e185977bf88a62ea0c7ebc2fbfa5f78570967f3b1675282e0809356267/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:36:36 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9353e185977bf88a62ea0c7ebc2fbfa5f78570967f3b1675282e0809356267/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:36:36 np0005464891 podman[262084]: 2025-10-01 16:36:36.366278503 +0000 UTC m=+0.184921638 container init 23cb7fcf8d32855e3d2a976bd96588e255f10b8d5e085a8920d9a0ab215523fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:36:36 np0005464891 podman[262084]: 2025-10-01 16:36:36.375422217 +0000 UTC m=+0.194065342 container start 23cb7fcf8d32855e3d2a976bd96588e255f10b8d5e085a8920d9a0ab215523fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:36:36 np0005464891 podman[262084]: 2025-10-01 16:36:36.390879465 +0000 UTC m=+0.209522610 container attach 23cb7fcf8d32855e3d2a976bd96588e255f10b8d5e085a8920d9a0ab215523fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_heisenberg, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:36:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:36:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:36:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1192345315' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:36:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:36:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1192345315' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]: {
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:    "0": [
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:        {
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "devices": [
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "/dev/loop3"
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            ],
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "lv_name": "ceph_lv0",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "lv_size": "21470642176",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "name": "ceph_lv0",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "tags": {
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.cluster_name": "ceph",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.crush_device_class": "",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.encrypted": "0",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.osd_id": "0",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.type": "block",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.vdo": "0"
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            },
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "type": "block",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "vg_name": "ceph_vg0"
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:        }
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:    ],
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:    "1": [
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:        {
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "devices": [
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "/dev/loop4"
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            ],
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "lv_name": "ceph_lv1",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "lv_size": "21470642176",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "name": "ceph_lv1",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "tags": {
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.cluster_name": "ceph",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.crush_device_class": "",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.encrypted": "0",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.osd_id": "1",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.type": "block",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.vdo": "0"
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            },
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "type": "block",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "vg_name": "ceph_vg1"
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:        }
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:    ],
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:    "2": [
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:        {
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "devices": [
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "/dev/loop5"
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            ],
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "lv_name": "ceph_lv2",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "lv_size": "21470642176",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "name": "ceph_lv2",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "tags": {
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.cluster_name": "ceph",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.crush_device_class": "",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.encrypted": "0",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.osd_id": "2",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.type": "block",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:                "ceph.vdo": "0"
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            },
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "type": "block",
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:            "vg_name": "ceph_vg2"
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:        }
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]:    ]
Oct  1 12:36:37 np0005464891 beautiful_heisenberg[262101]: }
Oct  1 12:36:37 np0005464891 systemd[1]: libpod-23cb7fcf8d32855e3d2a976bd96588e255f10b8d5e085a8920d9a0ab215523fb.scope: Deactivated successfully.
Oct  1 12:36:37 np0005464891 podman[262084]: 2025-10-01 16:36:37.180794847 +0000 UTC m=+0.999437982 container died 23cb7fcf8d32855e3d2a976bd96588e255f10b8d5e085a8920d9a0ab215523fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_heisenberg, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:36:37 np0005464891 systemd[1]: var-lib-containers-storage-overlay-4a9353e185977bf88a62ea0c7ebc2fbfa5f78570967f3b1675282e0809356267-merged.mount: Deactivated successfully.
Oct  1 12:36:37 np0005464891 podman[262084]: 2025-10-01 16:36:37.257281852 +0000 UTC m=+1.075924957 container remove 23cb7fcf8d32855e3d2a976bd96588e255f10b8d5e085a8920d9a0ab215523fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_heisenberg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:36:37 np0005464891 systemd[1]: libpod-conmon-23cb7fcf8d32855e3d2a976bd96588e255f10b8d5e085a8920d9a0ab215523fb.scope: Deactivated successfully.
Oct  1 12:36:38 np0005464891 podman[262264]: 2025-10-01 16:36:38.094807125 +0000 UTC m=+0.063119434 container create cb0f5ba80070dc53e57f7b9df54046c1b4b7814f13ba48f23d59badd8c02e814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sutherland, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:36:38 np0005464891 systemd[1]: Started libpod-conmon-cb0f5ba80070dc53e57f7b9df54046c1b4b7814f13ba48f23d59badd8c02e814.scope.
Oct  1 12:36:38 np0005464891 podman[262264]: 2025-10-01 16:36:38.065759818 +0000 UTC m=+0.034072187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:36:38 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:36:38 np0005464891 podman[262264]: 2025-10-01 16:36:38.184376743 +0000 UTC m=+0.152689092 container init cb0f5ba80070dc53e57f7b9df54046c1b4b7814f13ba48f23d59badd8c02e814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 12:36:38 np0005464891 podman[262264]: 2025-10-01 16:36:38.196325055 +0000 UTC m=+0.164637344 container start cb0f5ba80070dc53e57f7b9df54046c1b4b7814f13ba48f23d59badd8c02e814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 12:36:38 np0005464891 podman[262264]: 2025-10-01 16:36:38.200353406 +0000 UTC m=+0.168665775 container attach cb0f5ba80070dc53e57f7b9df54046c1b4b7814f13ba48f23d59badd8c02e814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sutherland, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct  1 12:36:38 np0005464891 wizardly_sutherland[262281]: 167 167
Oct  1 12:36:38 np0005464891 systemd[1]: libpod-cb0f5ba80070dc53e57f7b9df54046c1b4b7814f13ba48f23d59badd8c02e814.scope: Deactivated successfully.
Oct  1 12:36:38 np0005464891 podman[262264]: 2025-10-01 16:36:38.201799727 +0000 UTC m=+0.170112016 container died cb0f5ba80070dc53e57f7b9df54046c1b4b7814f13ba48f23d59badd8c02e814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 12:36:38 np0005464891 systemd[1]: var-lib-containers-storage-overlay-b5331292a557d85f74f4f9ab411ead86cd0e6085bc6a186afc4f55815ef27eb7-merged.mount: Deactivated successfully.
Oct  1 12:36:38 np0005464891 podman[262264]: 2025-10-01 16:36:38.239822493 +0000 UTC m=+0.208134772 container remove cb0f5ba80070dc53e57f7b9df54046c1b4b7814f13ba48f23d59badd8c02e814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sutherland, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:36:38 np0005464891 systemd[1]: libpod-conmon-cb0f5ba80070dc53e57f7b9df54046c1b4b7814f13ba48f23d59badd8c02e814.scope: Deactivated successfully.
Oct  1 12:36:38 np0005464891 podman[262307]: 2025-10-01 16:36:38.454165987 +0000 UTC m=+0.056330326 container create 9931fb750275565ce51a37b080d9358cea66de98407efb3d078a78ebdab32430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_haibt, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:36:38 np0005464891 systemd[1]: Started libpod-conmon-9931fb750275565ce51a37b080d9358cea66de98407efb3d078a78ebdab32430.scope.
Oct  1 12:36:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:36:38 np0005464891 podman[262307]: 2025-10-01 16:36:38.428480054 +0000 UTC m=+0.030644383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:36:38 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:36:38 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45dc7339e3ec3ae9caa1964471ff75201604407c0bf27627723c02c89331dabd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:36:38 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45dc7339e3ec3ae9caa1964471ff75201604407c0bf27627723c02c89331dabd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:36:38 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45dc7339e3ec3ae9caa1964471ff75201604407c0bf27627723c02c89331dabd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:36:38 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45dc7339e3ec3ae9caa1964471ff75201604407c0bf27627723c02c89331dabd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:36:38 np0005464891 podman[262307]: 2025-10-01 16:36:38.558163245 +0000 UTC m=+0.160327594 container init 9931fb750275565ce51a37b080d9358cea66de98407efb3d078a78ebdab32430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:36:38 np0005464891 podman[262307]: 2025-10-01 16:36:38.575726923 +0000 UTC m=+0.177891252 container start 9931fb750275565ce51a37b080d9358cea66de98407efb3d078a78ebdab32430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  1 12:36:38 np0005464891 podman[262307]: 2025-10-01 16:36:38.58028085 +0000 UTC m=+0.182445199 container attach 9931fb750275565ce51a37b080d9358cea66de98407efb3d078a78ebdab32430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_haibt, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:36:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:36:39 np0005464891 gracious_haibt[262323]: {
Oct  1 12:36:39 np0005464891 gracious_haibt[262323]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:36:39 np0005464891 gracious_haibt[262323]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:36:39 np0005464891 gracious_haibt[262323]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:36:39 np0005464891 gracious_haibt[262323]:        "osd_id": 2,
Oct  1 12:36:39 np0005464891 gracious_haibt[262323]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:36:39 np0005464891 gracious_haibt[262323]:        "type": "bluestore"
Oct  1 12:36:39 np0005464891 gracious_haibt[262323]:    },
Oct  1 12:36:39 np0005464891 gracious_haibt[262323]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:36:39 np0005464891 gracious_haibt[262323]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:36:39 np0005464891 gracious_haibt[262323]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:36:39 np0005464891 gracious_haibt[262323]:        "osd_id": 0,
Oct  1 12:36:39 np0005464891 gracious_haibt[262323]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:36:39 np0005464891 gracious_haibt[262323]:        "type": "bluestore"
Oct  1 12:36:39 np0005464891 gracious_haibt[262323]:    },
Oct  1 12:36:39 np0005464891 gracious_haibt[262323]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:36:39 np0005464891 gracious_haibt[262323]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:36:39 np0005464891 gracious_haibt[262323]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:36:39 np0005464891 gracious_haibt[262323]:        "osd_id": 1,
Oct  1 12:36:39 np0005464891 gracious_haibt[262323]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:36:39 np0005464891 gracious_haibt[262323]:        "type": "bluestore"
Oct  1 12:36:39 np0005464891 gracious_haibt[262323]:    }
Oct  1 12:36:39 np0005464891 gracious_haibt[262323]: }
Oct  1 12:36:39 np0005464891 systemd[1]: libpod-9931fb750275565ce51a37b080d9358cea66de98407efb3d078a78ebdab32430.scope: Deactivated successfully.
Oct  1 12:36:39 np0005464891 podman[262307]: 2025-10-01 16:36:39.677651661 +0000 UTC m=+1.279816010 container died 9931fb750275565ce51a37b080d9358cea66de98407efb3d078a78ebdab32430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:36:39 np0005464891 systemd[1]: libpod-9931fb750275565ce51a37b080d9358cea66de98407efb3d078a78ebdab32430.scope: Consumed 1.109s CPU time.
Oct  1 12:36:39 np0005464891 systemd[1]: var-lib-containers-storage-overlay-45dc7339e3ec3ae9caa1964471ff75201604407c0bf27627723c02c89331dabd-merged.mount: Deactivated successfully.
Oct  1 12:36:39 np0005464891 podman[262307]: 2025-10-01 16:36:39.759132214 +0000 UTC m=+1.361296533 container remove 9931fb750275565ce51a37b080d9358cea66de98407efb3d078a78ebdab32430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 12:36:39 np0005464891 systemd[1]: libpod-conmon-9931fb750275565ce51a37b080d9358cea66de98407efb3d078a78ebdab32430.scope: Deactivated successfully.
Oct  1 12:36:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:36:39 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:36:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:36:39 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:36:39 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 9dcbd16b-2785-4f46-b7cd-d0abe9123586 does not exist
Oct  1 12:36:39 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 46ee80d3-e55e-43c5-b8b0-a8eb754a602d does not exist
Oct  1 12:36:39 np0005464891 podman[262357]: 2025-10-01 16:36:39.871474775 +0000 UTC m=+0.152977501 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  1 12:36:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:36:40 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:36:40 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:36:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:36:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:36:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:36:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:36:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:36:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:36:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:36:43 np0005464891 podman[262449]: 2025-10-01 16:36:43.943477111 +0000 UTC m=+0.057109038 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=multipathd, managed_by=edpm_ansible)
Oct  1 12:36:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:36:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:36:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct  1 12:36:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1112909676' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct  1 12:36:45 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.14359 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct  1 12:36:45 np0005464891 ceph-mgr[74592]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  1 12:36:45 np0005464891 ceph-mgr[74592]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  1 12:36:46 np0005464891 podman[262471]: 2025-10-01 16:36:46.004103678 +0000 UTC m=+0.108516096 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  1 12:36:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:36:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 12:36:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:36:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 12:36:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 12:36:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:36:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 12:36:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 12:36:57 np0005464891 podman[262492]: 2025-10-01 16:36:57.977152887 +0000 UTC m=+0.080485487 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  1 12:36:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 12:36:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:37:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:37:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:37:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:10 np0005464891 podman[262511]: 2025-10-01 16:37:10.992150738 +0000 UTC m=+0.101964853 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  1 12:37:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:37:12
Oct  1 12:37:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:37:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:37:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['vms', 'images', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'default.rgw.meta', 'default.rgw.log']
Oct  1 12:37:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:37:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:37:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:37:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:37:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:37:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:37:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:37:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:37:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:37:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:37:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:37:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:37:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:37:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:37:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:37:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:37:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:37:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:37:12.434 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:37:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:37:12.435 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:37:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:37:12.435 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:37:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:37:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:14 np0005464891 podman[262537]: 2025-10-01 16:37:14.985225441 +0000 UTC m=+0.088639082 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd)
Oct  1 12:37:16 np0005464891 nova_compute[259907]: 2025-10-01 16:37:16.085 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:37:16 np0005464891 nova_compute[259907]: 2025-10-01 16:37:16.086 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:37:16 np0005464891 nova_compute[259907]: 2025-10-01 16:37:16.110 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:37:16 np0005464891 nova_compute[259907]: 2025-10-01 16:37:16.110 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 12:37:16 np0005464891 nova_compute[259907]: 2025-10-01 16:37:16.111 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 12:37:16 np0005464891 nova_compute[259907]: 2025-10-01 16:37:16.174 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 12:37:16 np0005464891 nova_compute[259907]: 2025-10-01 16:37:16.174 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:37:16 np0005464891 nova_compute[259907]: 2025-10-01 16:37:16.175 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:37:16 np0005464891 nova_compute[259907]: 2025-10-01 16:37:16.176 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:37:16 np0005464891 nova_compute[259907]: 2025-10-01 16:37:16.176 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:37:16 np0005464891 nova_compute[259907]: 2025-10-01 16:37:16.177 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:37:16 np0005464891 nova_compute[259907]: 2025-10-01 16:37:16.177 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:37:16 np0005464891 nova_compute[259907]: 2025-10-01 16:37:16.178 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 12:37:16 np0005464891 nova_compute[259907]: 2025-10-01 16:37:16.178 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:37:16 np0005464891 nova_compute[259907]: 2025-10-01 16:37:16.203 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:37:16 np0005464891 nova_compute[259907]: 2025-10-01 16:37:16.204 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:37:16 np0005464891 nova_compute[259907]: 2025-10-01 16:37:16.205 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:37:16 np0005464891 nova_compute[259907]: 2025-10-01 16:37:16.206 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 12:37:16 np0005464891 nova_compute[259907]: 2025-10-01 16:37:16.206 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:37:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:37:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1672273458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:37:16 np0005464891 nova_compute[259907]: 2025-10-01 16:37:16.684 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 12:37:16 np0005464891 nova_compute[259907]: 2025-10-01 16:37:16.938 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  1 12:37:16 np0005464891 nova_compute[259907]: 2025-10-01 16:37:16.939 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5164MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  1 12:37:16 np0005464891 nova_compute[259907]: 2025-10-01 16:37:16.939 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:37:16 np0005464891 nova_compute[259907]: 2025-10-01 16:37:16.940 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:37:16 np0005464891 podman[262580]: 2025-10-01 16:37:16.96238124 +0000 UTC m=+0.076112225 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  1 12:37:17 np0005464891 nova_compute[259907]: 2025-10-01 16:37:17.029 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  1 12:37:17 np0005464891 nova_compute[259907]: 2025-10-01 16:37:17.029 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  1 12:37:17 np0005464891 nova_compute[259907]: 2025-10-01 16:37:17.054 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 12:37:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:37:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2365380328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:37:17 np0005464891 nova_compute[259907]: 2025-10-01 16:37:17.460 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 12:37:17 np0005464891 nova_compute[259907]: 2025-10-01 16:37:17.467 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  1 12:37:17 np0005464891 nova_compute[259907]: 2025-10-01 16:37:17.493 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  1 12:37:17 np0005464891 nova_compute[259907]: 2025-10-01 16:37:17.496 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  1 12:37:17 np0005464891 nova_compute[259907]: 2025-10-01 16:37:17.497 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:37:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:37:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:37:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:37:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:37:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:37:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:37:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:37:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:37:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:37:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:37:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:37:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:37:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:37:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:37:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:37:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:37:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:37:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:37:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:37:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:37:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:37:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:37:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:37:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:37:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:37:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:28 np0005464891 podman[262622]: 2025-10-01 16:37:28.977911939 +0000 UTC m=+0.090875395 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:37:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:37:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:37:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:37:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3811946828' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:37:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:37:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3811946828' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:37:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:37:37.219 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  1 12:37:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:37:37.220 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  1 12:37:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:37:37.221 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  1 12:37:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:37:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:37:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:37:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:37:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:37:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:37:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:37:40 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev f2381d46-23ce-45c6-9390-54fb3f230a72 does not exist
Oct  1 12:37:40 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 76ea8643-c4e6-4e71-aba7-f3029fcdb9c7 does not exist
Oct  1 12:37:40 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 5ea02183-c64b-4432-9f05-62fbc5866595 does not exist
Oct  1 12:37:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:37:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:37:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:37:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:37:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:37:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:37:41 np0005464891 podman[262798]: 2025-10-01 16:37:41.233213842 +0000 UTC m=+0.118477234 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct  1 12:37:41 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:37:41 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:37:41 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:37:41 np0005464891 podman[262942]: 2025-10-01 16:37:41.737525161 +0000 UTC m=+0.073594513 container create a7869c2ccac479d20f890ba002130cefac9cb95424a3cf08353a45f29e4beb5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:37:41 np0005464891 podman[262942]: 2025-10-01 16:37:41.708394247 +0000 UTC m=+0.044463649 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:37:41 np0005464891 systemd[1]: Started libpod-conmon-a7869c2ccac479d20f890ba002130cefac9cb95424a3cf08353a45f29e4beb5d.scope.
Oct  1 12:37:41 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:37:42 np0005464891 podman[262942]: 2025-10-01 16:37:42.017730341 +0000 UTC m=+0.353799753 container init a7869c2ccac479d20f890ba002130cefac9cb95424a3cf08353a45f29e4beb5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_tesla, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 12:37:42 np0005464891 podman[262942]: 2025-10-01 16:37:42.029912637 +0000 UTC m=+0.365981959 container start a7869c2ccac479d20f890ba002130cefac9cb95424a3cf08353a45f29e4beb5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:37:42 np0005464891 hopeful_tesla[262958]: 167 167
Oct  1 12:37:42 np0005464891 systemd[1]: libpod-a7869c2ccac479d20f890ba002130cefac9cb95424a3cf08353a45f29e4beb5d.scope: Deactivated successfully.
Oct  1 12:37:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:37:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:37:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:37:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:37:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:37:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:37:42 np0005464891 podman[262942]: 2025-10-01 16:37:42.068005 +0000 UTC m=+0.404074412 container attach a7869c2ccac479d20f890ba002130cefac9cb95424a3cf08353a45f29e4beb5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_tesla, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  1 12:37:42 np0005464891 podman[262942]: 2025-10-01 16:37:42.06874457 +0000 UTC m=+0.404813962 container died a7869c2ccac479d20f890ba002130cefac9cb95424a3cf08353a45f29e4beb5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_tesla, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:37:42 np0005464891 systemd[1]: var-lib-containers-storage-overlay-00c4f2229e03612f84f2b37869aa6a412a4892e8e93d1694dfc12ea6f2244486-merged.mount: Deactivated successfully.
Oct  1 12:37:42 np0005464891 podman[262942]: 2025-10-01 16:37:42.147633559 +0000 UTC m=+0.483702891 container remove a7869c2ccac479d20f890ba002130cefac9cb95424a3cf08353a45f29e4beb5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 12:37:42 np0005464891 systemd[1]: libpod-conmon-a7869c2ccac479d20f890ba002130cefac9cb95424a3cf08353a45f29e4beb5d.scope: Deactivated successfully.
Oct  1 12:37:42 np0005464891 podman[262983]: 2025-10-01 16:37:42.371813121 +0000 UTC m=+0.043618965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:37:42 np0005464891 podman[262983]: 2025-10-01 16:37:42.48506485 +0000 UTC m=+0.156870634 container create e51da1a2c254afb9620e9236ebcd1b34d5fecb376eb0a347bdc0dc4595f3454a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bell, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:37:42 np0005464891 systemd[1]: Started libpod-conmon-e51da1a2c254afb9620e9236ebcd1b34d5fecb376eb0a347bdc0dc4595f3454a.scope.
Oct  1 12:37:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:42 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:37:42 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0618ba7ff6eb58c6d79702a7f12712d729f270980386e7f682c7af17c2d19aa9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:37:42 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0618ba7ff6eb58c6d79702a7f12712d729f270980386e7f682c7af17c2d19aa9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:37:42 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0618ba7ff6eb58c6d79702a7f12712d729f270980386e7f682c7af17c2d19aa9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:37:42 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0618ba7ff6eb58c6d79702a7f12712d729f270980386e7f682c7af17c2d19aa9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:37:42 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0618ba7ff6eb58c6d79702a7f12712d729f270980386e7f682c7af17c2d19aa9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:37:42 np0005464891 podman[262983]: 2025-10-01 16:37:42.690220846 +0000 UTC m=+0.362026630 container init e51da1a2c254afb9620e9236ebcd1b34d5fecb376eb0a347bdc0dc4595f3454a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:37:42 np0005464891 podman[262983]: 2025-10-01 16:37:42.702867456 +0000 UTC m=+0.374673240 container start e51da1a2c254afb9620e9236ebcd1b34d5fecb376eb0a347bdc0dc4595f3454a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bell, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  1 12:37:42 np0005464891 podman[262983]: 2025-10-01 16:37:42.725186802 +0000 UTC m=+0.396992586 container attach e51da1a2c254afb9620e9236ebcd1b34d5fecb376eb0a347bdc0dc4595f3454a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:37:43 np0005464891 unruffled_bell[262999]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:37:43 np0005464891 unruffled_bell[262999]: --> relative data size: 1.0
Oct  1 12:37:43 np0005464891 unruffled_bell[262999]: --> All data devices are unavailable
Oct  1 12:37:43 np0005464891 systemd[1]: libpod-e51da1a2c254afb9620e9236ebcd1b34d5fecb376eb0a347bdc0dc4595f3454a.scope: Deactivated successfully.
Oct  1 12:37:43 np0005464891 podman[262983]: 2025-10-01 16:37:43.792446932 +0000 UTC m=+1.464252686 container died e51da1a2c254afb9620e9236ebcd1b34d5fecb376eb0a347bdc0dc4595f3454a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 12:37:43 np0005464891 systemd[1]: libpod-e51da1a2c254afb9620e9236ebcd1b34d5fecb376eb0a347bdc0dc4595f3454a.scope: Consumed 1.040s CPU time.
Oct  1 12:37:43 np0005464891 systemd[1]: var-lib-containers-storage-overlay-0618ba7ff6eb58c6d79702a7f12712d729f270980386e7f682c7af17c2d19aa9-merged.mount: Deactivated successfully.
Oct  1 12:37:44 np0005464891 podman[262983]: 2025-10-01 16:37:44.038686512 +0000 UTC m=+1.710492296 container remove e51da1a2c254afb9620e9236ebcd1b34d5fecb376eb0a347bdc0dc4595f3454a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:37:44 np0005464891 systemd[1]: libpod-conmon-e51da1a2c254afb9620e9236ebcd1b34d5fecb376eb0a347bdc0dc4595f3454a.scope: Deactivated successfully.
Oct  1 12:37:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:37:44 np0005464891 podman[263185]: 2025-10-01 16:37:44.88358822 +0000 UTC m=+0.044889761 container create 7d2e41c778db264cae7777d2bb7b9ec23b5420a8b8f5acfd38c4d3f9e2a9eff8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 12:37:44 np0005464891 systemd[1]: Started libpod-conmon-7d2e41c778db264cae7777d2bb7b9ec23b5420a8b8f5acfd38c4d3f9e2a9eff8.scope.
Oct  1 12:37:44 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:37:44 np0005464891 podman[263185]: 2025-10-01 16:37:44.862726554 +0000 UTC m=+0.024028095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:37:44 np0005464891 podman[263185]: 2025-10-01 16:37:44.961127372 +0000 UTC m=+0.122428943 container init 7d2e41c778db264cae7777d2bb7b9ec23b5420a8b8f5acfd38c4d3f9e2a9eff8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:37:44 np0005464891 podman[263185]: 2025-10-01 16:37:44.969552275 +0000 UTC m=+0.130853786 container start 7d2e41c778db264cae7777d2bb7b9ec23b5420a8b8f5acfd38c4d3f9e2a9eff8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:37:44 np0005464891 heuristic_burnell[263201]: 167 167
Oct  1 12:37:44 np0005464891 systemd[1]: libpod-7d2e41c778db264cae7777d2bb7b9ec23b5420a8b8f5acfd38c4d3f9e2a9eff8.scope: Deactivated successfully.
Oct  1 12:37:44 np0005464891 podman[263185]: 2025-10-01 16:37:44.975053867 +0000 UTC m=+0.136355448 container attach 7d2e41c778db264cae7777d2bb7b9ec23b5420a8b8f5acfd38c4d3f9e2a9eff8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:37:44 np0005464891 podman[263185]: 2025-10-01 16:37:44.975600402 +0000 UTC m=+0.136901933 container died 7d2e41c778db264cae7777d2bb7b9ec23b5420a8b8f5acfd38c4d3f9e2a9eff8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:37:45 np0005464891 systemd[1]: var-lib-containers-storage-overlay-4bc92438d202632d0eace4470ef92ec75b81b1c7d04b7e7d15b1237e6599c92f-merged.mount: Deactivated successfully.
Oct  1 12:37:45 np0005464891 podman[263185]: 2025-10-01 16:37:45.015566175 +0000 UTC m=+0.176867696 container remove 7d2e41c778db264cae7777d2bb7b9ec23b5420a8b8f5acfd38c4d3f9e2a9eff8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct  1 12:37:45 np0005464891 systemd[1]: libpod-conmon-7d2e41c778db264cae7777d2bb7b9ec23b5420a8b8f5acfd38c4d3f9e2a9eff8.scope: Deactivated successfully.
Oct  1 12:37:45 np0005464891 podman[263215]: 2025-10-01 16:37:45.111753562 +0000 UTC m=+0.077173442 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:37:45 np0005464891 podman[263245]: 2025-10-01 16:37:45.193351347 +0000 UTC m=+0.038804113 container create 4ecf815b3233797c0253880be436cda3a32388c1b669186b86c569f2ed4a5212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  1 12:37:45 np0005464891 systemd[1]: Started libpod-conmon-4ecf815b3233797c0253880be436cda3a32388c1b669186b86c569f2ed4a5212.scope.
Oct  1 12:37:45 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:37:45 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a346045cfce9168c00176a4b07764b3c4f454458ce10db07ddc29523007655d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:37:45 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a346045cfce9168c00176a4b07764b3c4f454458ce10db07ddc29523007655d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:37:45 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a346045cfce9168c00176a4b07764b3c4f454458ce10db07ddc29523007655d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:37:45 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a346045cfce9168c00176a4b07764b3c4f454458ce10db07ddc29523007655d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:37:45 np0005464891 podman[263245]: 2025-10-01 16:37:45.178061174 +0000 UTC m=+0.023513970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:37:45 np0005464891 podman[263245]: 2025-10-01 16:37:45.277719507 +0000 UTC m=+0.123172303 container init 4ecf815b3233797c0253880be436cda3a32388c1b669186b86c569f2ed4a5212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_dhawan, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:37:45 np0005464891 podman[263245]: 2025-10-01 16:37:45.284681449 +0000 UTC m=+0.130134225 container start 4ecf815b3233797c0253880be436cda3a32388c1b669186b86c569f2ed4a5212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 12:37:45 np0005464891 podman[263245]: 2025-10-01 16:37:45.288247668 +0000 UTC m=+0.133700444 container attach 4ecf815b3233797c0253880be436cda3a32388c1b669186b86c569f2ed4a5212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_dhawan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]: {
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:    "0": [
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:        {
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "devices": [
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "/dev/loop3"
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            ],
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "lv_name": "ceph_lv0",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "lv_size": "21470642176",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "name": "ceph_lv0",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "tags": {
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.cluster_name": "ceph",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.crush_device_class": "",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.encrypted": "0",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.osd_id": "0",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.type": "block",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.vdo": "0"
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            },
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "type": "block",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "vg_name": "ceph_vg0"
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:        }
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:    ],
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:    "1": [
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:        {
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "devices": [
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "/dev/loop4"
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            ],
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "lv_name": "ceph_lv1",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "lv_size": "21470642176",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "name": "ceph_lv1",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "tags": {
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.cluster_name": "ceph",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.crush_device_class": "",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.encrypted": "0",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.osd_id": "1",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.type": "block",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.vdo": "0"
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            },
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "type": "block",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "vg_name": "ceph_vg1"
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:        }
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:    ],
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:    "2": [
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:        {
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "devices": [
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "/dev/loop5"
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            ],
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "lv_name": "ceph_lv2",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "lv_size": "21470642176",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "name": "ceph_lv2",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "tags": {
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.cluster_name": "ceph",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.crush_device_class": "",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.encrypted": "0",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.osd_id": "2",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.type": "block",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:                "ceph.vdo": "0"
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            },
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "type": "block",
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:            "vg_name": "ceph_vg2"
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:        }
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]:    ]
Oct  1 12:37:46 np0005464891 hardcore_dhawan[263261]: }
Oct  1 12:37:46 np0005464891 systemd[1]: libpod-4ecf815b3233797c0253880be436cda3a32388c1b669186b86c569f2ed4a5212.scope: Deactivated successfully.
Oct  1 12:37:46 np0005464891 podman[263245]: 2025-10-01 16:37:46.074230577 +0000 UTC m=+0.919683363 container died 4ecf815b3233797c0253880be436cda3a32388c1b669186b86c569f2ed4a5212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:37:46 np0005464891 systemd[1]: var-lib-containers-storage-overlay-1a346045cfce9168c00176a4b07764b3c4f454458ce10db07ddc29523007655d-merged.mount: Deactivated successfully.
Oct  1 12:37:46 np0005464891 podman[263245]: 2025-10-01 16:37:46.121244876 +0000 UTC m=+0.966697652 container remove 4ecf815b3233797c0253880be436cda3a32388c1b669186b86c569f2ed4a5212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 12:37:46 np0005464891 systemd[1]: libpod-conmon-4ecf815b3233797c0253880be436cda3a32388c1b669186b86c569f2ed4a5212.scope: Deactivated successfully.
Oct  1 12:37:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:47 np0005464891 podman[263424]: 2025-10-01 16:37:47.01561721 +0000 UTC m=+0.068903884 container create 0a84785e6d7ffb9c30d7d79977843153b43c48799b5c35cb8bd03b29591d1e97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rhodes, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:37:47 np0005464891 systemd[1]: Started libpod-conmon-0a84785e6d7ffb9c30d7d79977843153b43c48799b5c35cb8bd03b29591d1e97.scope.
Oct  1 12:37:47 np0005464891 podman[263424]: 2025-10-01 16:37:46.994224638 +0000 UTC m=+0.047511392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:37:47 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:37:47 np0005464891 podman[263424]: 2025-10-01 16:37:47.121225267 +0000 UTC m=+0.174511971 container init 0a84785e6d7ffb9c30d7d79977843153b43c48799b5c35cb8bd03b29591d1e97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rhodes, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:37:47 np0005464891 podman[263424]: 2025-10-01 16:37:47.135693266 +0000 UTC m=+0.188979960 container start 0a84785e6d7ffb9c30d7d79977843153b43c48799b5c35cb8bd03b29591d1e97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rhodes, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:37:47 np0005464891 podman[263424]: 2025-10-01 16:37:47.141312392 +0000 UTC m=+0.194599156 container attach 0a84785e6d7ffb9c30d7d79977843153b43c48799b5c35cb8bd03b29591d1e97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 12:37:47 np0005464891 vibrant_rhodes[263442]: 167 167
Oct  1 12:37:47 np0005464891 systemd[1]: libpod-0a84785e6d7ffb9c30d7d79977843153b43c48799b5c35cb8bd03b29591d1e97.scope: Deactivated successfully.
Oct  1 12:37:47 np0005464891 podman[263424]: 2025-10-01 16:37:47.146531666 +0000 UTC m=+0.199818370 container died 0a84785e6d7ffb9c30d7d79977843153b43c48799b5c35cb8bd03b29591d1e97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rhodes, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 12:37:47 np0005464891 systemd[1]: var-lib-containers-storage-overlay-90dc410c580a4d8bea411a280dc7b59da6766285baf10e662f1163510db1bfb3-merged.mount: Deactivated successfully.
Oct  1 12:37:47 np0005464891 podman[263439]: 2025-10-01 16:37:47.180894675 +0000 UTC m=+0.110821482 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  1 12:37:47 np0005464891 podman[263424]: 2025-10-01 16:37:47.204835376 +0000 UTC m=+0.258122050 container remove 0a84785e6d7ffb9c30d7d79977843153b43c48799b5c35cb8bd03b29591d1e97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  1 12:37:47 np0005464891 systemd[1]: libpod-conmon-0a84785e6d7ffb9c30d7d79977843153b43c48799b5c35cb8bd03b29591d1e97.scope: Deactivated successfully.
Oct  1 12:37:47 np0005464891 podman[263485]: 2025-10-01 16:37:47.412220725 +0000 UTC m=+0.071130187 container create 4704c305f14bb0a12a0926d93495519c83202d02605c050691c433106f33adc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_rubin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:37:47 np0005464891 podman[263485]: 2025-10-01 16:37:47.378438412 +0000 UTC m=+0.037347874 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:37:47 np0005464891 systemd[1]: Started libpod-conmon-4704c305f14bb0a12a0926d93495519c83202d02605c050691c433106f33adc0.scope.
Oct  1 12:37:47 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:37:47 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf4a00858e18440af3a863c0b23a0b1b6c944ac5024ea9248c3a4349892c82d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:37:47 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf4a00858e18440af3a863c0b23a0b1b6c944ac5024ea9248c3a4349892c82d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:37:47 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf4a00858e18440af3a863c0b23a0b1b6c944ac5024ea9248c3a4349892c82d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:37:47 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf4a00858e18440af3a863c0b23a0b1b6c944ac5024ea9248c3a4349892c82d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:37:47 np0005464891 podman[263485]: 2025-10-01 16:37:47.536419765 +0000 UTC m=+0.195329197 container init 4704c305f14bb0a12a0926d93495519c83202d02605c050691c433106f33adc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:37:47 np0005464891 podman[263485]: 2025-10-01 16:37:47.549725673 +0000 UTC m=+0.208635125 container start 4704c305f14bb0a12a0926d93495519c83202d02605c050691c433106f33adc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:37:47 np0005464891 podman[263485]: 2025-10-01 16:37:47.554831834 +0000 UTC m=+0.213741276 container attach 4704c305f14bb0a12a0926d93495519c83202d02605c050691c433106f33adc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_rubin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:37:48 np0005464891 serene_rubin[263502]: {
Oct  1 12:37:48 np0005464891 serene_rubin[263502]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:37:48 np0005464891 serene_rubin[263502]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:37:48 np0005464891 serene_rubin[263502]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:37:48 np0005464891 serene_rubin[263502]:        "osd_id": 2,
Oct  1 12:37:48 np0005464891 serene_rubin[263502]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:37:48 np0005464891 serene_rubin[263502]:        "type": "bluestore"
Oct  1 12:37:48 np0005464891 serene_rubin[263502]:    },
Oct  1 12:37:48 np0005464891 serene_rubin[263502]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:37:48 np0005464891 serene_rubin[263502]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:37:48 np0005464891 serene_rubin[263502]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:37:48 np0005464891 serene_rubin[263502]:        "osd_id": 0,
Oct  1 12:37:48 np0005464891 serene_rubin[263502]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:37:48 np0005464891 serene_rubin[263502]:        "type": "bluestore"
Oct  1 12:37:48 np0005464891 serene_rubin[263502]:    },
Oct  1 12:37:48 np0005464891 serene_rubin[263502]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:37:48 np0005464891 serene_rubin[263502]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:37:48 np0005464891 serene_rubin[263502]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:37:48 np0005464891 serene_rubin[263502]:        "osd_id": 1,
Oct  1 12:37:48 np0005464891 serene_rubin[263502]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:37:48 np0005464891 serene_rubin[263502]:        "type": "bluestore"
Oct  1 12:37:48 np0005464891 serene_rubin[263502]:    }
Oct  1 12:37:48 np0005464891 serene_rubin[263502]: }
Oct  1 12:37:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:48 np0005464891 systemd[1]: libpod-4704c305f14bb0a12a0926d93495519c83202d02605c050691c433106f33adc0.scope: Deactivated successfully.
Oct  1 12:37:48 np0005464891 podman[263485]: 2025-10-01 16:37:48.572640317 +0000 UTC m=+1.231549769 container died 4704c305f14bb0a12a0926d93495519c83202d02605c050691c433106f33adc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 12:37:48 np0005464891 systemd[1]: libpod-4704c305f14bb0a12a0926d93495519c83202d02605c050691c433106f33adc0.scope: Consumed 1.033s CPU time.
Oct  1 12:37:48 np0005464891 systemd[1]: var-lib-containers-storage-overlay-cf4a00858e18440af3a863c0b23a0b1b6c944ac5024ea9248c3a4349892c82d4-merged.mount: Deactivated successfully.
Oct  1 12:37:49 np0005464891 podman[263485]: 2025-10-01 16:37:49.060186854 +0000 UTC m=+1.719096306 container remove 4704c305f14bb0a12a0926d93495519c83202d02605c050691c433106f33adc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_rubin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:37:49 np0005464891 systemd[1]: libpod-conmon-4704c305f14bb0a12a0926d93495519c83202d02605c050691c433106f33adc0.scope: Deactivated successfully.
Oct  1 12:37:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:37:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:37:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:37:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:37:49 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev f259a28b-397c-4367-be3b-32eed4bc047a does not exist
Oct  1 12:37:49 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 82adf671-8699-4c3a-a3c8-8e87952beec2 does not exist
Oct  1 12:37:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:37:50 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:37:50 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:37:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:37:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:37:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:38:00 np0005464891 podman[263597]: 2025-10-01 16:37:59.999784819 +0000 UTC m=+0.098458590 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  1 12:38:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:38:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:38:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:38:12
Oct  1 12:38:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:38:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:38:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'vms', 'volumes', 'default.rgw.meta', '.rgw.root', '.mgr', 'backups', 'default.rgw.log']
Oct  1 12:38:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:38:12 np0005464891 podman[263617]: 2025-10-01 16:38:12.037260863 +0000 UTC m=+0.141359025 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  1 12:38:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:38:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:38:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:38:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:38:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:38:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:38:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:38:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:38:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:38:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:38:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:38:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:38:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:38:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:38:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:38:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:38:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:38:12.435 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:38:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:38:12.436 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:38:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:38:12.436 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:38:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:38:15 np0005464891 podman[263646]: 2025-10-01 16:38:15.962433871 +0000 UTC m=+0.078296254 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:38:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:17 np0005464891 nova_compute[259907]: 2025-10-01 16:38:17.498 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:38:17 np0005464891 nova_compute[259907]: 2025-10-01 16:38:17.499 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:38:17 np0005464891 nova_compute[259907]: 2025-10-01 16:38:17.499 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 12:38:17 np0005464891 nova_compute[259907]: 2025-10-01 16:38:17.500 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 12:38:17 np0005464891 nova_compute[259907]: 2025-10-01 16:38:17.580 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 12:38:17 np0005464891 nova_compute[259907]: 2025-10-01 16:38:17.581 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:38:17 np0005464891 nova_compute[259907]: 2025-10-01 16:38:17.581 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:38:17 np0005464891 nova_compute[259907]: 2025-10-01 16:38:17.582 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:38:17 np0005464891 nova_compute[259907]: 2025-10-01 16:38:17.582 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:38:17 np0005464891 nova_compute[259907]: 2025-10-01 16:38:17.582 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:38:17 np0005464891 nova_compute[259907]: 2025-10-01 16:38:17.582 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:38:17 np0005464891 nova_compute[259907]: 2025-10-01 16:38:17.582 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 12:38:17 np0005464891 nova_compute[259907]: 2025-10-01 16:38:17.583 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:38:17 np0005464891 nova_compute[259907]: 2025-10-01 16:38:17.617 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:38:17 np0005464891 nova_compute[259907]: 2025-10-01 16:38:17.617 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:38:17 np0005464891 nova_compute[259907]: 2025-10-01 16:38:17.618 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:38:17 np0005464891 nova_compute[259907]: 2025-10-01 16:38:17.618 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 12:38:17 np0005464891 nova_compute[259907]: 2025-10-01 16:38:17.618 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:38:17 np0005464891 podman[263685]: 2025-10-01 16:38:17.986761825 +0000 UTC m=+0.091492278 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:38:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:38:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2737634207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:38:18 np0005464891 nova_compute[259907]: 2025-10-01 16:38:18.073 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:38:18 np0005464891 nova_compute[259907]: 2025-10-01 16:38:18.290 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:38:18 np0005464891 nova_compute[259907]: 2025-10-01 16:38:18.292 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5178MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 12:38:18 np0005464891 nova_compute[259907]: 2025-10-01 16:38:18.293 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:38:18 np0005464891 nova_compute[259907]: 2025-10-01 16:38:18.293 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:38:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:19 np0005464891 nova_compute[259907]: 2025-10-01 16:38:19.484 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 12:38:19 np0005464891 nova_compute[259907]: 2025-10-01 16:38:19.484 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 12:38:19 np0005464891 nova_compute[259907]: 2025-10-01 16:38:19.502 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:38:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:38:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:38:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/448709722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:38:20 np0005464891 nova_compute[259907]: 2025-10-01 16:38:20.030 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:38:20 np0005464891 nova_compute[259907]: 2025-10-01 16:38:20.036 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:38:20 np0005464891 nova_compute[259907]: 2025-10-01 16:38:20.050 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:38:20 np0005464891 nova_compute[259907]: 2025-10-01 16:38:20.052 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 12:38:20 np0005464891 nova_compute[259907]: 2025-10-01 16:38:20.053 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:38:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:38:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:38:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:38:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:38:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:38:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:38:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:38:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:38:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:38:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:38:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:38:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:38:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:38:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:38:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:38:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:38:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:38:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:38:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:38:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:38:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:38:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:38:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:38:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:38:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:38:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:30 np0005464891 podman[263729]: 2025-10-01 16:38:30.985362642 +0000 UTC m=+0.077686387 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  1 12:38:31 np0005464891 systemd[1]: packagekit.service: Deactivated successfully.
Oct  1 12:38:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:38:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:38:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2791206025' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:38:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:38:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2791206025' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:38:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v849: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:38:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v850: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:38:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:38:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:38:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:38:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:38:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:38:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v851: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:43 np0005464891 podman[263750]: 2025-10-01 16:38:43.032613202 +0000 UTC m=+0.136419078 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  1 12:38:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v852: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:38:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v853: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:46 np0005464891 podman[263778]: 2025-10-01 16:38:46.974121493 +0000 UTC m=+0.078403987 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  1 12:38:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v854: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:48 np0005464891 podman[263800]: 2025-10-01 16:38:48.990446226 +0000 UTC m=+0.099499109 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct  1 12:38:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:38:50 np0005464891 podman[263992]: 2025-10-01 16:38:50.465379557 +0000 UTC m=+0.097164246 container exec 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Oct  1 12:38:50 np0005464891 podman[263992]: 2025-10-01 16:38:50.574064998 +0000 UTC m=+0.205849677 container exec_died 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:38:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v855: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:38:51 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:38:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:38:51 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:38:51 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:38:51 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:38:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:38:52 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:38:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:38:52 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:38:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:38:52 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:38:52 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev c81e359e-79b8-4f4c-9e62-6e0cbdf67d32 does not exist
Oct  1 12:38:52 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev e34f3913-5529-41e3-9203-b1261d72375f does not exist
Oct  1 12:38:52 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev d41f0027-67e8-4b54-874c-2e987950275a does not exist
Oct  1 12:38:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:38:52 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:38:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:38:52 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:38:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:38:52 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:38:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v856: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:52 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:38:52 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:38:52 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:38:53 np0005464891 podman[264424]: 2025-10-01 16:38:53.227678335 +0000 UTC m=+0.067545527 container create e77d6658f4bafa3e1cc7d0bb154eeda317c9d2eef8cd537fe23c5f3a6365cc99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct  1 12:38:53 np0005464891 systemd[1]: Started libpod-conmon-e77d6658f4bafa3e1cc7d0bb154eeda317c9d2eef8cd537fe23c5f3a6365cc99.scope.
Oct  1 12:38:53 np0005464891 podman[264424]: 2025-10-01 16:38:53.200065362 +0000 UTC m=+0.039932594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:38:53 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:38:53 np0005464891 podman[264424]: 2025-10-01 16:38:53.341120948 +0000 UTC m=+0.180988190 container init e77d6658f4bafa3e1cc7d0bb154eeda317c9d2eef8cd537fe23c5f3a6365cc99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  1 12:38:53 np0005464891 podman[264424]: 2025-10-01 16:38:53.352359429 +0000 UTC m=+0.192226611 container start e77d6658f4bafa3e1cc7d0bb154eeda317c9d2eef8cd537fe23c5f3a6365cc99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_gates, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:38:53 np0005464891 podman[264424]: 2025-10-01 16:38:53.357780428 +0000 UTC m=+0.197647670 container attach e77d6658f4bafa3e1cc7d0bb154eeda317c9d2eef8cd537fe23c5f3a6365cc99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_gates, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  1 12:38:53 np0005464891 modest_gates[264440]: 167 167
Oct  1 12:38:53 np0005464891 systemd[1]: libpod-e77d6658f4bafa3e1cc7d0bb154eeda317c9d2eef8cd537fe23c5f3a6365cc99.scope: Deactivated successfully.
Oct  1 12:38:53 np0005464891 podman[264424]: 2025-10-01 16:38:53.359803705 +0000 UTC m=+0.199670897 container died e77d6658f4bafa3e1cc7d0bb154eeda317c9d2eef8cd537fe23c5f3a6365cc99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_gates, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:38:53 np0005464891 systemd[1]: var-lib-containers-storage-overlay-29d6f0df969f1ae56aec6a98b03ccaeecc27ebf25736610babfc016b33204df0-merged.mount: Deactivated successfully.
Oct  1 12:38:53 np0005464891 podman[264424]: 2025-10-01 16:38:53.416802349 +0000 UTC m=+0.256669531 container remove e77d6658f4bafa3e1cc7d0bb154eeda317c9d2eef8cd537fe23c5f3a6365cc99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_gates, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:38:53 np0005464891 systemd[1]: libpod-conmon-e77d6658f4bafa3e1cc7d0bb154eeda317c9d2eef8cd537fe23c5f3a6365cc99.scope: Deactivated successfully.
Oct  1 12:38:53 np0005464891 podman[264463]: 2025-10-01 16:38:53.628092705 +0000 UTC m=+0.055084023 container create f735c02e9142e7fef38dc7048fe7ecde11f209a0196acaa089f3b490430de239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 12:38:53 np0005464891 systemd[1]: Started libpod-conmon-f735c02e9142e7fef38dc7048fe7ecde11f209a0196acaa089f3b490430de239.scope.
Oct  1 12:38:53 np0005464891 podman[264463]: 2025-10-01 16:38:53.605985955 +0000 UTC m=+0.032977303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:38:53 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:38:53 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cfe904c95456efb18984b17b9de6e2b7835d6e45119d4a3b87deecd2698f29e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:38:53 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cfe904c95456efb18984b17b9de6e2b7835d6e45119d4a3b87deecd2698f29e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:38:53 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cfe904c95456efb18984b17b9de6e2b7835d6e45119d4a3b87deecd2698f29e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:38:53 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cfe904c95456efb18984b17b9de6e2b7835d6e45119d4a3b87deecd2698f29e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:38:53 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cfe904c95456efb18984b17b9de6e2b7835d6e45119d4a3b87deecd2698f29e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:38:53 np0005464891 podman[264463]: 2025-10-01 16:38:53.73255354 +0000 UTC m=+0.159544878 container init f735c02e9142e7fef38dc7048fe7ecde11f209a0196acaa089f3b490430de239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ishizaka, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 12:38:53 np0005464891 podman[264463]: 2025-10-01 16:38:53.746093685 +0000 UTC m=+0.173085033 container start f735c02e9142e7fef38dc7048fe7ecde11f209a0196acaa089f3b490430de239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ishizaka, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:38:53 np0005464891 podman[264463]: 2025-10-01 16:38:53.750267299 +0000 UTC m=+0.177258617 container attach f735c02e9142e7fef38dc7048fe7ecde11f209a0196acaa089f3b490430de239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ishizaka, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 12:38:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v857: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:38:54 np0005464891 hungry_ishizaka[264480]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:38:54 np0005464891 hungry_ishizaka[264480]: --> relative data size: 1.0
Oct  1 12:38:54 np0005464891 hungry_ishizaka[264480]: --> All data devices are unavailable
Oct  1 12:38:54 np0005464891 systemd[1]: libpod-f735c02e9142e7fef38dc7048fe7ecde11f209a0196acaa089f3b490430de239.scope: Deactivated successfully.
Oct  1 12:38:54 np0005464891 systemd[1]: libpod-f735c02e9142e7fef38dc7048fe7ecde11f209a0196acaa089f3b490430de239.scope: Consumed 1.054s CPU time.
Oct  1 12:38:54 np0005464891 podman[264463]: 2025-10-01 16:38:54.839444954 +0000 UTC m=+1.266436302 container died f735c02e9142e7fef38dc7048fe7ecde11f209a0196acaa089f3b490430de239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 12:38:54 np0005464891 systemd[1]: var-lib-containers-storage-overlay-3cfe904c95456efb18984b17b9de6e2b7835d6e45119d4a3b87deecd2698f29e-merged.mount: Deactivated successfully.
Oct  1 12:38:54 np0005464891 podman[264463]: 2025-10-01 16:38:54.924611856 +0000 UTC m=+1.351603214 container remove f735c02e9142e7fef38dc7048fe7ecde11f209a0196acaa089f3b490430de239 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:38:54 np0005464891 systemd[1]: libpod-conmon-f735c02e9142e7fef38dc7048fe7ecde11f209a0196acaa089f3b490430de239.scope: Deactivated successfully.
Oct  1 12:38:55 np0005464891 podman[264660]: 2025-10-01 16:38:55.776206138 +0000 UTC m=+0.056947174 container create 44f66c2c815acccba63fa621ce77e17e6dc387b932c9b1a81fa5a3d002ccb2a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mayer, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:38:55 np0005464891 systemd[1]: Started libpod-conmon-44f66c2c815acccba63fa621ce77e17e6dc387b932c9b1a81fa5a3d002ccb2a8.scope.
Oct  1 12:38:55 np0005464891 podman[264660]: 2025-10-01 16:38:55.748709618 +0000 UTC m=+0.029450694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:38:55 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:38:55 np0005464891 podman[264660]: 2025-10-01 16:38:55.872483758 +0000 UTC m=+0.153224834 container init 44f66c2c815acccba63fa621ce77e17e6dc387b932c9b1a81fa5a3d002ccb2a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mayer, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 12:38:55 np0005464891 podman[264660]: 2025-10-01 16:38:55.883190134 +0000 UTC m=+0.163931160 container start 44f66c2c815acccba63fa621ce77e17e6dc387b932c9b1a81fa5a3d002ccb2a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mayer, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:38:55 np0005464891 podman[264660]: 2025-10-01 16:38:55.887485892 +0000 UTC m=+0.168226938 container attach 44f66c2c815acccba63fa621ce77e17e6dc387b932c9b1a81fa5a3d002ccb2a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mayer, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  1 12:38:55 np0005464891 loving_mayer[264676]: 167 167
Oct  1 12:38:55 np0005464891 systemd[1]: libpod-44f66c2c815acccba63fa621ce77e17e6dc387b932c9b1a81fa5a3d002ccb2a8.scope: Deactivated successfully.
Oct  1 12:38:55 np0005464891 podman[264660]: 2025-10-01 16:38:55.891990667 +0000 UTC m=+0.172731703 container died 44f66c2c815acccba63fa621ce77e17e6dc387b932c9b1a81fa5a3d002ccb2a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mayer, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 12:38:55 np0005464891 systemd[1]: var-lib-containers-storage-overlay-b07a5e4a193899633886b89f3524af33523a9b2988d591e6634344722790adb2-merged.mount: Deactivated successfully.
Oct  1 12:38:55 np0005464891 podman[264660]: 2025-10-01 16:38:55.945302949 +0000 UTC m=+0.226043985 container remove 44f66c2c815acccba63fa621ce77e17e6dc387b932c9b1a81fa5a3d002ccb2a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mayer, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:38:55 np0005464891 systemd[1]: libpod-conmon-44f66c2c815acccba63fa621ce77e17e6dc387b932c9b1a81fa5a3d002ccb2a8.scope: Deactivated successfully.
Oct  1 12:38:56 np0005464891 podman[264700]: 2025-10-01 16:38:56.203395378 +0000 UTC m=+0.076210306 container create 09b8884347c98308c6891d67e9682a9a5381d7d1a2fb593b83b06a9754892065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swirles, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 12:38:56 np0005464891 podman[264700]: 2025-10-01 16:38:56.166379106 +0000 UTC m=+0.039194104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:38:56 np0005464891 systemd[1]: Started libpod-conmon-09b8884347c98308c6891d67e9682a9a5381d7d1a2fb593b83b06a9754892065.scope.
Oct  1 12:38:56 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:38:56 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/094cfe1a129b5e33c528dd2dde6b63e95c4d4750240fc7dd751f1318e454e647/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:38:56 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/094cfe1a129b5e33c528dd2dde6b63e95c4d4750240fc7dd751f1318e454e647/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:38:56 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/094cfe1a129b5e33c528dd2dde6b63e95c4d4750240fc7dd751f1318e454e647/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:38:56 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/094cfe1a129b5e33c528dd2dde6b63e95c4d4750240fc7dd751f1318e454e647/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:38:56 np0005464891 podman[264700]: 2025-10-01 16:38:56.316396829 +0000 UTC m=+0.189211817 container init 09b8884347c98308c6891d67e9682a9a5381d7d1a2fb593b83b06a9754892065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swirles, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 12:38:56 np0005464891 podman[264700]: 2025-10-01 16:38:56.330281932 +0000 UTC m=+0.203096870 container start 09b8884347c98308c6891d67e9682a9a5381d7d1a2fb593b83b06a9754892065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swirles, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 12:38:56 np0005464891 podman[264700]: 2025-10-01 16:38:56.334061717 +0000 UTC m=+0.206876655 container attach 09b8884347c98308c6891d67e9682a9a5381d7d1a2fb593b83b06a9754892065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swirles, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 12:38:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v858: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:57 np0005464891 modest_swirles[264717]: {
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:    "0": [
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:        {
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "devices": [
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "/dev/loop3"
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            ],
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "lv_name": "ceph_lv0",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "lv_size": "21470642176",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "name": "ceph_lv0",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "tags": {
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.cluster_name": "ceph",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.crush_device_class": "",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.encrypted": "0",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.osd_id": "0",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.type": "block",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.vdo": "0"
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            },
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "type": "block",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "vg_name": "ceph_vg0"
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:        }
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:    ],
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:    "1": [
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:        {
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "devices": [
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "/dev/loop4"
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            ],
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "lv_name": "ceph_lv1",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "lv_size": "21470642176",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "name": "ceph_lv1",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "tags": {
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.cluster_name": "ceph",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.crush_device_class": "",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.encrypted": "0",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.osd_id": "1",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.type": "block",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.vdo": "0"
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            },
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "type": "block",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "vg_name": "ceph_vg1"
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:        }
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:    ],
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:    "2": [
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:        {
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "devices": [
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "/dev/loop5"
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            ],
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "lv_name": "ceph_lv2",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "lv_size": "21470642176",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "name": "ceph_lv2",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "tags": {
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.cluster_name": "ceph",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.crush_device_class": "",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.encrypted": "0",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.osd_id": "2",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.type": "block",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:                "ceph.vdo": "0"
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            },
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "type": "block",
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:            "vg_name": "ceph_vg2"
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:        }
Oct  1 12:38:57 np0005464891 modest_swirles[264717]:    ]
Oct  1 12:38:57 np0005464891 modest_swirles[264717]: }
Oct  1 12:38:57 np0005464891 systemd[1]: libpod-09b8884347c98308c6891d67e9682a9a5381d7d1a2fb593b83b06a9754892065.scope: Deactivated successfully.
Oct  1 12:38:57 np0005464891 podman[264700]: 2025-10-01 16:38:57.141207292 +0000 UTC m=+1.014022180 container died 09b8884347c98308c6891d67e9682a9a5381d7d1a2fb593b83b06a9754892065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 12:38:57 np0005464891 systemd[1]: var-lib-containers-storage-overlay-094cfe1a129b5e33c528dd2dde6b63e95c4d4750240fc7dd751f1318e454e647-merged.mount: Deactivated successfully.
Oct  1 12:38:57 np0005464891 podman[264700]: 2025-10-01 16:38:57.191219313 +0000 UTC m=+1.064034211 container remove 09b8884347c98308c6891d67e9682a9a5381d7d1a2fb593b83b06a9754892065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swirles, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  1 12:38:57 np0005464891 systemd[1]: libpod-conmon-09b8884347c98308c6891d67e9682a9a5381d7d1a2fb593b83b06a9754892065.scope: Deactivated successfully.
Oct  1 12:38:57 np0005464891 podman[264878]: 2025-10-01 16:38:57.985017089 +0000 UTC m=+0.056298726 container create 2e1d178b59014eac6f23929170333f35c60033b96f03a6da4336cc6c66274234 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:38:58 np0005464891 systemd[1]: Started libpod-conmon-2e1d178b59014eac6f23929170333f35c60033b96f03a6da4336cc6c66274234.scope.
Oct  1 12:38:58 np0005464891 podman[264878]: 2025-10-01 16:38:57.956212843 +0000 UTC m=+0.027494530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:38:58 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:38:58 np0005464891 podman[264878]: 2025-10-01 16:38:58.088855927 +0000 UTC m=+0.160137544 container init 2e1d178b59014eac6f23929170333f35c60033b96f03a6da4336cc6c66274234 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wozniak, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 12:38:58 np0005464891 podman[264878]: 2025-10-01 16:38:58.101003182 +0000 UTC m=+0.172284809 container start 2e1d178b59014eac6f23929170333f35c60033b96f03a6da4336cc6c66274234 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 12:38:58 np0005464891 podman[264878]: 2025-10-01 16:38:58.105339493 +0000 UTC m=+0.176621110 container attach 2e1d178b59014eac6f23929170333f35c60033b96f03a6da4336cc6c66274234 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wozniak, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 12:38:58 np0005464891 serene_wozniak[264895]: 167 167
Oct  1 12:38:58 np0005464891 systemd[1]: libpod-2e1d178b59014eac6f23929170333f35c60033b96f03a6da4336cc6c66274234.scope: Deactivated successfully.
Oct  1 12:38:58 np0005464891 podman[264878]: 2025-10-01 16:38:58.108747207 +0000 UTC m=+0.180028844 container died 2e1d178b59014eac6f23929170333f35c60033b96f03a6da4336cc6c66274234 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  1 12:38:58 np0005464891 systemd[1]: var-lib-containers-storage-overlay-c7b7ac2ad0da151a5fc6e41f44a3754da3a5b19bec09e36227c62f6cd9bcc543-merged.mount: Deactivated successfully.
Oct  1 12:38:58 np0005464891 podman[264878]: 2025-10-01 16:38:58.178220136 +0000 UTC m=+0.249501743 container remove 2e1d178b59014eac6f23929170333f35c60033b96f03a6da4336cc6c66274234 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wozniak, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 12:38:58 np0005464891 systemd[1]: libpod-conmon-2e1d178b59014eac6f23929170333f35c60033b96f03a6da4336cc6c66274234.scope: Deactivated successfully.
Oct  1 12:38:58 np0005464891 podman[264919]: 2025-10-01 16:38:58.462061455 +0000 UTC m=+0.085740329 container create 84b61d94dc501bcd194566e677446b903b1790461c8a33cef8a344ad35990c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:38:58 np0005464891 podman[264919]: 2025-10-01 16:38:58.41480126 +0000 UTC m=+0.038480124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:38:58 np0005464891 systemd[1]: Started libpod-conmon-84b61d94dc501bcd194566e677446b903b1790461c8a33cef8a344ad35990c60.scope.
Oct  1 12:38:58 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:38:58 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceeeaf8019b3a65a1a4a942a60997ba70052830f4b1fe53c7e99c26934284c5d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:38:58 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceeeaf8019b3a65a1a4a942a60997ba70052830f4b1fe53c7e99c26934284c5d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:38:58 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceeeaf8019b3a65a1a4a942a60997ba70052830f4b1fe53c7e99c26934284c5d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:38:58 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceeeaf8019b3a65a1a4a942a60997ba70052830f4b1fe53c7e99c26934284c5d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:38:58 np0005464891 podman[264919]: 2025-10-01 16:38:58.580759284 +0000 UTC m=+0.204438218 container init 84b61d94dc501bcd194566e677446b903b1790461c8a33cef8a344ad35990c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 12:38:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v859: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:38:58 np0005464891 podman[264919]: 2025-10-01 16:38:58.594819912 +0000 UTC m=+0.218498786 container start 84b61d94dc501bcd194566e677446b903b1790461c8a33cef8a344ad35990c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:38:58 np0005464891 podman[264919]: 2025-10-01 16:38:58.599920003 +0000 UTC m=+0.223598947 container attach 84b61d94dc501bcd194566e677446b903b1790461c8a33cef8a344ad35990c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wilson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:38:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:38:59 np0005464891 gifted_wilson[264935]: {
Oct  1 12:38:59 np0005464891 gifted_wilson[264935]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:38:59 np0005464891 gifted_wilson[264935]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:38:59 np0005464891 gifted_wilson[264935]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:38:59 np0005464891 gifted_wilson[264935]:        "osd_id": 2,
Oct  1 12:38:59 np0005464891 gifted_wilson[264935]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:38:59 np0005464891 gifted_wilson[264935]:        "type": "bluestore"
Oct  1 12:38:59 np0005464891 gifted_wilson[264935]:    },
Oct  1 12:38:59 np0005464891 gifted_wilson[264935]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:38:59 np0005464891 gifted_wilson[264935]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:38:59 np0005464891 gifted_wilson[264935]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:38:59 np0005464891 gifted_wilson[264935]:        "osd_id": 0,
Oct  1 12:38:59 np0005464891 gifted_wilson[264935]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:38:59 np0005464891 gifted_wilson[264935]:        "type": "bluestore"
Oct  1 12:38:59 np0005464891 gifted_wilson[264935]:    },
Oct  1 12:38:59 np0005464891 gifted_wilson[264935]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:38:59 np0005464891 gifted_wilson[264935]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:38:59 np0005464891 gifted_wilson[264935]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:38:59 np0005464891 gifted_wilson[264935]:        "osd_id": 1,
Oct  1 12:38:59 np0005464891 gifted_wilson[264935]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:38:59 np0005464891 gifted_wilson[264935]:        "type": "bluestore"
Oct  1 12:38:59 np0005464891 gifted_wilson[264935]:    }
Oct  1 12:38:59 np0005464891 gifted_wilson[264935]: }
Oct  1 12:38:59 np0005464891 systemd[1]: libpod-84b61d94dc501bcd194566e677446b903b1790461c8a33cef8a344ad35990c60.scope: Deactivated successfully.
Oct  1 12:38:59 np0005464891 systemd[1]: libpod-84b61d94dc501bcd194566e677446b903b1790461c8a33cef8a344ad35990c60.scope: Consumed 1.199s CPU time.
Oct  1 12:38:59 np0005464891 podman[264968]: 2025-10-01 16:38:59.846154296 +0000 UTC m=+0.040217882 container died 84b61d94dc501bcd194566e677446b903b1790461c8a33cef8a344ad35990c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wilson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 12:38:59 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ceeeaf8019b3a65a1a4a942a60997ba70052830f4b1fe53c7e99c26934284c5d-merged.mount: Deactivated successfully.
Oct  1 12:38:59 np0005464891 podman[264968]: 2025-10-01 16:38:59.932902292 +0000 UTC m=+0.126965788 container remove 84b61d94dc501bcd194566e677446b903b1790461c8a33cef8a344ad35990c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:38:59 np0005464891 systemd[1]: libpod-conmon-84b61d94dc501bcd194566e677446b903b1790461c8a33cef8a344ad35990c60.scope: Deactivated successfully.
Oct  1 12:38:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:39:00 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:39:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:39:00 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:39:00 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 82173bee-93e7-4b4d-99a2-fb753495db32 does not exist
Oct  1 12:39:00 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 37d11a4f-567f-4efa-acd9-d0915d989c62 does not exist
Oct  1 12:39:00 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:39:00 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:39:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v860: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:39:02 np0005464891 podman[265034]: 2025-10-01 16:39:02.00115998 +0000 UTC m=+0.100504056 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  1 12:39:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v861: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:39:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v862: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:39:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:39:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v863: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:39:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v864: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:39:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:39:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v865: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:39:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:39:12
Oct  1 12:39:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:39:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:39:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'volumes', '.rgw.root', 'backups', 'default.rgw.log', 'default.rgw.meta', '.mgr']
Oct  1 12:39:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:39:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:39:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:39:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:39:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:39:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:39:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:39:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:39:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:39:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:39:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:39:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:39:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:39:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:39:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:39:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:39:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:39:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:39:12.436 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:39:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:39:12.437 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:39:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:39:12.438 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:39:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v866: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:39:14 np0005464891 podman[265053]: 2025-10-01 16:39:14.039736671 +0000 UTC m=+0.148563484 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:39:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v867: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:39:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:39:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v868: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:39:17 np0005464891 nova_compute[259907]: 2025-10-01 16:39:17.354 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:39:17 np0005464891 nova_compute[259907]: 2025-10-01 16:39:17.355 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:39:17 np0005464891 nova_compute[259907]: 2025-10-01 16:39:17.372 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:39:17 np0005464891 nova_compute[259907]: 2025-10-01 16:39:17.373 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 12:39:17 np0005464891 nova_compute[259907]: 2025-10-01 16:39:17.373 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 12:39:17 np0005464891 nova_compute[259907]: 2025-10-01 16:39:17.386 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 12:39:17 np0005464891 nova_compute[259907]: 2025-10-01 16:39:17.386 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:39:17 np0005464891 nova_compute[259907]: 2025-10-01 16:39:17.386 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:39:17 np0005464891 nova_compute[259907]: 2025-10-01 16:39:17.386 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:39:17 np0005464891 nova_compute[259907]: 2025-10-01 16:39:17.387 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:39:17 np0005464891 nova_compute[259907]: 2025-10-01 16:39:17.387 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:39:17 np0005464891 nova_compute[259907]: 2025-10-01 16:39:17.387 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 12:39:17 np0005464891 nova_compute[259907]: 2025-10-01 16:39:17.387 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:39:17 np0005464891 nova_compute[259907]: 2025-10-01 16:39:17.412 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:39:17 np0005464891 nova_compute[259907]: 2025-10-01 16:39:17.413 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:39:17 np0005464891 nova_compute[259907]: 2025-10-01 16:39:17.413 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:39:17 np0005464891 nova_compute[259907]: 2025-10-01 16:39:17.413 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 12:39:17 np0005464891 nova_compute[259907]: 2025-10-01 16:39:17.414 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:39:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:39:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/391846762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:39:17 np0005464891 nova_compute[259907]: 2025-10-01 16:39:17.833 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:39:17 np0005464891 podman[265101]: 2025-10-01 16:39:17.954207474 +0000 UTC m=+0.068560404 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 12:39:18 np0005464891 nova_compute[259907]: 2025-10-01 16:39:18.022 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:39:18 np0005464891 nova_compute[259907]: 2025-10-01 16:39:18.023 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5183MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 12:39:18 np0005464891 nova_compute[259907]: 2025-10-01 16:39:18.023 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:39:18 np0005464891 nova_compute[259907]: 2025-10-01 16:39:18.024 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:39:18 np0005464891 nova_compute[259907]: 2025-10-01 16:39:18.170 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 12:39:18 np0005464891 nova_compute[259907]: 2025-10-01 16:39:18.170 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 12:39:18 np0005464891 nova_compute[259907]: 2025-10-01 16:39:18.185 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:39:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v869: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:39:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:39:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2683855551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:39:18 np0005464891 nova_compute[259907]: 2025-10-01 16:39:18.650 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:39:18 np0005464891 nova_compute[259907]: 2025-10-01 16:39:18.655 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:39:18 np0005464891 nova_compute[259907]: 2025-10-01 16:39:18.670 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:39:18 np0005464891 nova_compute[259907]: 2025-10-01 16:39:18.671 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 12:39:18 np0005464891 nova_compute[259907]: 2025-10-01 16:39:18.671 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:39:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:39:19 np0005464891 podman[265143]: 2025-10-01 16:39:19.959036731 +0000 UTC m=+0.071329552 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 12:39:20 np0005464891 nova_compute[259907]: 2025-10-01 16:39:20.090 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:39:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v870: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:39:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:39:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:39:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:39:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:39:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:41:05 np0005464891 podman[266361]: 2025-10-01 16:41:05.981975028 +0000 UTC m=+0.085014692 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:41:06 np0005464891 rsyslogd[1011]: imjournal: 649 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct  1 12:41:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v928: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.6 MiB/s wr, 23 op/s
Oct  1 12:41:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v929: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.2 MiB/s wr, 20 op/s
Oct  1 12:41:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:41:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v930: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.9 MiB/s wr, 17 op/s
Oct  1 12:41:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:41:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:41:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:41:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:41:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:41:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:41:11 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev f2e6996a-01fd-4cda-9212-2f1f9b2f3e20 does not exist
Oct  1 12:41:11 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev d54bd0f8-1d88-4d7b-96c6-5283b8a3c86a does not exist
Oct  1 12:41:11 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 7b6dcfce-e221-4282-855e-907663150fe3 does not exist
Oct  1 12:41:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:41:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:41:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:41:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:41:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:41:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:41:11 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:41:11 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:41:11 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:41:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:41:12
Oct  1 12:41:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:41:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:41:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', '.rgw.root', 'vms', 'backups', 'volumes', '.mgr']
Oct  1 12:41:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:41:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:41:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:41:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:41:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:41:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:41:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:41:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:41:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:41:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:41:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:41:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:41:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:41:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:41:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:41:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:41:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:41:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:41:12.439 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:41:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:41:12.439 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:41:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:41:12.439 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:41:12 np0005464891 podman[266651]: 2025-10-01 16:41:12.557361496 +0000 UTC m=+0.044931524 container create 75a4e835fedbf50daf79c0f12339a7faf69fb4f9159c4fa1c1276500ad3f7472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:41:12 np0005464891 systemd[1]: Started libpod-conmon-75a4e835fedbf50daf79c0f12339a7faf69fb4f9159c4fa1c1276500ad3f7472.scope.
Oct  1 12:41:12 np0005464891 podman[266651]: 2025-10-01 16:41:12.536771436 +0000 UTC m=+0.024341514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:41:12 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:41:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v931: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.7 MiB/s wr, 15 op/s
Oct  1 12:41:12 np0005464891 podman[266651]: 2025-10-01 16:41:12.65403738 +0000 UTC m=+0.141607388 container init 75a4e835fedbf50daf79c0f12339a7faf69fb4f9159c4fa1c1276500ad3f7472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:41:12 np0005464891 podman[266651]: 2025-10-01 16:41:12.666362121 +0000 UTC m=+0.153932109 container start 75a4e835fedbf50daf79c0f12339a7faf69fb4f9159c4fa1c1276500ad3f7472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:41:12 np0005464891 podman[266651]: 2025-10-01 16:41:12.670376562 +0000 UTC m=+0.157946640 container attach 75a4e835fedbf50daf79c0f12339a7faf69fb4f9159c4fa1c1276500ad3f7472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct  1 12:41:12 np0005464891 magical_rhodes[266667]: 167 167
Oct  1 12:41:12 np0005464891 systemd[1]: libpod-75a4e835fedbf50daf79c0f12339a7faf69fb4f9159c4fa1c1276500ad3f7472.scope: Deactivated successfully.
Oct  1 12:41:12 np0005464891 podman[266651]: 2025-10-01 16:41:12.674798564 +0000 UTC m=+0.162368592 container died 75a4e835fedbf50daf79c0f12339a7faf69fb4f9159c4fa1c1276500ad3f7472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  1 12:41:12 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ee7dbb9bebe12dd86ef127876c5830e783261c0eebaca97420d1e1ba8dde9291-merged.mount: Deactivated successfully.
Oct  1 12:41:12 np0005464891 podman[266651]: 2025-10-01 16:41:12.725682133 +0000 UTC m=+0.213252161 container remove 75a4e835fedbf50daf79c0f12339a7faf69fb4f9159c4fa1c1276500ad3f7472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 12:41:12 np0005464891 systemd[1]: libpod-conmon-75a4e835fedbf50daf79c0f12339a7faf69fb4f9159c4fa1c1276500ad3f7472.scope: Deactivated successfully.
Oct  1 12:41:12 np0005464891 podman[266693]: 2025-10-01 16:41:12.937870972 +0000 UTC m=+0.059123256 container create 60165847c57fc8e4db158eb5f88649fe613ac790560fec74cdf46fb9ee78821d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_easley, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 12:41:12 np0005464891 systemd[1]: Started libpod-conmon-60165847c57fc8e4db158eb5f88649fe613ac790560fec74cdf46fb9ee78821d.scope.
Oct  1 12:41:13 np0005464891 podman[266693]: 2025-10-01 16:41:12.913444587 +0000 UTC m=+0.034696851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:41:13 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:41:13 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e73092fd7628b135dc9e767bc53c4c4075a2c4746d56e4f9a666d1db67e736d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:41:13 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e73092fd7628b135dc9e767bc53c4c4075a2c4746d56e4f9a666d1db67e736d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:41:13 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e73092fd7628b135dc9e767bc53c4c4075a2c4746d56e4f9a666d1db67e736d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:41:13 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e73092fd7628b135dc9e767bc53c4c4075a2c4746d56e4f9a666d1db67e736d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:41:13 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e73092fd7628b135dc9e767bc53c4c4075a2c4746d56e4f9a666d1db67e736d8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:41:13 np0005464891 podman[266693]: 2025-10-01 16:41:13.036701947 +0000 UTC m=+0.157954191 container init 60165847c57fc8e4db158eb5f88649fe613ac790560fec74cdf46fb9ee78821d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_easley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:41:13 np0005464891 podman[266693]: 2025-10-01 16:41:13.045266434 +0000 UTC m=+0.166518708 container start 60165847c57fc8e4db158eb5f88649fe613ac790560fec74cdf46fb9ee78821d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_easley, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:41:13 np0005464891 podman[266693]: 2025-10-01 16:41:13.049235383 +0000 UTC m=+0.170487657 container attach 60165847c57fc8e4db158eb5f88649fe613ac790560fec74cdf46fb9ee78821d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_easley, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:41:14 np0005464891 vibrant_easley[266709]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:41:14 np0005464891 vibrant_easley[266709]: --> relative data size: 1.0
Oct  1 12:41:14 np0005464891 vibrant_easley[266709]: --> All data devices are unavailable
Oct  1 12:41:14 np0005464891 systemd[1]: libpod-60165847c57fc8e4db158eb5f88649fe613ac790560fec74cdf46fb9ee78821d.scope: Deactivated successfully.
Oct  1 12:41:14 np0005464891 systemd[1]: libpod-60165847c57fc8e4db158eb5f88649fe613ac790560fec74cdf46fb9ee78821d.scope: Consumed 1.071s CPU time.
Oct  1 12:41:14 np0005464891 podman[266693]: 2025-10-01 16:41:14.169402853 +0000 UTC m=+1.290655117 container died 60165847c57fc8e4db158eb5f88649fe613ac790560fec74cdf46fb9ee78821d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_easley, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Oct  1 12:41:14 np0005464891 systemd[1]: var-lib-containers-storage-overlay-e73092fd7628b135dc9e767bc53c4c4075a2c4746d56e4f9a666d1db67e736d8-merged.mount: Deactivated successfully.
Oct  1 12:41:14 np0005464891 podman[266693]: 2025-10-01 16:41:14.406580675 +0000 UTC m=+1.527832909 container remove 60165847c57fc8e4db158eb5f88649fe613ac790560fec74cdf46fb9ee78821d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_easley, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 12:41:14 np0005464891 systemd[1]: libpod-conmon-60165847c57fc8e4db158eb5f88649fe613ac790560fec74cdf46fb9ee78821d.scope: Deactivated successfully.
Oct  1 12:41:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v932: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.7 MiB/s wr, 15 op/s
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:41:14.691602) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336874691677, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 859, "num_deletes": 255, "total_data_size": 1122016, "memory_usage": 1146224, "flush_reason": "Manual Compaction"}
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336874712220, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1111408, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18733, "largest_seqno": 19591, "table_properties": {"data_size": 1107021, "index_size": 2039, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9190, "raw_average_key_size": 18, "raw_value_size": 1098144, "raw_average_value_size": 2222, "num_data_blocks": 92, "num_entries": 494, "num_filter_entries": 494, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336804, "oldest_key_time": 1759336804, "file_creation_time": 1759336874, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 20794 microseconds, and 3746 cpu microseconds.
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:41:14.712402) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1111408 bytes OK
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:41:14.712528) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:41:14.714179) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:41:14.714202) EVENT_LOG_v1 {"time_micros": 1759336874714194, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:41:14.714228) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1117752, prev total WAL file size 1117752, number of live WAL files 2.
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:41:14.735160) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1085KB)], [44(6029KB)]
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336874735225, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7285584, "oldest_snapshot_seqno": -1}
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4174 keys, 7152929 bytes, temperature: kUnknown
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336874772716, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7152929, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7124227, "index_size": 17136, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10501, "raw_key_size": 103457, "raw_average_key_size": 24, "raw_value_size": 7047764, "raw_average_value_size": 1688, "num_data_blocks": 718, "num_entries": 4174, "num_filter_entries": 4174, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759336874, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:41:14.773040) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7152929 bytes
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:41:14.774280) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 193.8 rd, 190.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 5.9 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(13.0) write-amplify(6.4) OK, records in: 4700, records dropped: 526 output_compression: NoCompression
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:41:14.774306) EVENT_LOG_v1 {"time_micros": 1759336874774294, "job": 22, "event": "compaction_finished", "compaction_time_micros": 37591, "compaction_time_cpu_micros": 16445, "output_level": 6, "num_output_files": 1, "total_output_size": 7152929, "num_input_records": 4700, "num_output_records": 4174, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336874774686, "job": 22, "event": "table_file_deletion", "file_number": 46}
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336874776494, "job": 22, "event": "table_file_deletion", "file_number": 44}
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:41:14.715589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:41:14.776550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:41:14.776557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:41:14.776558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:41:14.776560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:41:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:41:14.776561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:41:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:41:14.868 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:41:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:41:14.871 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 12:41:15 np0005464891 podman[266891]: 2025-10-01 16:41:15.06610866 +0000 UTC m=+0.050309843 container create 35e6800da8de9190dfecc70e935a3a929ed8eacfc411daffdd761cb099c80803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_rubin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:41:15 np0005464891 systemd[1]: Started libpod-conmon-35e6800da8de9190dfecc70e935a3a929ed8eacfc411daffdd761cb099c80803.scope.
Oct  1 12:41:15 np0005464891 podman[266891]: 2025-10-01 16:41:15.046693613 +0000 UTC m=+0.030894856 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:41:15 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:41:15 np0005464891 podman[266891]: 2025-10-01 16:41:15.16658824 +0000 UTC m=+0.150789503 container init 35e6800da8de9190dfecc70e935a3a929ed8eacfc411daffdd761cb099c80803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 12:41:15 np0005464891 podman[266891]: 2025-10-01 16:41:15.175938808 +0000 UTC m=+0.160139981 container start 35e6800da8de9190dfecc70e935a3a929ed8eacfc411daffdd761cb099c80803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_rubin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  1 12:41:15 np0005464891 podman[266891]: 2025-10-01 16:41:15.178722215 +0000 UTC m=+0.162923428 container attach 35e6800da8de9190dfecc70e935a3a929ed8eacfc411daffdd761cb099c80803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 12:41:15 np0005464891 awesome_rubin[266908]: 167 167
Oct  1 12:41:15 np0005464891 podman[266891]: 2025-10-01 16:41:15.188603609 +0000 UTC m=+0.172804792 container died 35e6800da8de9190dfecc70e935a3a929ed8eacfc411daffdd761cb099c80803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_rubin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:41:15 np0005464891 systemd[1]: libpod-35e6800da8de9190dfecc70e935a3a929ed8eacfc411daffdd761cb099c80803.scope: Deactivated successfully.
Oct  1 12:41:15 np0005464891 systemd[1]: var-lib-containers-storage-overlay-e13e0b870f7cda19bdc7f2820fae0d84089a01e0f4d0a372c4bfefab126b303c-merged.mount: Deactivated successfully.
Oct  1 12:41:15 np0005464891 podman[266891]: 2025-10-01 16:41:15.241345758 +0000 UTC m=+0.225546961 container remove 35e6800da8de9190dfecc70e935a3a929ed8eacfc411daffdd761cb099c80803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  1 12:41:15 np0005464891 systemd[1]: libpod-conmon-35e6800da8de9190dfecc70e935a3a929ed8eacfc411daffdd761cb099c80803.scope: Deactivated successfully.
Oct  1 12:41:15 np0005464891 podman[266930]: 2025-10-01 16:41:15.450307369 +0000 UTC m=+0.052995908 container create 4f593e694f9d8e86257864d1237fb94b11b0e4dfcafedd991dc23b51fbf9a55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lewin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct  1 12:41:15 np0005464891 systemd[1]: Started libpod-conmon-4f593e694f9d8e86257864d1237fb94b11b0e4dfcafedd991dc23b51fbf9a55b.scope.
Oct  1 12:41:15 np0005464891 podman[266930]: 2025-10-01 16:41:15.427353894 +0000 UTC m=+0.030042413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:41:15 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:41:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95d220ccae734a065445eac8961c3cf481f22ae41740c5510eb7210daead2001/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:41:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95d220ccae734a065445eac8961c3cf481f22ae41740c5510eb7210daead2001/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:41:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95d220ccae734a065445eac8961c3cf481f22ae41740c5510eb7210daead2001/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:41:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95d220ccae734a065445eac8961c3cf481f22ae41740c5510eb7210daead2001/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:41:15 np0005464891 podman[266930]: 2025-10-01 16:41:15.547438176 +0000 UTC m=+0.150126735 container init 4f593e694f9d8e86257864d1237fb94b11b0e4dfcafedd991dc23b51fbf9a55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lewin, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:41:15 np0005464891 podman[266930]: 2025-10-01 16:41:15.557224826 +0000 UTC m=+0.159913365 container start 4f593e694f9d8e86257864d1237fb94b11b0e4dfcafedd991dc23b51fbf9a55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lewin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:41:15 np0005464891 podman[266930]: 2025-10-01 16:41:15.561717811 +0000 UTC m=+0.164406410 container attach 4f593e694f9d8e86257864d1237fb94b11b0e4dfcafedd991dc23b51fbf9a55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lewin, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:41:15 np0005464891 nova_compute[259907]: 2025-10-01 16:41:15.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:41:15 np0005464891 nova_compute[259907]: 2025-10-01 16:41:15.806 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 12:41:15 np0005464891 nova_compute[259907]: 2025-10-01 16:41:15.806 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 12:41:15 np0005464891 nova_compute[259907]: 2025-10-01 16:41:15.820 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]: {
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:    "0": [
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:        {
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "devices": [
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "/dev/loop3"
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            ],
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "lv_name": "ceph_lv0",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "lv_size": "21470642176",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "name": "ceph_lv0",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "tags": {
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.cluster_name": "ceph",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.crush_device_class": "",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.encrypted": "0",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.osd_id": "0",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.type": "block",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.vdo": "0"
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            },
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "type": "block",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "vg_name": "ceph_vg0"
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:        }
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:    ],
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:    "1": [
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:        {
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "devices": [
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "/dev/loop4"
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            ],
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "lv_name": "ceph_lv1",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "lv_size": "21470642176",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "name": "ceph_lv1",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "tags": {
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.cluster_name": "ceph",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.crush_device_class": "",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.encrypted": "0",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.osd_id": "1",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.type": "block",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.vdo": "0"
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            },
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "type": "block",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "vg_name": "ceph_vg1"
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:        }
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:    ],
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:    "2": [
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:        {
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "devices": [
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "/dev/loop5"
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            ],
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "lv_name": "ceph_lv2",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "lv_size": "21470642176",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "name": "ceph_lv2",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "tags": {
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.cluster_name": "ceph",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.crush_device_class": "",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.encrypted": "0",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.osd_id": "2",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.type": "block",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:                "ceph.vdo": "0"
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            },
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "type": "block",
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:            "vg_name": "ceph_vg2"
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:        }
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]:    ]
Oct  1 12:41:16 np0005464891 optimistic_lewin[266946]: }
Oct  1 12:41:16 np0005464891 systemd[1]: libpod-4f593e694f9d8e86257864d1237fb94b11b0e4dfcafedd991dc23b51fbf9a55b.scope: Deactivated successfully.
Oct  1 12:41:16 np0005464891 podman[266930]: 2025-10-01 16:41:16.358827293 +0000 UTC m=+0.961515822 container died 4f593e694f9d8e86257864d1237fb94b11b0e4dfcafedd991dc23b51fbf9a55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lewin, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:41:16 np0005464891 systemd[1]: var-lib-containers-storage-overlay-95d220ccae734a065445eac8961c3cf481f22ae41740c5510eb7210daead2001-merged.mount: Deactivated successfully.
Oct  1 12:41:16 np0005464891 podman[266930]: 2025-10-01 16:41:16.421192048 +0000 UTC m=+1.023880557 container remove 4f593e694f9d8e86257864d1237fb94b11b0e4dfcafedd991dc23b51fbf9a55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  1 12:41:16 np0005464891 systemd[1]: libpod-conmon-4f593e694f9d8e86257864d1237fb94b11b0e4dfcafedd991dc23b51fbf9a55b.scope: Deactivated successfully.
Oct  1 12:41:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v933: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:41:16 np0005464891 nova_compute[259907]: 2025-10-01 16:41:16.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:41:16 np0005464891 nova_compute[259907]: 2025-10-01 16:41:16.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:41:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:41:16.874 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:41:17 np0005464891 podman[267110]: 2025-10-01 16:41:17.173085609 +0000 UTC m=+0.044556574 container create 9354c262a50f9760e1de0dc1104c3308a0a664744a3fb1a813b7b31c5b4d72dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_heyrovsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 12:41:17 np0005464891 systemd[1]: Started libpod-conmon-9354c262a50f9760e1de0dc1104c3308a0a664744a3fb1a813b7b31c5b4d72dd.scope.
Oct  1 12:41:17 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:41:17 np0005464891 podman[267110]: 2025-10-01 16:41:17.152117999 +0000 UTC m=+0.023588974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:41:17 np0005464891 podman[267110]: 2025-10-01 16:41:17.252116125 +0000 UTC m=+0.123587100 container init 9354c262a50f9760e1de0dc1104c3308a0a664744a3fb1a813b7b31c5b4d72dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 12:41:17 np0005464891 podman[267110]: 2025-10-01 16:41:17.263023007 +0000 UTC m=+0.134493962 container start 9354c262a50f9760e1de0dc1104c3308a0a664744a3fb1a813b7b31c5b4d72dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_heyrovsky, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:41:17 np0005464891 podman[267110]: 2025-10-01 16:41:17.266905725 +0000 UTC m=+0.138376700 container attach 9354c262a50f9760e1de0dc1104c3308a0a664744a3fb1a813b7b31c5b4d72dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_heyrovsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 12:41:17 np0005464891 confident_heyrovsky[267127]: 167 167
Oct  1 12:41:17 np0005464891 systemd[1]: libpod-9354c262a50f9760e1de0dc1104c3308a0a664744a3fb1a813b7b31c5b4d72dd.scope: Deactivated successfully.
Oct  1 12:41:17 np0005464891 podman[267110]: 2025-10-01 16:41:17.271385578 +0000 UTC m=+0.142856533 container died 9354c262a50f9760e1de0dc1104c3308a0a664744a3fb1a813b7b31c5b4d72dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_heyrovsky, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:41:17 np0005464891 systemd[1]: var-lib-containers-storage-overlay-c01b4e1314b8b2fc12accb97ff1f55b67096b7c05c4ca61ca7a81168e6afc003-merged.mount: Deactivated successfully.
Oct  1 12:41:17 np0005464891 podman[267110]: 2025-10-01 16:41:17.331842281 +0000 UTC m=+0.203313256 container remove 9354c262a50f9760e1de0dc1104c3308a0a664744a3fb1a813b7b31c5b4d72dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 12:41:17 np0005464891 systemd[1]: libpod-conmon-9354c262a50f9760e1de0dc1104c3308a0a664744a3fb1a813b7b31c5b4d72dd.scope: Deactivated successfully.
Oct  1 12:41:17 np0005464891 podman[267129]: 2025-10-01 16:41:17.359968699 +0000 UTC m=+0.121000128 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  1 12:41:17 np0005464891 podman[267175]: 2025-10-01 16:41:17.520190452 +0000 UTC m=+0.060097613 container create 58096e5b6b75a60da4fc87ead1b7b47a91e89b6b32a06cba42c70d653107f535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kilby, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:41:17 np0005464891 systemd[1]: Started libpod-conmon-58096e5b6b75a60da4fc87ead1b7b47a91e89b6b32a06cba42c70d653107f535.scope.
Oct  1 12:41:17 np0005464891 podman[267175]: 2025-10-01 16:41:17.489585815 +0000 UTC m=+0.029493006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:41:17 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:41:17 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0262ab6d8773236e4d2c78e837e47bf5ddacacf96c2a6e64d4511de0d308b67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:41:17 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0262ab6d8773236e4d2c78e837e47bf5ddacacf96c2a6e64d4511de0d308b67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:41:17 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0262ab6d8773236e4d2c78e837e47bf5ddacacf96c2a6e64d4511de0d308b67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:41:17 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0262ab6d8773236e4d2c78e837e47bf5ddacacf96c2a6e64d4511de0d308b67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:41:17 np0005464891 podman[267175]: 2025-10-01 16:41:17.622972216 +0000 UTC m=+0.162879397 container init 58096e5b6b75a60da4fc87ead1b7b47a91e89b6b32a06cba42c70d653107f535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 12:41:17 np0005464891 podman[267175]: 2025-10-01 16:41:17.630820613 +0000 UTC m=+0.170727774 container start 58096e5b6b75a60da4fc87ead1b7b47a91e89b6b32a06cba42c70d653107f535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kilby, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:41:17 np0005464891 podman[267175]: 2025-10-01 16:41:17.642896737 +0000 UTC m=+0.182803908 container attach 58096e5b6b75a60da4fc87ead1b7b47a91e89b6b32a06cba42c70d653107f535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 12:41:17 np0005464891 nova_compute[259907]: 2025-10-01 16:41:17.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:41:17 np0005464891 nova_compute[259907]: 2025-10-01 16:41:17.806 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:41:17 np0005464891 nova_compute[259907]: 2025-10-01 16:41:17.806 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 12:41:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Oct  1 12:41:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Oct  1 12:41:18 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Oct  1 12:41:18 np0005464891 frosty_kilby[267191]: {
Oct  1 12:41:18 np0005464891 frosty_kilby[267191]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:41:18 np0005464891 frosty_kilby[267191]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:41:18 np0005464891 frosty_kilby[267191]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:41:18 np0005464891 frosty_kilby[267191]:        "osd_id": 2,
Oct  1 12:41:18 np0005464891 frosty_kilby[267191]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:41:18 np0005464891 frosty_kilby[267191]:        "type": "bluestore"
Oct  1 12:41:18 np0005464891 frosty_kilby[267191]:    },
Oct  1 12:41:18 np0005464891 frosty_kilby[267191]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:41:18 np0005464891 frosty_kilby[267191]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:41:18 np0005464891 frosty_kilby[267191]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:41:18 np0005464891 frosty_kilby[267191]:        "osd_id": 0,
Oct  1 12:41:18 np0005464891 frosty_kilby[267191]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:41:18 np0005464891 frosty_kilby[267191]:        "type": "bluestore"
Oct  1 12:41:18 np0005464891 frosty_kilby[267191]:    },
Oct  1 12:41:18 np0005464891 frosty_kilby[267191]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:41:18 np0005464891 frosty_kilby[267191]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:41:18 np0005464891 frosty_kilby[267191]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:41:18 np0005464891 frosty_kilby[267191]:        "osd_id": 1,
Oct  1 12:41:18 np0005464891 frosty_kilby[267191]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:41:18 np0005464891 frosty_kilby[267191]:        "type": "bluestore"
Oct  1 12:41:18 np0005464891 frosty_kilby[267191]:    }
Oct  1 12:41:18 np0005464891 frosty_kilby[267191]: }
Oct  1 12:41:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v935: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:41:18 np0005464891 systemd[1]: libpod-58096e5b6b75a60da4fc87ead1b7b47a91e89b6b32a06cba42c70d653107f535.scope: Deactivated successfully.
Oct  1 12:41:18 np0005464891 systemd[1]: libpod-58096e5b6b75a60da4fc87ead1b7b47a91e89b6b32a06cba42c70d653107f535.scope: Consumed 1.053s CPU time.
Oct  1 12:41:18 np0005464891 podman[267175]: 2025-10-01 16:41:18.686328693 +0000 UTC m=+1.226235904 container died 58096e5b6b75a60da4fc87ead1b7b47a91e89b6b32a06cba42c70d653107f535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  1 12:41:18 np0005464891 systemd[1]: var-lib-containers-storage-overlay-e0262ab6d8773236e4d2c78e837e47bf5ddacacf96c2a6e64d4511de0d308b67-merged.mount: Deactivated successfully.
Oct  1 12:41:18 np0005464891 nova_compute[259907]: 2025-10-01 16:41:18.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:41:18 np0005464891 nova_compute[259907]: 2025-10-01 16:41:18.806 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:41:18 np0005464891 podman[267175]: 2025-10-01 16:41:18.829788182 +0000 UTC m=+1.369695343 container remove 58096e5b6b75a60da4fc87ead1b7b47a91e89b6b32a06cba42c70d653107f535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kilby, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 12:41:18 np0005464891 nova_compute[259907]: 2025-10-01 16:41:18.839 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:41:18 np0005464891 nova_compute[259907]: 2025-10-01 16:41:18.840 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:41:18 np0005464891 nova_compute[259907]: 2025-10-01 16:41:18.841 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:41:18 np0005464891 nova_compute[259907]: 2025-10-01 16:41:18.841 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 12:41:18 np0005464891 nova_compute[259907]: 2025-10-01 16:41:18.842 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:41:18 np0005464891 systemd[1]: libpod-conmon-58096e5b6b75a60da4fc87ead1b7b47a91e89b6b32a06cba42c70d653107f535.scope: Deactivated successfully.
Oct  1 12:41:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:41:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:41:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:41:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:41:18 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 4f29588d-6354-487f-b8ea-fdcb4753bd7c does not exist
Oct  1 12:41:18 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev dc54d87c-376d-45c9-95ca-19702f1abafa does not exist
Oct  1 12:41:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Oct  1 12:41:19 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:41:19 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:41:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Oct  1 12:41:19 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Oct  1 12:41:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:41:19 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/689960238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:41:19 np0005464891 nova_compute[259907]: 2025-10-01 16:41:19.358 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:41:19 np0005464891 nova_compute[259907]: 2025-10-01 16:41:19.529 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:41:19 np0005464891 nova_compute[259907]: 2025-10-01 16:41:19.530 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5114MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 12:41:19 np0005464891 nova_compute[259907]: 2025-10-01 16:41:19.531 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:41:19 np0005464891 nova_compute[259907]: 2025-10-01 16:41:19.531 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:41:19 np0005464891 nova_compute[259907]: 2025-10-01 16:41:19.619 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 12:41:19 np0005464891 nova_compute[259907]: 2025-10-01 16:41:19.619 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 12:41:19 np0005464891 nova_compute[259907]: 2025-10-01 16:41:19.651 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:41:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:41:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:41:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3717107369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:41:20 np0005464891 nova_compute[259907]: 2025-10-01 16:41:20.126 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:41:20 np0005464891 nova_compute[259907]: 2025-10-01 16:41:20.133 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:41:20 np0005464891 nova_compute[259907]: 2025-10-01 16:41:20.218 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:41:20 np0005464891 nova_compute[259907]: 2025-10-01 16:41:20.220 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 12:41:20 np0005464891 nova_compute[259907]: 2025-10-01 16:41:20.220 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:41:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v937: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:41:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Oct  1 12:41:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Oct  1 12:41:21 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Oct  1 12:41:21 np0005464891 nova_compute[259907]: 2025-10-01 16:41:21.215 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:41:21 np0005464891 nova_compute[259907]: 2025-10-01 16:41:21.237 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:41:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:41:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:41:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:41:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:41:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:41:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:41:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:41:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:41:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:41:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:41:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  1 12:41:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:41:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:41:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:41:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:41:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:41:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:41:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:41:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:41:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:41:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:41:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:41:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:41:21 np0005464891 podman[267333]: 2025-10-01 16:41:21.987974843 +0000 UTC m=+0.092480110 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:41:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v939: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:41:22 np0005464891 nova_compute[259907]: 2025-10-01 16:41:22.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:41:23 np0005464891 podman[267353]: 2025-10-01 16:41:23.996857538 +0000 UTC m=+0.101845459 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  1 12:41:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v940: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 1.5 KiB/s wr, 13 op/s
Oct  1 12:41:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:41:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v941: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 1.2 KiB/s wr, 11 op/s
Oct  1 12:41:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Oct  1 12:41:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Oct  1 12:41:26 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Oct  1 12:41:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v943: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.0 KiB/s wr, 25 op/s
Oct  1 12:41:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:41:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Oct  1 12:41:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Oct  1 12:41:30 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Oct  1 12:41:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v945: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.0 KiB/s wr, 25 op/s
Oct  1 12:41:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:41:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/155702580' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:41:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:41:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/155702580' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:41:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v946: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 767 B/s wr, 13 op/s
Oct  1 12:41:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Oct  1 12:41:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Oct  1 12:41:33 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Oct  1 12:41:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v948: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.7 KiB/s wr, 36 op/s
Oct  1 12:41:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:41:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v949: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 895 B/s wr, 21 op/s
Oct  1 12:41:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:41:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/454369622' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:41:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:41:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/454369622' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:41:36 np0005464891 podman[267374]: 2025-10-01 16:41:36.973384393 +0000 UTC m=+0.083964034 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent)
Oct  1 12:41:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v950: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.6 KiB/s wr, 49 op/s
Oct  1 12:41:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:41:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v951: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.2 KiB/s wr, 41 op/s
Oct  1 12:41:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:41:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:41:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:41:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:41:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:41:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:41:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v952: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.2 KiB/s wr, 41 op/s
Oct  1 12:41:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:41:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3955230803' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:41:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:41:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3955230803' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:41:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v953: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.8 KiB/s wr, 41 op/s
Oct  1 12:41:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:41:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Oct  1 12:41:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Oct  1 12:41:45 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Oct  1 12:41:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v955: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.0 KiB/s wr, 45 op/s
Oct  1 12:41:48 np0005464891 podman[267393]: 2025-10-01 16:41:48.046708556 +0000 UTC m=+0.147388829 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:41:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v956: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.1 KiB/s wr, 38 op/s
Oct  1 12:41:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:41:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/260578016' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:41:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:41:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/260578016' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:41:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:41:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Oct  1 12:41:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Oct  1 12:41:49 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Oct  1 12:41:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v958: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.4 KiB/s wr, 47 op/s
Oct  1 12:41:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v959: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 767 B/s wr, 22 op/s
Oct  1 12:41:52 np0005464891 podman[267417]: 2025-10-01 16:41:52.957912894 +0000 UTC m=+0.068596819 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0)
Oct  1 12:41:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v960: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 30 KiB/s wr, 37 op/s
Oct  1 12:41:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 12:41:54 np0005464891 podman[267439]: 2025-10-01 16:41:54.982801892 +0000 UTC m=+0.079982644 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001)
Oct  1 12:41:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Oct  1 12:41:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Oct  1 12:41:55 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Oct  1 12:41:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v962: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 34 KiB/s wr, 20 op/s
Oct  1 12:41:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Oct  1 12:41:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Oct  1 12:41:56 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Oct  1 12:41:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v964: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 35 KiB/s wr, 42 op/s
Oct  1 12:41:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Oct  1 12:41:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Oct  1 12:41:58 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Oct  1 12:41:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:41:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Oct  1 12:41:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Oct  1 12:41:59 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Oct  1 12:42:00 np0005464891 nova_compute[259907]: 2025-10-01 16:42:00.333 2 DEBUG oslo_concurrency.lockutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Acquiring lock "c067f811-99a1-4d7a-a634-3a4c1db5830e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:42:00 np0005464891 nova_compute[259907]: 2025-10-01 16:42:00.334 2 DEBUG oslo_concurrency.lockutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lock "c067f811-99a1-4d7a-a634-3a4c1db5830e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:42:00 np0005464891 nova_compute[259907]: 2025-10-01 16:42:00.360 2 DEBUG nova.compute.manager [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 12:42:00 np0005464891 nova_compute[259907]: 2025-10-01 16:42:00.495 2 DEBUG oslo_concurrency.lockutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:42:00 np0005464891 nova_compute[259907]: 2025-10-01 16:42:00.496 2 DEBUG oslo_concurrency.lockutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:42:00 np0005464891 nova_compute[259907]: 2025-10-01 16:42:00.506 2 DEBUG nova.virt.hardware [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 12:42:00 np0005464891 nova_compute[259907]: 2025-10-01 16:42:00.507 2 INFO nova.compute.claims [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 12:42:00 np0005464891 nova_compute[259907]: 2025-10-01 16:42:00.629 2 DEBUG oslo_concurrency.processutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:42:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v967: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.5 KiB/s wr, 36 op/s
Oct  1 12:42:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:42:01 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/743728658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:42:01 np0005464891 nova_compute[259907]: 2025-10-01 16:42:01.105 2 DEBUG oslo_concurrency.processutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:42:01 np0005464891 nova_compute[259907]: 2025-10-01 16:42:01.113 2 DEBUG nova.compute.provider_tree [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:42:01 np0005464891 nova_compute[259907]: 2025-10-01 16:42:01.131 2 DEBUG nova.scheduler.client.report [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:42:01 np0005464891 nova_compute[259907]: 2025-10-01 16:42:01.156 2 DEBUG oslo_concurrency.lockutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:42:01 np0005464891 nova_compute[259907]: 2025-10-01 16:42:01.157 2 DEBUG nova.compute.manager [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 12:42:01 np0005464891 nova_compute[259907]: 2025-10-01 16:42:01.212 2 DEBUG nova.compute.manager [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 12:42:01 np0005464891 nova_compute[259907]: 2025-10-01 16:42:01.213 2 DEBUG nova.network.neutron [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 12:42:01 np0005464891 nova_compute[259907]: 2025-10-01 16:42:01.241 2 INFO nova.virt.libvirt.driver [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 12:42:01 np0005464891 nova_compute[259907]: 2025-10-01 16:42:01.265 2 DEBUG nova.compute.manager [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 12:42:01 np0005464891 nova_compute[259907]: 2025-10-01 16:42:01.370 2 DEBUG nova.compute.manager [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 12:42:01 np0005464891 nova_compute[259907]: 2025-10-01 16:42:01.373 2 DEBUG nova.virt.libvirt.driver [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 12:42:01 np0005464891 nova_compute[259907]: 2025-10-01 16:42:01.373 2 INFO nova.virt.libvirt.driver [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Creating image(s)#033[00m
Oct  1 12:42:01 np0005464891 nova_compute[259907]: 2025-10-01 16:42:01.407 2 DEBUG nova.storage.rbd_utils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] rbd image c067f811-99a1-4d7a-a634-3a4c1db5830e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:42:01 np0005464891 nova_compute[259907]: 2025-10-01 16:42:01.441 2 DEBUG nova.storage.rbd_utils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] rbd image c067f811-99a1-4d7a-a634-3a4c1db5830e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:42:01 np0005464891 nova_compute[259907]: 2025-10-01 16:42:01.474 2 DEBUG nova.storage.rbd_utils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] rbd image c067f811-99a1-4d7a-a634-3a4c1db5830e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:42:01 np0005464891 nova_compute[259907]: 2025-10-01 16:42:01.480 2 DEBUG oslo_concurrency.lockutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Acquiring lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:42:01 np0005464891 nova_compute[259907]: 2025-10-01 16:42:01.483 2 DEBUG oslo_concurrency.lockutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:42:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:42:01 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4005020933' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:42:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:42:01 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4005020933' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:42:01 np0005464891 nova_compute[259907]: 2025-10-01 16:42:01.746 2 WARNING oslo_policy.policy [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Oct  1 12:42:01 np0005464891 nova_compute[259907]: 2025-10-01 16:42:01.747 2 WARNING oslo_policy.policy [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Oct  1 12:42:01 np0005464891 nova_compute[259907]: 2025-10-01 16:42:01.751 2 DEBUG nova.policy [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f99f9a421d8c468bb290009ac8393742', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd89473c2be684cd0bea1fd04915d5d1b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 12:42:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Oct  1 12:42:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Oct  1 12:42:01 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Oct  1 12:42:02 np0005464891 nova_compute[259907]: 2025-10-01 16:42:02.203 2 DEBUG nova.virt.libvirt.imagebackend [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Image locations are: [{'url': 'rbd://6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5/images/f01c1e7c-fea3-4433-a44a-d71153552c78/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5/images/f01c1e7c-fea3-4433-a44a-d71153552c78/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  1 12:42:02 np0005464891 nova_compute[259907]: 2025-10-01 16:42:02.630 2 DEBUG nova.network.neutron [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Successfully created port: a5d23fa4-4991-45da-a2a2-84f66c06fcee _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 12:42:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v969: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 KiB/s wr, 29 op/s
Oct  1 12:42:03 np0005464891 nova_compute[259907]: 2025-10-01 16:42:03.141 2 DEBUG oslo_concurrency.processutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:42:03 np0005464891 nova_compute[259907]: 2025-10-01 16:42:03.239 2 DEBUG oslo_concurrency.processutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa.part --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:42:03 np0005464891 nova_compute[259907]: 2025-10-01 16:42:03.241 2 DEBUG nova.virt.images [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] f01c1e7c-fea3-4433-a44a-d71153552c78 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Oct  1 12:42:03 np0005464891 nova_compute[259907]: 2025-10-01 16:42:03.245 2 DEBUG nova.privsep.utils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct  1 12:42:03 np0005464891 nova_compute[259907]: 2025-10-01 16:42:03.245 2 DEBUG oslo_concurrency.processutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa.part /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:42:03 np0005464891 nova_compute[259907]: 2025-10-01 16:42:03.493 2 DEBUG oslo_concurrency.processutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa.part /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa.converted" returned: 0 in 0.248s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:42:03 np0005464891 nova_compute[259907]: 2025-10-01 16:42:03.502 2 DEBUG oslo_concurrency.processutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:42:03 np0005464891 nova_compute[259907]: 2025-10-01 16:42:03.562 2 DEBUG oslo_concurrency.processutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa.converted --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:42:03 np0005464891 nova_compute[259907]: 2025-10-01 16:42:03.564 2 DEBUG oslo_concurrency.lockutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:42:03 np0005464891 nova_compute[259907]: 2025-10-01 16:42:03.595 2 DEBUG nova.storage.rbd_utils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] rbd image c067f811-99a1-4d7a-a634-3a4c1db5830e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:42:03 np0005464891 nova_compute[259907]: 2025-10-01 16:42:03.599 2 DEBUG oslo_concurrency.processutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa c067f811-99a1-4d7a-a634-3a4c1db5830e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:42:03 np0005464891 nova_compute[259907]: 2025-10-01 16:42:03.662 2 DEBUG nova.network.neutron [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Successfully updated port: a5d23fa4-4991-45da-a2a2-84f66c06fcee _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 12:42:03 np0005464891 nova_compute[259907]: 2025-10-01 16:42:03.700 2 DEBUG oslo_concurrency.lockutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Acquiring lock "refresh_cache-c067f811-99a1-4d7a-a634-3a4c1db5830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:42:03 np0005464891 nova_compute[259907]: 2025-10-01 16:42:03.700 2 DEBUG oslo_concurrency.lockutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Acquired lock "refresh_cache-c067f811-99a1-4d7a-a634-3a4c1db5830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:42:03 np0005464891 nova_compute[259907]: 2025-10-01 16:42:03.701 2 DEBUG nova.network.neutron [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 12:42:03 np0005464891 nova_compute[259907]: 2025-10-01 16:42:03.904 2 DEBUG nova.network.neutron [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 12:42:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Oct  1 12:42:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Oct  1 12:42:04 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Oct  1 12:42:04 np0005464891 nova_compute[259907]: 2025-10-01 16:42:04.212 2 DEBUG nova.compute.manager [req-3c987393-291f-4636-ba4c-8756413c95f2 req-8468b571-49d9-4993-9f3b-860887ddbaa7 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Received event network-changed-a5d23fa4-4991-45da-a2a2-84f66c06fcee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:42:04 np0005464891 nova_compute[259907]: 2025-10-01 16:42:04.212 2 DEBUG nova.compute.manager [req-3c987393-291f-4636-ba4c-8756413c95f2 req-8468b571-49d9-4993-9f3b-860887ddbaa7 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Refreshing instance network info cache due to event network-changed-a5d23fa4-4991-45da-a2a2-84f66c06fcee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:42:04 np0005464891 nova_compute[259907]: 2025-10-01 16:42:04.213 2 DEBUG oslo_concurrency.lockutils [req-3c987393-291f-4636-ba4c-8756413c95f2 req-8468b571-49d9-4993-9f3b-860887ddbaa7 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-c067f811-99a1-4d7a-a634-3a4c1db5830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:42:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v971: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 4.7 KiB/s wr, 125 op/s
Oct  1 12:42:04 np0005464891 nova_compute[259907]: 2025-10-01 16:42:04.776 2 DEBUG nova.network.neutron [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Updating instance_info_cache with network_info: [{"id": "a5d23fa4-4991-45da-a2a2-84f66c06fcee", "address": "fa:16:3e:9d:7a:be", "network": {"id": "36957630-badc-42b5-ad26-5cdca3a519c1", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-735724152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d89473c2be684cd0bea1fd04915d5d1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5d23fa4-49", "ovs_interfaceid": "a5d23fa4-4991-45da-a2a2-84f66c06fcee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:42:04 np0005464891 nova_compute[259907]: 2025-10-01 16:42:04.796 2 DEBUG oslo_concurrency.lockutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Releasing lock "refresh_cache-c067f811-99a1-4d7a-a634-3a4c1db5830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:42:04 np0005464891 nova_compute[259907]: 2025-10-01 16:42:04.797 2 DEBUG nova.compute.manager [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Instance network_info: |[{"id": "a5d23fa4-4991-45da-a2a2-84f66c06fcee", "address": "fa:16:3e:9d:7a:be", "network": {"id": "36957630-badc-42b5-ad26-5cdca3a519c1", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-735724152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d89473c2be684cd0bea1fd04915d5d1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5d23fa4-49", "ovs_interfaceid": "a5d23fa4-4991-45da-a2a2-84f66c06fcee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 12:42:04 np0005464891 nova_compute[259907]: 2025-10-01 16:42:04.797 2 DEBUG oslo_concurrency.lockutils [req-3c987393-291f-4636-ba4c-8756413c95f2 req-8468b571-49d9-4993-9f3b-860887ddbaa7 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-c067f811-99a1-4d7a-a634-3a4c1db5830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:42:04 np0005464891 nova_compute[259907]: 2025-10-01 16:42:04.798 2 DEBUG nova.network.neutron [req-3c987393-291f-4636-ba4c-8756413c95f2 req-8468b571-49d9-4993-9f3b-860887ddbaa7 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Refreshing network info cache for port a5d23fa4-4991-45da-a2a2-84f66c06fcee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:42:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:42:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Oct  1 12:42:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Oct  1 12:42:05 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.367 2 DEBUG oslo_concurrency.processutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa c067f811-99a1-4d7a-a634-3a4c1db5830e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.768s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.435 2 DEBUG nova.storage.rbd_utils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] resizing rbd image c067f811-99a1-4d7a-a634-3a4c1db5830e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.535 2 DEBUG nova.objects.instance [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lazy-loading 'migration_context' on Instance uuid c067f811-99a1-4d7a-a634-3a4c1db5830e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.557 2 DEBUG nova.virt.libvirt.driver [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.557 2 DEBUG nova.virt.libvirt.driver [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Ensure instance console log exists: /var/lib/nova/instances/c067f811-99a1-4d7a-a634-3a4c1db5830e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.558 2 DEBUG oslo_concurrency.lockutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.558 2 DEBUG oslo_concurrency.lockutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.559 2 DEBUG oslo_concurrency.lockutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.561 2 DEBUG nova.virt.libvirt.driver [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Start _get_guest_xml network_info=[{"id": "a5d23fa4-4991-45da-a2a2-84f66c06fcee", "address": "fa:16:3e:9d:7a:be", "network": {"id": "36957630-badc-42b5-ad26-5cdca3a519c1", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-735724152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d89473c2be684cd0bea1fd04915d5d1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5d23fa4-49", "ovs_interfaceid": "a5d23fa4-4991-45da-a2a2-84f66c06fcee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'image_id': 'f01c1e7c-fea3-4433-a44a-d71153552c78'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.567 2 WARNING nova.virt.libvirt.driver [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.571 2 DEBUG nova.virt.libvirt.host [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.572 2 DEBUG nova.virt.libvirt.host [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.575 2 DEBUG nova.virt.libvirt.host [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.575 2 DEBUG nova.virt.libvirt.host [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.576 2 DEBUG nova.virt.libvirt.driver [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.576 2 DEBUG nova.virt.hardware [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.577 2 DEBUG nova.virt.hardware [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.577 2 DEBUG nova.virt.hardware [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.577 2 DEBUG nova.virt.hardware [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.578 2 DEBUG nova.virt.hardware [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.578 2 DEBUG nova.virt.hardware [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.578 2 DEBUG nova.virt.hardware [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.578 2 DEBUG nova.virt.hardware [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.579 2 DEBUG nova.virt.hardware [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.579 2 DEBUG nova.virt.hardware [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.579 2 DEBUG nova.virt.hardware [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.584 2 DEBUG nova.privsep.utils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.584 2 DEBUG oslo_concurrency.processutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:42:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:42:05 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3029178637' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.978 2 DEBUG nova.network.neutron [req-3c987393-291f-4636-ba4c-8756413c95f2 req-8468b571-49d9-4993-9f3b-860887ddbaa7 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Updated VIF entry in instance network info cache for port a5d23fa4-4991-45da-a2a2-84f66c06fcee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.980 2 DEBUG nova.network.neutron [req-3c987393-291f-4636-ba4c-8756413c95f2 req-8468b571-49d9-4993-9f3b-860887ddbaa7 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Updating instance_info_cache with network_info: [{"id": "a5d23fa4-4991-45da-a2a2-84f66c06fcee", "address": "fa:16:3e:9d:7a:be", "network": {"id": "36957630-badc-42b5-ad26-5cdca3a519c1", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-735724152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d89473c2be684cd0bea1fd04915d5d1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5d23fa4-49", "ovs_interfaceid": "a5d23fa4-4991-45da-a2a2-84f66c06fcee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:42:05 np0005464891 nova_compute[259907]: 2025-10-01 16:42:05.992 2 DEBUG oslo_concurrency.processutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.022 2 DEBUG nova.storage.rbd_utils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] rbd image c067f811-99a1-4d7a-a634-3a4c1db5830e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.027 2 DEBUG oslo_concurrency.processutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.058 2 DEBUG oslo_concurrency.lockutils [req-3c987393-291f-4636-ba4c-8756413c95f2 req-8468b571-49d9-4993-9f3b-860887ddbaa7 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-c067f811-99a1-4d7a-a634-3a4c1db5830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:42:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:42:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1907926868' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.507 2 DEBUG oslo_concurrency.processutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.509 2 DEBUG nova.virt.libvirt.vif [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:41:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-EncryptedVolumesExtendAttachedTest-instance-1104832253',display_name='tempest-EncryptedVolumesExtendAttachedTest-instance-1104832253',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-encryptedvolumesextendattachedtest-instance-1104832253',id=1,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMnWoh78fN2kows9o5rLLFpLcGNgTIFnzTsvGOxoeM8MdE94J62h/z7pDu80RzC2YZ/BbirbdlveD3DsdRrs24cEjDPmZJ7NrjUJDw88Ghm5w5DmW0BLAwrnSuWpXfHayg==',key_name='tempest-keypair-588036381',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d89473c2be684cd0bea1fd04915d5d1b',ramdisk_id='',reservation_id='r-8w28e15x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-EncryptedVolumesExtendAttachedTest-2134626502',owner_user_name='tempest-EncryptedVolumesExtendAttachedTest-2134626502-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:42:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f99f9a421d8c468bb290009ac8393742',uuid=c067f811-99a1-4d7a-a634-3a4c1db5830e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a5d23fa4-4991-45da-a2a2-84f66c06fcee", "address": "fa:16:3e:9d:7a:be", "network": {"id": "36957630-badc-42b5-ad26-5cdca3a519c1", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-735724152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d89473c2be684cd0bea1fd04915d5d1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5d23fa4-49", "ovs_interfaceid": "a5d23fa4-4991-45da-a2a2-84f66c06fcee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.510 2 DEBUG nova.network.os_vif_util [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Converting VIF {"id": "a5d23fa4-4991-45da-a2a2-84f66c06fcee", "address": "fa:16:3e:9d:7a:be", "network": {"id": "36957630-badc-42b5-ad26-5cdca3a519c1", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-735724152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d89473c2be684cd0bea1fd04915d5d1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5d23fa4-49", "ovs_interfaceid": "a5d23fa4-4991-45da-a2a2-84f66c06fcee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.511 2 DEBUG nova.network.os_vif_util [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:7a:be,bridge_name='br-int',has_traffic_filtering=True,id=a5d23fa4-4991-45da-a2a2-84f66c06fcee,network=Network(36957630-badc-42b5-ad26-5cdca3a519c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5d23fa4-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.515 2 DEBUG nova.objects.instance [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lazy-loading 'pci_devices' on Instance uuid c067f811-99a1-4d7a-a634-3a4c1db5830e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.535 2 DEBUG nova.virt.libvirt.driver [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] End _get_guest_xml xml=<domain type="kvm">
Oct  1 12:42:06 np0005464891 nova_compute[259907]:  <uuid>c067f811-99a1-4d7a-a634-3a4c1db5830e</uuid>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:  <name>instance-00000001</name>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <nova:name>tempest-EncryptedVolumesExtendAttachedTest-instance-1104832253</nova:name>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 16:42:05</nova:creationTime>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 12:42:06 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:        <nova:user uuid="f99f9a421d8c468bb290009ac8393742">tempest-EncryptedVolumesExtendAttachedTest-2134626502-project-member</nova:user>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:        <nova:project uuid="d89473c2be684cd0bea1fd04915d5d1b">tempest-EncryptedVolumesExtendAttachedTest-2134626502</nova:project>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <nova:root type="image" uuid="f01c1e7c-fea3-4433-a44a-d71153552c78"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:        <nova:port uuid="a5d23fa4-4991-45da-a2a2-84f66c06fcee">
Oct  1 12:42:06 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <system>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <entry name="serial">c067f811-99a1-4d7a-a634-3a4c1db5830e</entry>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <entry name="uuid">c067f811-99a1-4d7a-a634-3a4c1db5830e</entry>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    </system>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:  <os>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:  </clock>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/c067f811-99a1-4d7a-a634-3a4c1db5830e_disk">
Oct  1 12:42:06 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:42:06 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/c067f811-99a1-4d7a-a634-3a4c1db5830e_disk.config">
Oct  1 12:42:06 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:42:06 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:9d:7a:be"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <target dev="tapa5d23fa4-49"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/c067f811-99a1-4d7a-a634-3a4c1db5830e/console.log" append="off"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    </serial>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <video>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 12:42:06 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 12:42:06 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:42:06 np0005464891 nova_compute[259907]: </domain>
Oct  1 12:42:06 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.538 2 DEBUG nova.compute.manager [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Preparing to wait for external event network-vif-plugged-a5d23fa4-4991-45da-a2a2-84f66c06fcee prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.538 2 DEBUG oslo_concurrency.lockutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Acquiring lock "c067f811-99a1-4d7a-a634-3a4c1db5830e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.539 2 DEBUG oslo_concurrency.lockutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lock "c067f811-99a1-4d7a-a634-3a4c1db5830e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.539 2 DEBUG oslo_concurrency.lockutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lock "c067f811-99a1-4d7a-a634-3a4c1db5830e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.540 2 DEBUG nova.virt.libvirt.vif [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:41:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-EncryptedVolumesExtendAttachedTest-instance-1104832253',display_name='tempest-EncryptedVolumesExtendAttachedTest-instance-1104832253',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-encryptedvolumesextendattachedtest-instance-1104832253',id=1,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMnWoh78fN2kows9o5rLLFpLcGNgTIFnzTsvGOxoeM8MdE94J62h/z7pDu80RzC2YZ/BbirbdlveD3DsdRrs24cEjDPmZJ7NrjUJDw88Ghm5w5DmW0BLAwrnSuWpXfHayg==',key_name='tempest-keypair-588036381',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d89473c2be684cd0bea1fd04915d5d1b',ramdisk_id='',reservation_id='r-8w28e15x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-EncryptedVolumesExtendAttachedTest-2134626502',owner_user_name='tempest-EncryptedVolumesExtendAttachedTest-2134626502-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:42:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f99f9a421d8c468bb290009ac8393742',uuid=c067f811-99a1-4d7a-a634-3a4c1db5830e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a5d23fa4-4991-45da-a2a2-84f66c06fcee", "address": "fa:16:3e:9d:7a:be", "network": {"id": "36957630-badc-42b5-ad26-5cdca3a519c1", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-735724152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d89473c2be684cd0bea1fd04915d5d1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5d23fa4-49", "ovs_interfaceid": "a5d23fa4-4991-45da-a2a2-84f66c06fcee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.541 2 DEBUG nova.network.os_vif_util [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Converting VIF {"id": "a5d23fa4-4991-45da-a2a2-84f66c06fcee", "address": "fa:16:3e:9d:7a:be", "network": {"id": "36957630-badc-42b5-ad26-5cdca3a519c1", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-735724152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d89473c2be684cd0bea1fd04915d5d1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5d23fa4-49", "ovs_interfaceid": "a5d23fa4-4991-45da-a2a2-84f66c06fcee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.542 2 DEBUG nova.network.os_vif_util [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:7a:be,bridge_name='br-int',has_traffic_filtering=True,id=a5d23fa4-4991-45da-a2a2-84f66c06fcee,network=Network(36957630-badc-42b5-ad26-5cdca3a519c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5d23fa4-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.542 2 DEBUG os_vif [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:7a:be,bridge_name='br-int',has_traffic_filtering=True,id=a5d23fa4-4991-45da-a2a2-84f66c06fcee,network=Network(36957630-badc-42b5-ad26-5cdca3a519c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5d23fa4-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.593 2 DEBUG ovsdbapp.backend.ovs_idl [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.594 2 DEBUG ovsdbapp.backend.ovs_idl [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.594 2 DEBUG ovsdbapp.backend.ovs_idl [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.611 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.612 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:42:06 np0005464891 nova_compute[259907]: 2025-10-01 16:42:06.613 2 INFO oslo.privsep.daemon [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp2kjghz_9/privsep.sock']#033[00m
Oct  1 12:42:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v973: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.5 KiB/s wr, 119 op/s
Oct  1 12:42:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Oct  1 12:42:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Oct  1 12:42:07 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Oct  1 12:42:07 np0005464891 nova_compute[259907]: 2025-10-01 16:42:07.359 2 INFO oslo.privsep.daemon [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct  1 12:42:07 np0005464891 nova_compute[259907]: 2025-10-01 16:42:07.219 609 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  1 12:42:07 np0005464891 nova_compute[259907]: 2025-10-01 16:42:07.226 609 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  1 12:42:07 np0005464891 nova_compute[259907]: 2025-10-01 16:42:07.230 609 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Oct  1 12:42:07 np0005464891 nova_compute[259907]: 2025-10-01 16:42:07.230 609 INFO oslo.privsep.daemon [-] privsep daemon running as pid 609#033[00m
Oct  1 12:42:07 np0005464891 nova_compute[259907]: 2025-10-01 16:42:07.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:07 np0005464891 nova_compute[259907]: 2025-10-01 16:42:07.733 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa5d23fa4-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:42:07 np0005464891 nova_compute[259907]: 2025-10-01 16:42:07.734 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa5d23fa4-49, col_values=(('external_ids', {'iface-id': 'a5d23fa4-4991-45da-a2a2-84f66c06fcee', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:7a:be', 'vm-uuid': 'c067f811-99a1-4d7a-a634-3a4c1db5830e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:42:07 np0005464891 nova_compute[259907]: 2025-10-01 16:42:07.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:07 np0005464891 NetworkManager[44940]: <info>  [1759336927.7382] manager: (tapa5d23fa4-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Oct  1 12:42:07 np0005464891 nova_compute[259907]: 2025-10-01 16:42:07.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:42:07 np0005464891 nova_compute[259907]: 2025-10-01 16:42:07.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:07 np0005464891 nova_compute[259907]: 2025-10-01 16:42:07.753 2 INFO os_vif [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:7a:be,bridge_name='br-int',has_traffic_filtering=True,id=a5d23fa4-4991-45da-a2a2-84f66c06fcee,network=Network(36957630-badc-42b5-ad26-5cdca3a519c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5d23fa4-49')#033[00m
Oct  1 12:42:07 np0005464891 nova_compute[259907]: 2025-10-01 16:42:07.822 2 DEBUG nova.virt.libvirt.driver [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:42:07 np0005464891 nova_compute[259907]: 2025-10-01 16:42:07.822 2 DEBUG nova.virt.libvirt.driver [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:42:07 np0005464891 nova_compute[259907]: 2025-10-01 16:42:07.823 2 DEBUG nova.virt.libvirt.driver [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] No VIF found with MAC fa:16:3e:9d:7a:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:42:07 np0005464891 nova_compute[259907]: 2025-10-01 16:42:07.824 2 INFO nova.virt.libvirt.driver [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Using config drive#033[00m
Oct  1 12:42:07 np0005464891 nova_compute[259907]: 2025-10-01 16:42:07.857 2 DEBUG nova.storage.rbd_utils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] rbd image c067f811-99a1-4d7a-a634-3a4c1db5830e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:42:07 np0005464891 podman[267749]: 2025-10-01 16:42:07.989662636 +0000 UTC m=+0.102692593 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  1 12:42:08 np0005464891 nova_compute[259907]: 2025-10-01 16:42:08.381 2 INFO nova.virt.libvirt.driver [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Creating config drive at /var/lib/nova/instances/c067f811-99a1-4d7a-a634-3a4c1db5830e/disk.config#033[00m
Oct  1 12:42:08 np0005464891 nova_compute[259907]: 2025-10-01 16:42:08.390 2 DEBUG oslo_concurrency.processutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c067f811-99a1-4d7a-a634-3a4c1db5830e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxjd0emih execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:42:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:42:08 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/700781243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:42:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:42:08 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/700781243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:42:08 np0005464891 nova_compute[259907]: 2025-10-01 16:42:08.538 2 DEBUG oslo_concurrency.processutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c067f811-99a1-4d7a-a634-3a4c1db5830e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxjd0emih" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:42:08 np0005464891 nova_compute[259907]: 2025-10-01 16:42:08.577 2 DEBUG nova.storage.rbd_utils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] rbd image c067f811-99a1-4d7a-a634-3a4c1db5830e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:42:08 np0005464891 nova_compute[259907]: 2025-10-01 16:42:08.582 2 DEBUG oslo_concurrency.processutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c067f811-99a1-4d7a-a634-3a4c1db5830e/disk.config c067f811-99a1-4d7a-a634-3a4c1db5830e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:42:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v975: 321 pgs: 321 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 259 op/s
Oct  1 12:42:08 np0005464891 nova_compute[259907]: 2025-10-01 16:42:08.721 2 DEBUG oslo_concurrency.processutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c067f811-99a1-4d7a-a634-3a4c1db5830e/disk.config c067f811-99a1-4d7a-a634-3a4c1db5830e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:42:08 np0005464891 nova_compute[259907]: 2025-10-01 16:42:08.722 2 INFO nova.virt.libvirt.driver [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Deleting local config drive /var/lib/nova/instances/c067f811-99a1-4d7a-a634-3a4c1db5830e/disk.config because it was imported into RBD.#033[00m
Oct  1 12:42:08 np0005464891 systemd[1]: Starting libvirt secret daemon...
Oct  1 12:42:08 np0005464891 systemd[1]: Started libvirt secret daemon.
Oct  1 12:42:08 np0005464891 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct  1 12:42:08 np0005464891 NetworkManager[44940]: <info>  [1759336928.8645] manager: (tapa5d23fa4-49): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Oct  1 12:42:08 np0005464891 kernel: tapa5d23fa4-49: entered promiscuous mode
Oct  1 12:42:08 np0005464891 nova_compute[259907]: 2025-10-01 16:42:08.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:08 np0005464891 ovn_controller[152409]: 2025-10-01T16:42:08Z|00027|binding|INFO|Claiming lport a5d23fa4-4991-45da-a2a2-84f66c06fcee for this chassis.
Oct  1 12:42:08 np0005464891 ovn_controller[152409]: 2025-10-01T16:42:08Z|00028|binding|INFO|a5d23fa4-4991-45da-a2a2-84f66c06fcee: Claiming fa:16:3e:9d:7a:be 10.100.0.8
Oct  1 12:42:08 np0005464891 nova_compute[259907]: 2025-10-01 16:42:08.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:08.885 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:7a:be 10.100.0.8'], port_security=['fa:16:3e:9d:7a:be 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c067f811-99a1-4d7a-a634-3a4c1db5830e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36957630-badc-42b5-ad26-5cdca3a519c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd89473c2be684cd0bea1fd04915d5d1b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dcdfa1ae-8f87-403d-9e7b-02099e78c20c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51762d74-115a-4625-9f3e-27d14d10d9f1, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=a5d23fa4-4991-45da-a2a2-84f66c06fcee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:42:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:08.887 162546 INFO neutron.agent.ovn.metadata.agent [-] Port a5d23fa4-4991-45da-a2a2-84f66c06fcee in datapath 36957630-badc-42b5-ad26-5cdca3a519c1 bound to our chassis#033[00m
Oct  1 12:42:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:08.889 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 36957630-badc-42b5-ad26-5cdca3a519c1#033[00m
Oct  1 12:42:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:08.891 162546 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp0buxz53r/privsep.sock']#033[00m
Oct  1 12:42:08 np0005464891 systemd-udevd[267841]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:42:08 np0005464891 NetworkManager[44940]: <info>  [1759336928.9195] device (tapa5d23fa4-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:42:08 np0005464891 NetworkManager[44940]: <info>  [1759336928.9201] device (tapa5d23fa4-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 12:42:08 np0005464891 systemd-machined[214891]: New machine qemu-1-instance-00000001.
Oct  1 12:42:08 np0005464891 nova_compute[259907]: 2025-10-01 16:42:08.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:08 np0005464891 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Oct  1 12:42:08 np0005464891 ovn_controller[152409]: 2025-10-01T16:42:08Z|00029|binding|INFO|Setting lport a5d23fa4-4991-45da-a2a2-84f66c06fcee ovn-installed in OVS
Oct  1 12:42:08 np0005464891 ovn_controller[152409]: 2025-10-01T16:42:08Z|00030|binding|INFO|Setting lport a5d23fa4-4991-45da-a2a2-84f66c06fcee up in Southbound
Oct  1 12:42:08 np0005464891 nova_compute[259907]: 2025-10-01 16:42:08.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Oct  1 12:42:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Oct  1 12:42:09 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Oct  1 12:42:09 np0005464891 nova_compute[259907]: 2025-10-01 16:42:09.376 2 DEBUG nova.compute.manager [req-c6c13a0f-c058-4150-bacc-49ff8377d051 req-3610e234-d82a-4a54-b726-7f1684cd4339 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Received event network-vif-plugged-a5d23fa4-4991-45da-a2a2-84f66c06fcee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:42:09 np0005464891 nova_compute[259907]: 2025-10-01 16:42:09.377 2 DEBUG oslo_concurrency.lockutils [req-c6c13a0f-c058-4150-bacc-49ff8377d051 req-3610e234-d82a-4a54-b726-7f1684cd4339 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "c067f811-99a1-4d7a-a634-3a4c1db5830e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:42:09 np0005464891 nova_compute[259907]: 2025-10-01 16:42:09.377 2 DEBUG oslo_concurrency.lockutils [req-c6c13a0f-c058-4150-bacc-49ff8377d051 req-3610e234-d82a-4a54-b726-7f1684cd4339 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "c067f811-99a1-4d7a-a634-3a4c1db5830e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:42:09 np0005464891 nova_compute[259907]: 2025-10-01 16:42:09.378 2 DEBUG oslo_concurrency.lockutils [req-c6c13a0f-c058-4150-bacc-49ff8377d051 req-3610e234-d82a-4a54-b726-7f1684cd4339 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "c067f811-99a1-4d7a-a634-3a4c1db5830e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:42:09 np0005464891 nova_compute[259907]: 2025-10-01 16:42:09.378 2 DEBUG nova.compute.manager [req-c6c13a0f-c058-4150-bacc-49ff8377d051 req-3610e234-d82a-4a54-b726-7f1684cd4339 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Processing event network-vif-plugged-a5d23fa4-4991-45da-a2a2-84f66c06fcee _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 12:42:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:09.627 162546 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  1 12:42:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:09.628 162546 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp0buxz53r/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  1 12:42:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:09.485 267902 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  1 12:42:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:09.493 267902 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  1 12:42:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:09.497 267902 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Oct  1 12:42:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:09.497 267902 INFO oslo.privsep.daemon [-] privsep daemon running as pid 267902#033[00m
Oct  1 12:42:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:09.633 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[12844458-2faa-47ff-8e28-8c39291e858b]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:42:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Oct  1 12:42:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Oct  1 12:42:09 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Oct  1 12:42:09 np0005464891 nova_compute[259907]: 2025-10-01 16:42:09.888 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759336929.8877635, c067f811-99a1-4d7a-a634-3a4c1db5830e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:42:09 np0005464891 nova_compute[259907]: 2025-10-01 16:42:09.889 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] VM Started (Lifecycle Event)#033[00m
Oct  1 12:42:09 np0005464891 nova_compute[259907]: 2025-10-01 16:42:09.893 2 DEBUG nova.compute.manager [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 12:42:09 np0005464891 nova_compute[259907]: 2025-10-01 16:42:09.897 2 DEBUG nova.virt.libvirt.driver [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 12:42:09 np0005464891 nova_compute[259907]: 2025-10-01 16:42:09.914 2 INFO nova.virt.libvirt.driver [-] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Instance spawned successfully.#033[00m
Oct  1 12:42:09 np0005464891 nova_compute[259907]: 2025-10-01 16:42:09.914 2 DEBUG nova.virt.libvirt.driver [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  1 12:42:09 np0005464891 nova_compute[259907]: 2025-10-01 16:42:09.919 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:42:09 np0005464891 nova_compute[259907]: 2025-10-01 16:42:09.924 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:42:09 np0005464891 nova_compute[259907]: 2025-10-01 16:42:09.941 2 DEBUG nova.virt.libvirt.driver [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:42:09 np0005464891 nova_compute[259907]: 2025-10-01 16:42:09.942 2 DEBUG nova.virt.libvirt.driver [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:42:09 np0005464891 nova_compute[259907]: 2025-10-01 16:42:09.943 2 DEBUG nova.virt.libvirt.driver [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:42:09 np0005464891 nova_compute[259907]: 2025-10-01 16:42:09.944 2 DEBUG nova.virt.libvirt.driver [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:42:09 np0005464891 nova_compute[259907]: 2025-10-01 16:42:09.945 2 DEBUG nova.virt.libvirt.driver [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:42:09 np0005464891 nova_compute[259907]: 2025-10-01 16:42:09.946 2 DEBUG nova.virt.libvirt.driver [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:42:09 np0005464891 nova_compute[259907]: 2025-10-01 16:42:09.952 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:42:09 np0005464891 nova_compute[259907]: 2025-10-01 16:42:09.953 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759336929.8926222, c067f811-99a1-4d7a-a634-3a4c1db5830e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:42:09 np0005464891 nova_compute[259907]: 2025-10-01 16:42:09.954 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] VM Paused (Lifecycle Event)#033[00m
Oct  1 12:42:10 np0005464891 nova_compute[259907]: 2025-10-01 16:42:10.060 2 INFO nova.compute.manager [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Took 8.69 seconds to spawn the instance on the hypervisor.#033[00m
Oct  1 12:42:10 np0005464891 nova_compute[259907]: 2025-10-01 16:42:10.062 2 DEBUG nova.compute.manager [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:42:10 np0005464891 nova_compute[259907]: 2025-10-01 16:42:10.096 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:42:10 np0005464891 nova_compute[259907]: 2025-10-01 16:42:10.101 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759336929.8960197, c067f811-99a1-4d7a-a634-3a4c1db5830e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:42:10 np0005464891 nova_compute[259907]: 2025-10-01 16:42:10.102 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] VM Resumed (Lifecycle Event)#033[00m
Oct  1 12:42:10 np0005464891 nova_compute[259907]: 2025-10-01 16:42:10.130 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:42:10 np0005464891 nova_compute[259907]: 2025-10-01 16:42:10.149 2 INFO nova.compute.manager [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Took 9.69 seconds to build instance.#033[00m
Oct  1 12:42:10 np0005464891 nova_compute[259907]: 2025-10-01 16:42:10.153 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:42:10 np0005464891 nova_compute[259907]: 2025-10-01 16:42:10.177 2 DEBUG oslo_concurrency.lockutils [None req-5d6a7d32-2e3d-4a47-806a-8e46054b057b f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lock "c067f811-99a1-4d7a-a634-3a4c1db5830e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:42:10 np0005464891 nova_compute[259907]: 2025-10-01 16:42:10.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:10.503 267902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:42:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:10.503 267902 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:42:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:10.504 267902 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:42:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v978: 321 pgs: 321 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 3.8 MiB/s wr, 148 op/s
Oct  1 12:42:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Oct  1 12:42:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Oct  1 12:42:11 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Oct  1 12:42:11 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:11.417 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[f5c9b04c-503f-4556-b879-0c445a803475]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:11 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:11.418 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap36957630-b1 in ovnmeta-36957630-badc-42b5-ad26-5cdca3a519c1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 12:42:11 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:11.420 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap36957630-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 12:42:11 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:11.420 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[ae1401ad-627d-4c1c-90a7-2fc776ee1767]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:11 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:11.424 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[036900e3-ac29-49e5-a1db-36d5a34c8b3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:11 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:11.466 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[df4cd67a-1e17-4d2f-ae29-71e97e5eeee8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:11 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:11.503 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[94cd7cab-076a-4ffe-b8be-546b0fa1a391]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:11 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:11.505 162546 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpmngbwk9l/privsep.sock']#033[00m
Oct  1 12:42:11 np0005464891 nova_compute[259907]: 2025-10-01 16:42:11.538 2 DEBUG nova.compute.manager [req-0672d0be-e4ba-43b9-810c-d03efe3530d6 req-f4d9b40c-c85b-4863-a6ad-fb18f07c7b9e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Received event network-vif-plugged-a5d23fa4-4991-45da-a2a2-84f66c06fcee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:42:11 np0005464891 nova_compute[259907]: 2025-10-01 16:42:11.539 2 DEBUG oslo_concurrency.lockutils [req-0672d0be-e4ba-43b9-810c-d03efe3530d6 req-f4d9b40c-c85b-4863-a6ad-fb18f07c7b9e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "c067f811-99a1-4d7a-a634-3a4c1db5830e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:42:11 np0005464891 nova_compute[259907]: 2025-10-01 16:42:11.539 2 DEBUG oslo_concurrency.lockutils [req-0672d0be-e4ba-43b9-810c-d03efe3530d6 req-f4d9b40c-c85b-4863-a6ad-fb18f07c7b9e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "c067f811-99a1-4d7a-a634-3a4c1db5830e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:42:11 np0005464891 nova_compute[259907]: 2025-10-01 16:42:11.539 2 DEBUG oslo_concurrency.lockutils [req-0672d0be-e4ba-43b9-810c-d03efe3530d6 req-f4d9b40c-c85b-4863-a6ad-fb18f07c7b9e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "c067f811-99a1-4d7a-a634-3a4c1db5830e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:42:11 np0005464891 nova_compute[259907]: 2025-10-01 16:42:11.539 2 DEBUG nova.compute.manager [req-0672d0be-e4ba-43b9-810c-d03efe3530d6 req-f4d9b40c-c85b-4863-a6ad-fb18f07c7b9e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] No waiting events found dispatching network-vif-plugged-a5d23fa4-4991-45da-a2a2-84f66c06fcee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:42:11 np0005464891 nova_compute[259907]: 2025-10-01 16:42:11.540 2 WARNING nova.compute.manager [req-0672d0be-e4ba-43b9-810c-d03efe3530d6 req-f4d9b40c-c85b-4863-a6ad-fb18f07c7b9e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Received unexpected event network-vif-plugged-a5d23fa4-4991-45da-a2a2-84f66c06fcee for instance with vm_state active and task_state None.#033[00m
Oct  1 12:42:11 np0005464891 nova_compute[259907]: 2025-10-01 16:42:11.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:11 np0005464891 NetworkManager[44940]: <info>  [1759336931.8461] manager: (patch-provnet-ee30e212-e482-494e-8716-bb3c2ef00bd5-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/25)
Oct  1 12:42:11 np0005464891 NetworkManager[44940]: <info>  [1759336931.8465] device (patch-provnet-ee30e212-e482-494e-8716-bb3c2ef00bd5-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 12:42:11 np0005464891 NetworkManager[44940]: <info>  [1759336931.8472] manager: (patch-br-int-to-provnet-ee30e212-e482-494e-8716-bb3c2ef00bd5): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Oct  1 12:42:11 np0005464891 NetworkManager[44940]: <info>  [1759336931.8473] device (patch-br-int-to-provnet-ee30e212-e482-494e-8716-bb3c2ef00bd5)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 12:42:11 np0005464891 NetworkManager[44940]: <info>  [1759336931.8479] manager: (patch-provnet-ee30e212-e482-494e-8716-bb3c2ef00bd5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Oct  1 12:42:11 np0005464891 NetworkManager[44940]: <info>  [1759336931.8482] manager: (patch-br-int-to-provnet-ee30e212-e482-494e-8716-bb3c2ef00bd5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Oct  1 12:42:11 np0005464891 NetworkManager[44940]: <info>  [1759336931.8485] device (patch-provnet-ee30e212-e482-494e-8716-bb3c2ef00bd5-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  1 12:42:11 np0005464891 NetworkManager[44940]: <info>  [1759336931.8486] device (patch-br-int-to-provnet-ee30e212-e482-494e-8716-bb3c2ef00bd5)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  1 12:42:11 np0005464891 nova_compute[259907]: 2025-10-01 16:42:11.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:11 np0005464891 nova_compute[259907]: 2025-10-01 16:42:11.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:42:12
Oct  1 12:42:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:42:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:42:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['volumes', 'vms', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'images', 'backups']
Oct  1 12:42:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:42:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:42:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:42:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:42:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:42:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:42:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:42:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:12.219 162546 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  1 12:42:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:12.220 162546 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpmngbwk9l/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  1 12:42:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:12.115 267917 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  1 12:42:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:12.118 267917 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  1 12:42:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:12.120 267917 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct  1 12:42:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:12.120 267917 INFO oslo.privsep.daemon [-] privsep daemon running as pid 267917#033[00m
Oct  1 12:42:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:12.222 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[dc92b399-995e-4738-a28b-d43ddc387582]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:42:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:42:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:42:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:42:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:42:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:42:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:42:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:42:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:42:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:42:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:12.439 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:42:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:12.440 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:42:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:12.440 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:42:12 np0005464891 nova_compute[259907]: 2025-10-01 16:42:12.570 2 DEBUG nova.compute.manager [req-11ada981-6755-41bb-b95f-02372c080ff9 req-09f5c73c-d74f-47a1-a275-ea17c36df94a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Received event network-changed-a5d23fa4-4991-45da-a2a2-84f66c06fcee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:42:12 np0005464891 nova_compute[259907]: 2025-10-01 16:42:12.571 2 DEBUG nova.compute.manager [req-11ada981-6755-41bb-b95f-02372c080ff9 req-09f5c73c-d74f-47a1-a275-ea17c36df94a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Refreshing instance network info cache due to event network-changed-a5d23fa4-4991-45da-a2a2-84f66c06fcee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:42:12 np0005464891 nova_compute[259907]: 2025-10-01 16:42:12.571 2 DEBUG oslo_concurrency.lockutils [req-11ada981-6755-41bb-b95f-02372c080ff9 req-09f5c73c-d74f-47a1-a275-ea17c36df94a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-c067f811-99a1-4d7a-a634-3a4c1db5830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:42:12 np0005464891 nova_compute[259907]: 2025-10-01 16:42:12.571 2 DEBUG oslo_concurrency.lockutils [req-11ada981-6755-41bb-b95f-02372c080ff9 req-09f5c73c-d74f-47a1-a275-ea17c36df94a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-c067f811-99a1-4d7a-a634-3a4c1db5830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:42:12 np0005464891 nova_compute[259907]: 2025-10-01 16:42:12.571 2 DEBUG nova.network.neutron [req-11ada981-6755-41bb-b95f-02372c080ff9 req-09f5c73c-d74f-47a1-a275-ea17c36df94a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Refreshing network info cache for port a5d23fa4-4991-45da-a2a2-84f66c06fcee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:42:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v980: 321 pgs: 321 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 3.8 MiB/s wr, 149 op/s
Oct  1 12:42:12 np0005464891 nova_compute[259907]: 2025-10-01 16:42:12.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:12.795 267917 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:42:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:12.795 267917 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:42:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:12.795 267917 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:42:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Oct  1 12:42:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Oct  1 12:42:13 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:13.383 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[ff41a349-0017-4b87-aa91-d71cd1e29f1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:13.391 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[99599878-3a41-4cb5-9673-e1d2c6a3752f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:13 np0005464891 NetworkManager[44940]: <info>  [1759336933.3928] manager: (tap36957630-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/29)
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:13.432 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[7b3b59ff-7214-43a2-bad6-87bb84fd6212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:13.436 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[45702911-86ac-4f1f-891a-17ae3482bcd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:13 np0005464891 systemd-udevd[267927]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:42:13 np0005464891 NetworkManager[44940]: <info>  [1759336933.4655] device (tap36957630-b0): carrier: link connected
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:13.474 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[80305bfc-5ae7-4d57-bbbd-3316528dfebc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:13.502 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[318b52a0-655f-464d-be2e-1687ded19efe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap36957630-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:d4:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 388203, 'reachable_time': 42519, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267933, 'error': None, 'target': 'ovnmeta-36957630-badc-42b5-ad26-5cdca3a519c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:13.525 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb95632-2e1f-4ef5-9ed0-fa5dd383dd6e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:d4d9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 388203, 'tstamp': 388203}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267947, 'error': None, 'target': 'ovnmeta-36957630-badc-42b5-ad26-5cdca3a519c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:13.548 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[9fbd05f9-7d80-4bb3-ab4f-50d5ebabe52e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap36957630-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:d4:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 388203, 'reachable_time': 42519, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267948, 'error': None, 'target': 'ovnmeta-36957630-badc-42b5-ad26-5cdca3a519c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:13.588 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[717e38cb-1e95-48e4-89d4-3877e59a2d79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:13.671 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[87b82328-4424-4e46-a067-68d2fc2f3aff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:13.673 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36957630-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:13.674 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:13.675 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap36957630-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:42:13 np0005464891 nova_compute[259907]: 2025-10-01 16:42:13.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:13 np0005464891 kernel: tap36957630-b0: entered promiscuous mode
Oct  1 12:42:13 np0005464891 NetworkManager[44940]: <info>  [1759336933.7263] manager: (tap36957630-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:13.727 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap36957630-b0, col_values=(('external_ids', {'iface-id': '2dd4ed8e-3d34-4f5a-a535-56d5e109ff46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:42:13 np0005464891 nova_compute[259907]: 2025-10-01 16:42:13.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:13 np0005464891 nova_compute[259907]: 2025-10-01 16:42:13.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:13 np0005464891 ovn_controller[152409]: 2025-10-01T16:42:13Z|00031|binding|INFO|Releasing lport 2dd4ed8e-3d34-4f5a-a535-56d5e109ff46 from this chassis (sb_readonly=0)
Oct  1 12:42:13 np0005464891 nova_compute[259907]: 2025-10-01 16:42:13.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:13.741 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/36957630-badc-42b5-ad26-5cdca3a519c1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/36957630-badc-42b5-ad26-5cdca3a519c1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:13.743 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[317aba69-2f8f-49e3-9cb5-4bf06776f25f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:13.748 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-36957630-badc-42b5-ad26-5cdca3a519c1
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/36957630-badc-42b5-ad26-5cdca3a519c1.pid.haproxy
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID 36957630-badc-42b5-ad26-5cdca3a519c1
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 12:42:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:13.750 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-36957630-badc-42b5-ad26-5cdca3a519c1', 'env', 'PROCESS_TAG=haproxy-36957630-badc-42b5-ad26-5cdca3a519c1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/36957630-badc-42b5-ad26-5cdca3a519c1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 12:42:13 np0005464891 nova_compute[259907]: 2025-10-01 16:42:13.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:14 np0005464891 podman[267981]: 2025-10-01 16:42:14.247411786 +0000 UTC m=+0.072464056 container create 59154d3e33dc1181a583517ec0139c2c6e3de620cc200036660e89b25aade41e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-36957630-badc-42b5-ad26-5cdca3a519c1, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:42:14 np0005464891 systemd[1]: Started libpod-conmon-59154d3e33dc1181a583517ec0139c2c6e3de620cc200036660e89b25aade41e.scope.
Oct  1 12:42:14 np0005464891 podman[267981]: 2025-10-01 16:42:14.218742113 +0000 UTC m=+0.043794433 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 12:42:14 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:42:14 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab1561bd79171943b6245da13b1089a6a6db6a348d321ed0f133cbe996b0b071/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 12:42:14 np0005464891 podman[267981]: 2025-10-01 16:42:14.341354355 +0000 UTC m=+0.166406625 container init 59154d3e33dc1181a583517ec0139c2c6e3de620cc200036660e89b25aade41e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-36957630-badc-42b5-ad26-5cdca3a519c1, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:42:14 np0005464891 nova_compute[259907]: 2025-10-01 16:42:14.347 2 DEBUG nova.network.neutron [req-11ada981-6755-41bb-b95f-02372c080ff9 req-09f5c73c-d74f-47a1-a275-ea17c36df94a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Updated VIF entry in instance network info cache for port a5d23fa4-4991-45da-a2a2-84f66c06fcee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:42:14 np0005464891 nova_compute[259907]: 2025-10-01 16:42:14.348 2 DEBUG nova.network.neutron [req-11ada981-6755-41bb-b95f-02372c080ff9 req-09f5c73c-d74f-47a1-a275-ea17c36df94a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Updating instance_info_cache with network_info: [{"id": "a5d23fa4-4991-45da-a2a2-84f66c06fcee", "address": "fa:16:3e:9d:7a:be", "network": {"id": "36957630-badc-42b5-ad26-5cdca3a519c1", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-735724152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d89473c2be684cd0bea1fd04915d5d1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5d23fa4-49", "ovs_interfaceid": "a5d23fa4-4991-45da-a2a2-84f66c06fcee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:42:14 np0005464891 podman[267981]: 2025-10-01 16:42:14.352266576 +0000 UTC m=+0.177318846 container start 59154d3e33dc1181a583517ec0139c2c6e3de620cc200036660e89b25aade41e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-36957630-badc-42b5-ad26-5cdca3a519c1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  1 12:42:14 np0005464891 nova_compute[259907]: 2025-10-01 16:42:14.373 2 DEBUG oslo_concurrency.lockutils [req-11ada981-6755-41bb-b95f-02372c080ff9 req-09f5c73c-d74f-47a1-a275-ea17c36df94a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-c067f811-99a1-4d7a-a634-3a4c1db5830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:42:14 np0005464891 neutron-haproxy-ovnmeta-36957630-badc-42b5-ad26-5cdca3a519c1[267996]: [NOTICE]   (268001) : New worker (268003) forked
Oct  1 12:42:14 np0005464891 neutron-haproxy-ovnmeta-36957630-badc-42b5-ad26-5cdca3a519c1[267996]: [NOTICE]   (268001) : Loading success.
Oct  1 12:42:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v982: 321 pgs: 321 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 34 KiB/s wr, 244 op/s
Oct  1 12:42:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:42:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Oct  1 12:42:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Oct  1 12:42:15 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Oct  1 12:42:15 np0005464891 nova_compute[259907]: 2025-10-01 16:42:15.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:15 np0005464891 nova_compute[259907]: 2025-10-01 16:42:15.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:42:15 np0005464891 nova_compute[259907]: 2025-10-01 16:42:15.807 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 12:42:15 np0005464891 nova_compute[259907]: 2025-10-01 16:42:15.807 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 12:42:16 np0005464891 nova_compute[259907]: 2025-10-01 16:42:16.143 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "refresh_cache-c067f811-99a1-4d7a-a634-3a4c1db5830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:42:16 np0005464891 nova_compute[259907]: 2025-10-01 16:42:16.143 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquired lock "refresh_cache-c067f811-99a1-4d7a-a634-3a4c1db5830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:42:16 np0005464891 nova_compute[259907]: 2025-10-01 16:42:16.143 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  1 12:42:16 np0005464891 nova_compute[259907]: 2025-10-01 16:42:16.144 2 DEBUG nova.objects.instance [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c067f811-99a1-4d7a-a634-3a4c1db5830e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:42:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v984: 321 pgs: 321 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 32 KiB/s wr, 228 op/s
Oct  1 12:42:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Oct  1 12:42:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Oct  1 12:42:17 np0005464891 nova_compute[259907]: 2025-10-01 16:42:17.322 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Updating instance_info_cache with network_info: [{"id": "a5d23fa4-4991-45da-a2a2-84f66c06fcee", "address": "fa:16:3e:9d:7a:be", "network": {"id": "36957630-badc-42b5-ad26-5cdca3a519c1", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-735724152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d89473c2be684cd0bea1fd04915d5d1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5d23fa4-49", "ovs_interfaceid": "a5d23fa4-4991-45da-a2a2-84f66c06fcee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:42:17 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Oct  1 12:42:17 np0005464891 nova_compute[259907]: 2025-10-01 16:42:17.377 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Releasing lock "refresh_cache-c067f811-99a1-4d7a-a634-3a4c1db5830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:42:17 np0005464891 nova_compute[259907]: 2025-10-01 16:42:17.378 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  1 12:42:17 np0005464891 nova_compute[259907]: 2025-10-01 16:42:17.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:17 np0005464891 nova_compute[259907]: 2025-10-01 16:42:17.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:42:17 np0005464891 nova_compute[259907]: 2025-10-01 16:42:17.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:42:17 np0005464891 nova_compute[259907]: 2025-10-01 16:42:17.805 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 12:42:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v986: 321 pgs: 321 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 34 KiB/s wr, 267 op/s
Oct  1 12:42:18 np0005464891 nova_compute[259907]: 2025-10-01 16:42:18.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:42:18 np0005464891 nova_compute[259907]: 2025-10-01 16:42:18.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:42:18 np0005464891 nova_compute[259907]: 2025-10-01 16:42:18.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:42:18 np0005464891 nova_compute[259907]: 2025-10-01 16:42:18.837 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:42:18 np0005464891 nova_compute[259907]: 2025-10-01 16:42:18.838 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:42:18 np0005464891 nova_compute[259907]: 2025-10-01 16:42:18.838 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:42:18 np0005464891 nova_compute[259907]: 2025-10-01 16:42:18.839 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 12:42:18 np0005464891 nova_compute[259907]: 2025-10-01 16:42:18.839 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:42:19 np0005464891 podman[268013]: 2025-10-01 16:42:19.002356631 +0000 UTC m=+0.114472208 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  1 12:42:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Oct  1 12:42:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Oct  1 12:42:19 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Oct  1 12:42:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:42:19 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3621358361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:42:19 np0005464891 nova_compute[259907]: 2025-10-01 16:42:19.331 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:42:19 np0005464891 nova_compute[259907]: 2025-10-01 16:42:19.402 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:42:19 np0005464891 nova_compute[259907]: 2025-10-01 16:42:19.403 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:42:19 np0005464891 nova_compute[259907]: 2025-10-01 16:42:19.646 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:42:19 np0005464891 nova_compute[259907]: 2025-10-01 16:42:19.650 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4725MB free_disk=59.96735763549805GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  1 12:42:19 np0005464891 nova_compute[259907]: 2025-10-01 16:42:19.651 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:42:19 np0005464891 nova_compute[259907]: 2025-10-01 16:42:19.652 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:42:19 np0005464891 nova_compute[259907]: 2025-10-01 16:42:19.750 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance c067f811-99a1-4d7a-a634-3a4c1db5830e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  1 12:42:19 np0005464891 nova_compute[259907]: 2025-10-01 16:42:19.750 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  1 12:42:19 np0005464891 nova_compute[259907]: 2025-10-01 16:42:19.751 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  1 12:42:19 np0005464891 nova_compute[259907]: 2025-10-01 16:42:19.800 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 12:42:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:42:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Oct  1 12:42:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Oct  1 12:42:19 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:42:20 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 27ecd5a4-54c4-4bb2-90d4-ab25aa1a9eac does not exist
Oct  1 12:42:20 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev c91ba3ab-432b-403b-9ca2-113e0805c34e does not exist
Oct  1 12:42:20 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev f071fc91-cdd1-40d6-b021-c7208131ef07 does not exist
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1690009387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:42:20 np0005464891 nova_compute[259907]: 2025-10-01 16:42:20.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:42:20 np0005464891 nova_compute[259907]: 2025-10-01 16:42:20.277 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 12:42:20 np0005464891 nova_compute[259907]: 2025-10-01 16:42:20.283 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Updating inventory in ProviderTree for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  1 12:42:20 np0005464891 nova_compute[259907]: 2025-10-01 16:42:20.324 2 ERROR nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [req-28c48c81-abe5-49b8-9d83-6d981661c712] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-28c48c81-abe5-49b8-9d83-6d981661c712"}]}
Oct  1 12:42:20 np0005464891 nova_compute[259907]: 2025-10-01 16:42:20.339 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Refreshing inventories for resource provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct  1 12:42:20 np0005464891 nova_compute[259907]: 2025-10-01 16:42:20.359 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Updating ProviderTree inventory for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct  1 12:42:20 np0005464891 nova_compute[259907]: 2025-10-01 16:42:20.360 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Updating inventory in ProviderTree for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  1 12:42:20 np0005464891 nova_compute[259907]: 2025-10-01 16:42:20.399 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Refreshing aggregate associations for resource provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct  1 12:42:20 np0005464891 nova_compute[259907]: 2025-10-01 16:42:20.424 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Refreshing trait associations for resource provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8, traits: HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSSE3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_AVX2,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct  1 12:42:20 np0005464891 nova_compute[259907]: 2025-10-01 16:42:20.461 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 12:42:20 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  1 12:42:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v989: 321 pgs: 321 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.0 KiB/s wr, 41 op/s
Oct  1 12:42:20 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  1 12:42:20 np0005464891 podman[268377]: 2025-10-01 16:42:20.761500737 +0000 UTC m=+0.058882620 container create 206ecd0a92e727f119acc92b473682cd6f0ed4a67f66b003d97edc7a4f2af038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_pare, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/746362364' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/746362364' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:42:20 np0005464891 systemd[1]: Started libpod-conmon-206ecd0a92e727f119acc92b473682cd6f0ed4a67f66b003d97edc7a4f2af038.scope.
Oct  1 12:42:20 np0005464891 podman[268377]: 2025-10-01 16:42:20.732043193 +0000 UTC m=+0.029425086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:42:20 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:42:20 np0005464891 podman[268377]: 2025-10-01 16:42:20.860481785 +0000 UTC m=+0.157863678 container init 206ecd0a92e727f119acc92b473682cd6f0ed4a67f66b003d97edc7a4f2af038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_pare, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 12:42:20 np0005464891 podman[268377]: 2025-10-01 16:42:20.866870153 +0000 UTC m=+0.164252036 container start 206ecd0a92e727f119acc92b473682cd6f0ed4a67f66b003d97edc7a4f2af038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_pare, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 12:42:20 np0005464891 podman[268377]: 2025-10-01 16:42:20.870613776 +0000 UTC m=+0.167995659 container attach 206ecd0a92e727f119acc92b473682cd6f0ed4a67f66b003d97edc7a4f2af038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_pare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:42:20 np0005464891 quizzical_pare[268393]: 167 167
Oct  1 12:42:20 np0005464891 systemd[1]: libpod-206ecd0a92e727f119acc92b473682cd6f0ed4a67f66b003d97edc7a4f2af038.scope: Deactivated successfully.
Oct  1 12:42:20 np0005464891 conmon[268393]: conmon 206ecd0a92e727f119ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-206ecd0a92e727f119acc92b473682cd6f0ed4a67f66b003d97edc7a4f2af038.scope/container/memory.events
Oct  1 12:42:20 np0005464891 podman[268377]: 2025-10-01 16:42:20.87508894 +0000 UTC m=+0.172470813 container died 206ecd0a92e727f119acc92b473682cd6f0ed4a67f66b003d97edc7a4f2af038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_pare, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 12:42:20 np0005464891 systemd[1]: var-lib-containers-storage-overlay-c646d6381c406873c78ba170a071dd2d3a63d3a29cbbc35d7cae1729f4df749e-merged.mount: Deactivated successfully.
Oct  1 12:42:20 np0005464891 podman[268377]: 2025-10-01 16:42:20.92028719 +0000 UTC m=+0.217669063 container remove 206ecd0a92e727f119acc92b473682cd6f0ed4a67f66b003d97edc7a4f2af038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_pare, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2984530956' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2984530956' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:42:20 np0005464891 systemd[1]: libpod-conmon-206ecd0a92e727f119acc92b473682cd6f0ed4a67f66b003d97edc7a4f2af038.scope: Deactivated successfully.
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:42:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3559656430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:42:20 np0005464891 nova_compute[259907]: 2025-10-01 16:42:20.987 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 12:42:20 np0005464891 nova_compute[259907]: 2025-10-01 16:42:20.994 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Updating inventory in ProviderTree for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  1 12:42:21 np0005464891 nova_compute[259907]: 2025-10-01 16:42:21.062 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Updated inventory for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct  1 12:42:21 np0005464891 nova_compute[259907]: 2025-10-01 16:42:21.063 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Updating resource provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct  1 12:42:21 np0005464891 nova_compute[259907]: 2025-10-01 16:42:21.063 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Updating inventory in ProviderTree for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  1 12:42:21 np0005464891 podman[268418]: 2025-10-01 16:42:21.081169071 +0000 UTC m=+0.040450750 container create 7685b544e75c7292f0f42a23e88f1810c1d82761bc5d4a9c382fee6edddc9360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:42:21 np0005464891 nova_compute[259907]: 2025-10-01 16:42:21.090 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  1 12:42:21 np0005464891 nova_compute[259907]: 2025-10-01 16:42:21.090 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.438s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:42:21 np0005464891 systemd[1]: Started libpod-conmon-7685b544e75c7292f0f42a23e88f1810c1d82761bc5d4a9c382fee6edddc9360.scope.
Oct  1 12:42:21 np0005464891 ovn_controller[152409]: 2025-10-01T16:42:21Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9d:7a:be 10.100.0.8
Oct  1 12:42:21 np0005464891 ovn_controller[152409]: 2025-10-01T16:42:21Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:7a:be 10.100.0.8
Oct  1 12:42:21 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:42:21 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b8af901a30cfe479610ba5b7f5cb1c89f774a6f2781294979bcb3744f7a9c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:42:21 np0005464891 podman[268418]: 2025-10-01 16:42:21.061791395 +0000 UTC m=+0.021073084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:42:21 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b8af901a30cfe479610ba5b7f5cb1c89f774a6f2781294979bcb3744f7a9c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:42:21 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b8af901a30cfe479610ba5b7f5cb1c89f774a6f2781294979bcb3744f7a9c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:42:21 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b8af901a30cfe479610ba5b7f5cb1c89f774a6f2781294979bcb3744f7a9c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:42:21 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b8af901a30cfe479610ba5b7f5cb1c89f774a6f2781294979bcb3744f7a9c9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:42:21 np0005464891 podman[268418]: 2025-10-01 16:42:21.27488523 +0000 UTC m=+0.234166979 container init 7685b544e75c7292f0f42a23e88f1810c1d82761bc5d4a9c382fee6edddc9360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mclaren, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:42:21 np0005464891 podman[268418]: 2025-10-01 16:42:21.28678639 +0000 UTC m=+0.246068099 container start 7685b544e75c7292f0f42a23e88f1810c1d82761bc5d4a9c382fee6edddc9360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mclaren, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:42:21 np0005464891 podman[268418]: 2025-10-01 16:42:21.342211042 +0000 UTC m=+0.301492721 container attach 7685b544e75c7292f0f42a23e88f1810c1d82761bc5d4a9c382fee6edddc9360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 12:42:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:42:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4028742708' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:42:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:42:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4028742708' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:42:21 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  1 12:42:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:42:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:42:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:42:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:42:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003487950323956502 of space, bias 1.0, pg target 0.10463850971869505 quantized to 32 (current 32)
Oct  1 12:42:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:42:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 4.45134954743765e-06 of space, bias 1.0, pg target 0.0013354048642312951 quantized to 32 (current 32)
Oct  1 12:42:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:42:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:42:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:42:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  1 12:42:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:42:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:42:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:42:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:42:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:42:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:42:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:42:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:42:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:42:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:42:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:42:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:42:22 np0005464891 nova_compute[259907]: 2025-10-01 16:42:22.091 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:42:22 np0005464891 nova_compute[259907]: 2025-10-01 16:42:22.092 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:42:22 np0005464891 competent_mclaren[268435]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:42:22 np0005464891 competent_mclaren[268435]: --> relative data size: 1.0
Oct  1 12:42:22 np0005464891 competent_mclaren[268435]: --> All data devices are unavailable
Oct  1 12:42:22 np0005464891 systemd[1]: libpod-7685b544e75c7292f0f42a23e88f1810c1d82761bc5d4a9c382fee6edddc9360.scope: Deactivated successfully.
Oct  1 12:42:22 np0005464891 systemd[1]: libpod-7685b544e75c7292f0f42a23e88f1810c1d82761bc5d4a9c382fee6edddc9360.scope: Consumed 1.111s CPU time.
Oct  1 12:42:22 np0005464891 podman[268418]: 2025-10-01 16:42:22.46771908 +0000 UTC m=+1.427000839 container died 7685b544e75c7292f0f42a23e88f1810c1d82761bc5d4a9c382fee6edddc9360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 12:42:22 np0005464891 systemd[1]: var-lib-containers-storage-overlay-d5b8af901a30cfe479610ba5b7f5cb1c89f774a6f2781294979bcb3744f7a9c9-merged.mount: Deactivated successfully.
Oct  1 12:42:22 np0005464891 podman[268418]: 2025-10-01 16:42:22.581433796 +0000 UTC m=+1.540715495 container remove 7685b544e75c7292f0f42a23e88f1810c1d82761bc5d4a9c382fee6edddc9360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 12:42:22 np0005464891 systemd[1]: libpod-conmon-7685b544e75c7292f0f42a23e88f1810c1d82761bc5d4a9c382fee6edddc9360.scope: Deactivated successfully.
Oct  1 12:42:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v990: 321 pgs: 321 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 KiB/s wr, 38 op/s
Oct  1 12:42:22 np0005464891 nova_compute[259907]: 2025-10-01 16:42:22.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:22 np0005464891 nova_compute[259907]: 2025-10-01 16:42:22.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:22 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:22.767 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:42:22 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:22.769 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 12:42:23 np0005464891 podman[268575]: 2025-10-01 16:42:23.175563202 +0000 UTC m=+0.129185244 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct  1 12:42:23 np0005464891 podman[268637]: 2025-10-01 16:42:23.528731933 +0000 UTC m=+0.082751901 container create b715b26e885017cfae66fc73b0e6b26552b1f20eff40c1a0622d947cea0f63bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_panini, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 12:42:23 np0005464891 podman[268637]: 2025-10-01 16:42:23.469421182 +0000 UTC m=+0.023441130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:42:23 np0005464891 systemd[1]: Started libpod-conmon-b715b26e885017cfae66fc73b0e6b26552b1f20eff40c1a0622d947cea0f63bc.scope.
Oct  1 12:42:23 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:42:23 np0005464891 podman[268637]: 2025-10-01 16:42:23.654555714 +0000 UTC m=+0.208575682 container init b715b26e885017cfae66fc73b0e6b26552b1f20eff40c1a0622d947cea0f63bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_panini, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:42:23 np0005464891 podman[268637]: 2025-10-01 16:42:23.667023988 +0000 UTC m=+0.221043926 container start b715b26e885017cfae66fc73b0e6b26552b1f20eff40c1a0622d947cea0f63bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_panini, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:42:23 np0005464891 optimistic_panini[268653]: 167 167
Oct  1 12:42:23 np0005464891 systemd[1]: libpod-b715b26e885017cfae66fc73b0e6b26552b1f20eff40c1a0622d947cea0f63bc.scope: Deactivated successfully.
Oct  1 12:42:23 np0005464891 conmon[268653]: conmon b715b26e885017cfae66 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b715b26e885017cfae66fc73b0e6b26552b1f20eff40c1a0622d947cea0f63bc.scope/container/memory.events
Oct  1 12:42:23 np0005464891 podman[268637]: 2025-10-01 16:42:23.691985059 +0000 UTC m=+0.246005077 container attach b715b26e885017cfae66fc73b0e6b26552b1f20eff40c1a0622d947cea0f63bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_panini, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:42:23 np0005464891 podman[268637]: 2025-10-01 16:42:23.693345867 +0000 UTC m=+0.247365835 container died b715b26e885017cfae66fc73b0e6b26552b1f20eff40c1a0622d947cea0f63bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  1 12:42:23 np0005464891 nova_compute[259907]: 2025-10-01 16:42:23.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:42:23 np0005464891 systemd[1]: var-lib-containers-storage-overlay-cc367c75d6ff5e2bac776d40f33e94467615a60bec32cd457bdf46663828fe45-merged.mount: Deactivated successfully.
Oct  1 12:42:24 np0005464891 podman[268637]: 2025-10-01 16:42:24.027346387 +0000 UTC m=+0.581366345 container remove b715b26e885017cfae66fc73b0e6b26552b1f20eff40c1a0622d947cea0f63bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  1 12:42:24 np0005464891 systemd[1]: libpod-conmon-b715b26e885017cfae66fc73b0e6b26552b1f20eff40c1a0622d947cea0f63bc.scope: Deactivated successfully.
Oct  1 12:42:24 np0005464891 podman[268677]: 2025-10-01 16:42:24.293975273 +0000 UTC m=+0.074419239 container create 5a388520a3f45671a6c59fea130151a700a84a2c3fa2e2a9ed8200909230f988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_austin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:42:24 np0005464891 systemd[1]: Started libpod-conmon-5a388520a3f45671a6c59fea130151a700a84a2c3fa2e2a9ed8200909230f988.scope.
Oct  1 12:42:24 np0005464891 podman[268677]: 2025-10-01 16:42:24.265852665 +0000 UTC m=+0.046296661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:42:24 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:42:24 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88e57245ff5555ef9ef25d38d13235bbccec5e88dc494e0bb22342b4137dc3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:42:24 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88e57245ff5555ef9ef25d38d13235bbccec5e88dc494e0bb22342b4137dc3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:42:24 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88e57245ff5555ef9ef25d38d13235bbccec5e88dc494e0bb22342b4137dc3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:42:24 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88e57245ff5555ef9ef25d38d13235bbccec5e88dc494e0bb22342b4137dc3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:42:24 np0005464891 podman[268677]: 2025-10-01 16:42:24.412769049 +0000 UTC m=+0.193212995 container init 5a388520a3f45671a6c59fea130151a700a84a2c3fa2e2a9ed8200909230f988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:42:24 np0005464891 podman[268677]: 2025-10-01 16:42:24.42870752 +0000 UTC m=+0.209151486 container start 5a388520a3f45671a6c59fea130151a700a84a2c3fa2e2a9ed8200909230f988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 12:42:24 np0005464891 podman[268677]: 2025-10-01 16:42:24.433174904 +0000 UTC m=+0.213618910 container attach 5a388520a3f45671a6c59fea130151a700a84a2c3fa2e2a9ed8200909230f988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_austin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  1 12:42:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v991: 321 pgs: 321 active+clean; 121 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 499 KiB/s rd, 3.5 MiB/s wr, 194 op/s
Oct  1 12:42:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:42:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Oct  1 12:42:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Oct  1 12:42:24 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Oct  1 12:42:25 np0005464891 condescending_austin[268693]: {
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:    "0": [
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:        {
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "devices": [
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "/dev/loop3"
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            ],
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "lv_name": "ceph_lv0",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "lv_size": "21470642176",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "name": "ceph_lv0",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "tags": {
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.cluster_name": "ceph",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.crush_device_class": "",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.encrypted": "0",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.osd_id": "0",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.type": "block",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.vdo": "0"
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            },
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "type": "block",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "vg_name": "ceph_vg0"
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:        }
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:    ],
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:    "1": [
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:        {
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "devices": [
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "/dev/loop4"
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            ],
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "lv_name": "ceph_lv1",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "lv_size": "21470642176",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "name": "ceph_lv1",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "tags": {
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.cluster_name": "ceph",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.crush_device_class": "",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.encrypted": "0",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.osd_id": "1",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.type": "block",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.vdo": "0"
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            },
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "type": "block",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "vg_name": "ceph_vg1"
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:        }
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:    ],
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:    "2": [
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:        {
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "devices": [
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "/dev/loop5"
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            ],
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "lv_name": "ceph_lv2",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "lv_size": "21470642176",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "name": "ceph_lv2",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "tags": {
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.cluster_name": "ceph",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.crush_device_class": "",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.encrypted": "0",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.osd_id": "2",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.type": "block",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:                "ceph.vdo": "0"
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            },
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "type": "block",
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:            "vg_name": "ceph_vg2"
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:        }
Oct  1 12:42:25 np0005464891 condescending_austin[268693]:    ]
Oct  1 12:42:25 np0005464891 condescending_austin[268693]: }
Oct  1 12:42:25 np0005464891 systemd[1]: libpod-5a388520a3f45671a6c59fea130151a700a84a2c3fa2e2a9ed8200909230f988.scope: Deactivated successfully.
Oct  1 12:42:25 np0005464891 podman[268677]: 2025-10-01 16:42:25.262575259 +0000 UTC m=+1.043019245 container died 5a388520a3f45671a6c59fea130151a700a84a2c3fa2e2a9ed8200909230f988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_austin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 12:42:25 np0005464891 nova_compute[259907]: 2025-10-01 16:42:25.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:25 np0005464891 systemd[1]: var-lib-containers-storage-overlay-c88e57245ff5555ef9ef25d38d13235bbccec5e88dc494e0bb22342b4137dc3d-merged.mount: Deactivated successfully.
Oct  1 12:42:25 np0005464891 podman[268677]: 2025-10-01 16:42:25.367149772 +0000 UTC m=+1.147593698 container remove 5a388520a3f45671a6c59fea130151a700a84a2c3fa2e2a9ed8200909230f988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_austin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:42:25 np0005464891 systemd[1]: libpod-conmon-5a388520a3f45671a6c59fea130151a700a84a2c3fa2e2a9ed8200909230f988.scope: Deactivated successfully.
Oct  1 12:42:25 np0005464891 podman[268703]: 2025-10-01 16:42:25.398309564 +0000 UTC m=+0.098000122 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  1 12:42:26 np0005464891 podman[268872]: 2025-10-01 16:42:26.128070953 +0000 UTC m=+0.046515218 container create 58002bc7ced28c6d4db82a0891b6f36b1a27509ec7bae37332dfa0f13d27a7e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_tesla, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:42:26 np0005464891 systemd[1]: Started libpod-conmon-58002bc7ced28c6d4db82a0891b6f36b1a27509ec7bae37332dfa0f13d27a7e2.scope.
Oct  1 12:42:26 np0005464891 podman[268872]: 2025-10-01 16:42:26.10991387 +0000 UTC m=+0.028358145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:42:26 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:42:26 np0005464891 podman[268872]: 2025-10-01 16:42:26.235212837 +0000 UTC m=+0.153657112 container init 58002bc7ced28c6d4db82a0891b6f36b1a27509ec7bae37332dfa0f13d27a7e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:42:26 np0005464891 podman[268872]: 2025-10-01 16:42:26.244649908 +0000 UTC m=+0.163094163 container start 58002bc7ced28c6d4db82a0891b6f36b1a27509ec7bae37332dfa0f13d27a7e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_tesla, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 12:42:26 np0005464891 podman[268872]: 2025-10-01 16:42:26.247911059 +0000 UTC m=+0.166355354 container attach 58002bc7ced28c6d4db82a0891b6f36b1a27509ec7bae37332dfa0f13d27a7e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:42:26 np0005464891 zen_tesla[268889]: 167 167
Oct  1 12:42:26 np0005464891 systemd[1]: libpod-58002bc7ced28c6d4db82a0891b6f36b1a27509ec7bae37332dfa0f13d27a7e2.scope: Deactivated successfully.
Oct  1 12:42:26 np0005464891 conmon[268889]: conmon 58002bc7ced28c6d4db8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-58002bc7ced28c6d4db82a0891b6f36b1a27509ec7bae37332dfa0f13d27a7e2.scope/container/memory.events
Oct  1 12:42:26 np0005464891 podman[268872]: 2025-10-01 16:42:26.251373674 +0000 UTC m=+0.169817949 container died 58002bc7ced28c6d4db82a0891b6f36b1a27509ec7bae37332dfa0f13d27a7e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:42:26 np0005464891 systemd[1]: var-lib-containers-storage-overlay-b023de4af78736dea205888377a10326104beafdb858c4d5bddd6f73b9386ee8-merged.mount: Deactivated successfully.
Oct  1 12:42:26 np0005464891 podman[268872]: 2025-10-01 16:42:26.293689875 +0000 UTC m=+0.212134120 container remove 58002bc7ced28c6d4db82a0891b6f36b1a27509ec7bae37332dfa0f13d27a7e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:42:26 np0005464891 systemd[1]: libpod-conmon-58002bc7ced28c6d4db82a0891b6f36b1a27509ec7bae37332dfa0f13d27a7e2.scope: Deactivated successfully.
Oct  1 12:42:26 np0005464891 podman[268914]: 2025-10-01 16:42:26.481333126 +0000 UTC m=+0.045291844 container create d4b073fce60ee9b8c9c625878fc510fb6becea2c37ee241b4085adc29e5384ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hertz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:42:26 np0005464891 systemd[1]: Started libpod-conmon-d4b073fce60ee9b8c9c625878fc510fb6becea2c37ee241b4085adc29e5384ca.scope.
Oct  1 12:42:26 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:42:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e33fbc1cf5ad5c5169dcc1468ac36fd3a3f3ea500855bb411c7f93d50a537b79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:42:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e33fbc1cf5ad5c5169dcc1468ac36fd3a3f3ea500855bb411c7f93d50a537b79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:42:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e33fbc1cf5ad5c5169dcc1468ac36fd3a3f3ea500855bb411c7f93d50a537b79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:42:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e33fbc1cf5ad5c5169dcc1468ac36fd3a3f3ea500855bb411c7f93d50a537b79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:42:26 np0005464891 rsyslogd[1011]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 12:42:26 np0005464891 podman[268914]: 2025-10-01 16:42:26.462350041 +0000 UTC m=+0.026308769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:42:26 np0005464891 podman[268914]: 2025-10-01 16:42:26.567392967 +0000 UTC m=+0.131351695 container init d4b073fce60ee9b8c9c625878fc510fb6becea2c37ee241b4085adc29e5384ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 12:42:26 np0005464891 podman[268914]: 2025-10-01 16:42:26.579649046 +0000 UTC m=+0.143607754 container start d4b073fce60ee9b8c9c625878fc510fb6becea2c37ee241b4085adc29e5384ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hertz, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  1 12:42:26 np0005464891 podman[268914]: 2025-10-01 16:42:26.583205164 +0000 UTC m=+0.147163912 container attach d4b073fce60ee9b8c9c625878fc510fb6becea2c37ee241b4085adc29e5384ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:42:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v993: 321 pgs: 321 active+clean; 121 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 472 KiB/s rd, 3.4 MiB/s wr, 161 op/s
Oct  1 12:42:27 np0005464891 optimistic_hertz[268930]: {
Oct  1 12:42:27 np0005464891 optimistic_hertz[268930]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:42:27 np0005464891 optimistic_hertz[268930]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:42:27 np0005464891 optimistic_hertz[268930]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:42:27 np0005464891 optimistic_hertz[268930]:        "osd_id": 2,
Oct  1 12:42:27 np0005464891 optimistic_hertz[268930]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:42:27 np0005464891 optimistic_hertz[268930]:        "type": "bluestore"
Oct  1 12:42:27 np0005464891 optimistic_hertz[268930]:    },
Oct  1 12:42:27 np0005464891 optimistic_hertz[268930]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:42:27 np0005464891 optimistic_hertz[268930]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:42:27 np0005464891 optimistic_hertz[268930]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:42:27 np0005464891 optimistic_hertz[268930]:        "osd_id": 0,
Oct  1 12:42:27 np0005464891 optimistic_hertz[268930]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:42:27 np0005464891 optimistic_hertz[268930]:        "type": "bluestore"
Oct  1 12:42:27 np0005464891 optimistic_hertz[268930]:    },
Oct  1 12:42:27 np0005464891 optimistic_hertz[268930]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:42:27 np0005464891 optimistic_hertz[268930]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:42:27 np0005464891 optimistic_hertz[268930]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:42:27 np0005464891 optimistic_hertz[268930]:        "osd_id": 1,
Oct  1 12:42:27 np0005464891 optimistic_hertz[268930]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:42:27 np0005464891 optimistic_hertz[268930]:        "type": "bluestore"
Oct  1 12:42:27 np0005464891 optimistic_hertz[268930]:    }
Oct  1 12:42:27 np0005464891 optimistic_hertz[268930]: }
Oct  1 12:42:27 np0005464891 systemd[1]: libpod-d4b073fce60ee9b8c9c625878fc510fb6becea2c37ee241b4085adc29e5384ca.scope: Deactivated successfully.
Oct  1 12:42:27 np0005464891 systemd[1]: libpod-d4b073fce60ee9b8c9c625878fc510fb6becea2c37ee241b4085adc29e5384ca.scope: Consumed 1.008s CPU time.
Oct  1 12:42:27 np0005464891 podman[268965]: 2025-10-01 16:42:27.638765036 +0000 UTC m=+0.038976709 container died d4b073fce60ee9b8c9c625878fc510fb6becea2c37ee241b4085adc29e5384ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hertz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct  1 12:42:27 np0005464891 systemd[1]: var-lib-containers-storage-overlay-e33fbc1cf5ad5c5169dcc1468ac36fd3a3f3ea500855bb411c7f93d50a537b79-merged.mount: Deactivated successfully.
Oct  1 12:42:27 np0005464891 podman[268965]: 2025-10-01 16:42:27.728718724 +0000 UTC m=+0.128930367 container remove d4b073fce60ee9b8c9c625878fc510fb6becea2c37ee241b4085adc29e5384ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 12:42:27 np0005464891 systemd[1]: libpod-conmon-d4b073fce60ee9b8c9c625878fc510fb6becea2c37ee241b4085adc29e5384ca.scope: Deactivated successfully.
Oct  1 12:42:27 np0005464891 nova_compute[259907]: 2025-10-01 16:42:27.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:42:27 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:27.772 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:42:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:42:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:42:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:42:27 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 6f17bab4-a6f8-41a6-b7d3-5bba874656d0 does not exist
Oct  1 12:42:27 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 3f9271ef-78b8-4b18-9aeb-b2057df8af24 does not exist
Oct  1 12:42:28 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:42:28 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:42:28 np0005464891 nova_compute[259907]: 2025-10-01 16:42:28.297 2 DEBUG oslo_concurrency.lockutils [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Acquiring lock "c067f811-99a1-4d7a-a634-3a4c1db5830e" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:42:28 np0005464891 nova_compute[259907]: 2025-10-01 16:42:28.297 2 DEBUG oslo_concurrency.lockutils [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lock "c067f811-99a1-4d7a-a634-3a4c1db5830e" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:42:28 np0005464891 nova_compute[259907]: 2025-10-01 16:42:28.313 2 DEBUG nova.objects.instance [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lazy-loading 'flavor' on Instance uuid c067f811-99a1-4d7a-a634-3a4c1db5830e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:42:28 np0005464891 nova_compute[259907]: 2025-10-01 16:42:28.349 2 INFO nova.virt.libvirt.driver [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Ignoring supplied device name: /dev/vdb#033[00m
Oct  1 12:42:28 np0005464891 nova_compute[259907]: 2025-10-01 16:42:28.365 2 DEBUG oslo_concurrency.lockutils [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lock "c067f811-99a1-4d7a-a634-3a4c1db5830e" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:42:28 np0005464891 nova_compute[259907]: 2025-10-01 16:42:28.618 2 DEBUG oslo_concurrency.lockutils [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Acquiring lock "c067f811-99a1-4d7a-a634-3a4c1db5830e" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:42:28 np0005464891 nova_compute[259907]: 2025-10-01 16:42:28.619 2 DEBUG oslo_concurrency.lockutils [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lock "c067f811-99a1-4d7a-a634-3a4c1db5830e" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:42:28 np0005464891 nova_compute[259907]: 2025-10-01 16:42:28.619 2 INFO nova.compute.manager [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Attaching volume 3d985315-9697-4d87-9a3d-150a21033dd3 to /dev/vdb#033[00m
Oct  1 12:42:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v994: 321 pgs: 321 active+clean; 121 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 398 KiB/s rd, 2.9 MiB/s wr, 139 op/s
Oct  1 12:42:28 np0005464891 nova_compute[259907]: 2025-10-01 16:42:28.794 2 DEBUG os_brick.utils [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 12:42:28 np0005464891 nova_compute[259907]: 2025-10-01 16:42:28.798 2 INFO oslo.privsep.daemon [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpvh8subx1/privsep.sock']#033[00m
Oct  1 12:42:29 np0005464891 nova_compute[259907]: 2025-10-01 16:42:29.547 2 INFO oslo.privsep.daemon [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct  1 12:42:29 np0005464891 nova_compute[259907]: 2025-10-01 16:42:29.401 741 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  1 12:42:29 np0005464891 nova_compute[259907]: 2025-10-01 16:42:29.407 741 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  1 12:42:29 np0005464891 nova_compute[259907]: 2025-10-01 16:42:29.412 741 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Oct  1 12:42:29 np0005464891 nova_compute[259907]: 2025-10-01 16:42:29.412 741 INFO oslo.privsep.daemon [-] privsep daemon running as pid 741#033[00m
Oct  1 12:42:29 np0005464891 nova_compute[259907]: 2025-10-01 16:42:29.553 741 DEBUG oslo.privsep.daemon [-] privsep: reply[21f81037-5270-42b2-a2e3-3a929feee06a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:42:29 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1149262885' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:42:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:42:29 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1149262885' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:42:29 np0005464891 nova_compute[259907]: 2025-10-01 16:42:29.677 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:42:29 np0005464891 nova_compute[259907]: 2025-10-01 16:42:29.700 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:42:29 np0005464891 nova_compute[259907]: 2025-10-01 16:42:29.700 741 DEBUG oslo.privsep.daemon [-] privsep: reply[f95d71cf-eba6-4848-8b85-88ad4f90f58c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:29 np0005464891 nova_compute[259907]: 2025-10-01 16:42:29.703 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:42:29 np0005464891 nova_compute[259907]: 2025-10-01 16:42:29.712 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:42:29 np0005464891 nova_compute[259907]: 2025-10-01 16:42:29.713 741 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd719d3-df27-465e-9c84-d782a7bcd84d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:29 np0005464891 nova_compute[259907]: 2025-10-01 16:42:29.716 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:42:29 np0005464891 nova_compute[259907]: 2025-10-01 16:42:29.733 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:42:29 np0005464891 nova_compute[259907]: 2025-10-01 16:42:29.733 741 DEBUG oslo.privsep.daemon [-] privsep: reply[681d2e5e-59a7-4c0c-a4bc-6d3e4a1ba318]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:29 np0005464891 nova_compute[259907]: 2025-10-01 16:42:29.737 741 DEBUG oslo.privsep.daemon [-] privsep: reply[3da34b66-6dd8-4acd-827d-456584a89f81]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:29 np0005464891 nova_compute[259907]: 2025-10-01 16:42:29.737 2 DEBUG oslo_concurrency.processutils [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:42:29 np0005464891 nova_compute[259907]: 2025-10-01 16:42:29.760 2 DEBUG oslo_concurrency.processutils [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:42:29 np0005464891 nova_compute[259907]: 2025-10-01 16:42:29.765 2 DEBUG os_brick.initiator.connectors.lightos [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 12:42:29 np0005464891 nova_compute[259907]: 2025-10-01 16:42:29.766 2 DEBUG os_brick.initiator.connectors.lightos [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 12:42:29 np0005464891 nova_compute[259907]: 2025-10-01 16:42:29.766 2 DEBUG os_brick.initiator.connectors.lightos [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 12:42:29 np0005464891 nova_compute[259907]: 2025-10-01 16:42:29.767 2 DEBUG os_brick.utils [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] <== get_connector_properties: return (971ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 12:42:29 np0005464891 nova_compute[259907]: 2025-10-01 16:42:29.770 2 DEBUG nova.virt.block_device [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Updating existing volume attachment record: 0f0cd518-4073-4bdc-99bf-d1099f7cfa71 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 12:42:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:42:30 np0005464891 nova_compute[259907]: 2025-10-01 16:42:30.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:42:30 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2886303775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:42:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v995: 321 pgs: 321 active+clean; 121 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 352 KiB/s rd, 2.6 MiB/s wr, 123 op/s
Oct  1 12:42:30 np0005464891 nova_compute[259907]: 2025-10-01 16:42:30.805 2 DEBUG os_brick.encryptors [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Using volume encryption metadata '{'encryption_key_id': '768b2a0a-2a52-4046-977b-800388b52ced', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-3d985315-9697-4d87-9a3d-150a21033dd3', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '3d985315-9697-4d87-9a3d-150a21033dd3', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'c067f811-99a1-4d7a-a634-3a4c1db5830e', 'attached_at': '', 'detached_at': '', 'volume_id': '3d985315-9697-4d87-9a3d-150a21033dd3', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct  1 12:42:30 np0005464891 nova_compute[259907]: 2025-10-01 16:42:30.809 2 DEBUG oslo_concurrency.lockutils [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:42:30 np0005464891 nova_compute[259907]: 2025-10-01 16:42:30.809 2 DEBUG oslo_concurrency.lockutils [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:42:30 np0005464891 nova_compute[259907]: 2025-10-01 16:42:30.811 2 DEBUG oslo_concurrency.lockutils [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:42:30 np0005464891 nova_compute[259907]: 2025-10-01 16:42:30.821 2 DEBUG barbicanclient.client [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Oct  1 12:42:30 np0005464891 nova_compute[259907]: 2025-10-01 16:42:30.848 2 DEBUG barbicanclient.v1.secrets [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/768b2a0a-2a52-4046-977b-800388b52ced get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Oct  1 12:42:30 np0005464891 nova_compute[259907]: 2025-10-01 16:42:30.849 2 INFO barbicanclient.base [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Calculated Secrets uuid ref: secrets/768b2a0a-2a52-4046-977b-800388b52ced#033[00m
Oct  1 12:42:30 np0005464891 nova_compute[259907]: 2025-10-01 16:42:30.885 2 DEBUG barbicanclient.client [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:42:30 np0005464891 nova_compute[259907]: 2025-10-01 16:42:30.886 2 INFO barbicanclient.base [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Calculated Secrets uuid ref: secrets/768b2a0a-2a52-4046-977b-800388b52ced#033[00m
Oct  1 12:42:30 np0005464891 nova_compute[259907]: 2025-10-01 16:42:30.918 2 DEBUG barbicanclient.client [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:42:30 np0005464891 nova_compute[259907]: 2025-10-01 16:42:30.919 2 INFO barbicanclient.base [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Calculated Secrets uuid ref: secrets/768b2a0a-2a52-4046-977b-800388b52ced#033[00m
Oct  1 12:42:30 np0005464891 nova_compute[259907]: 2025-10-01 16:42:30.965 2 DEBUG barbicanclient.client [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:42:30 np0005464891 nova_compute[259907]: 2025-10-01 16:42:30.966 2 INFO barbicanclient.base [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Calculated Secrets uuid ref: secrets/768b2a0a-2a52-4046-977b-800388b52ced#033[00m
Oct  1 12:42:30 np0005464891 nova_compute[259907]: 2025-10-01 16:42:30.993 2 DEBUG barbicanclient.client [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:42:30 np0005464891 nova_compute[259907]: 2025-10-01 16:42:30.993 2 INFO barbicanclient.base [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Calculated Secrets uuid ref: secrets/768b2a0a-2a52-4046-977b-800388b52ced#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.032 2 DEBUG barbicanclient.client [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.034 2 INFO barbicanclient.base [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Calculated Secrets uuid ref: secrets/768b2a0a-2a52-4046-977b-800388b52ced#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.075 2 DEBUG barbicanclient.client [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.075 2 INFO barbicanclient.base [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Calculated Secrets uuid ref: secrets/768b2a0a-2a52-4046-977b-800388b52ced#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.105 2 DEBUG barbicanclient.client [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.106 2 INFO barbicanclient.base [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Calculated Secrets uuid ref: secrets/768b2a0a-2a52-4046-977b-800388b52ced#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.167 2 DEBUG barbicanclient.client [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.167 2 INFO barbicanclient.base [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Calculated Secrets uuid ref: secrets/768b2a0a-2a52-4046-977b-800388b52ced#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.193 2 DEBUG barbicanclient.client [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.194 2 INFO barbicanclient.base [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Calculated Secrets uuid ref: secrets/768b2a0a-2a52-4046-977b-800388b52ced#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.221 2 DEBUG barbicanclient.client [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.221 2 INFO barbicanclient.base [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Calculated Secrets uuid ref: secrets/768b2a0a-2a52-4046-977b-800388b52ced#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.241 2 DEBUG barbicanclient.client [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.241 2 INFO barbicanclient.base [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Calculated Secrets uuid ref: secrets/768b2a0a-2a52-4046-977b-800388b52ced#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.260 2 DEBUG barbicanclient.client [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.261 2 INFO barbicanclient.base [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Calculated Secrets uuid ref: secrets/768b2a0a-2a52-4046-977b-800388b52ced#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.281 2 DEBUG barbicanclient.client [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.282 2 INFO barbicanclient.base [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Calculated Secrets uuid ref: secrets/768b2a0a-2a52-4046-977b-800388b52ced#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.299 2 DEBUG barbicanclient.client [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.299 2 INFO barbicanclient.base [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Calculated Secrets uuid ref: secrets/768b2a0a-2a52-4046-977b-800388b52ced#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.319 2 DEBUG barbicanclient.client [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.319 2 DEBUG nova.virt.libvirt.host [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct  1 12:42:31 np0005464891 nova_compute[259907]:  <usage type="volume">
Oct  1 12:42:31 np0005464891 nova_compute[259907]:    <volume>3d985315-9697-4d87-9a3d-150a21033dd3</volume>
Oct  1 12:42:31 np0005464891 nova_compute[259907]:  </usage>
Oct  1 12:42:31 np0005464891 nova_compute[259907]: </secret>
Oct  1 12:42:31 np0005464891 nova_compute[259907]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.373 2 DEBUG nova.objects.instance [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lazy-loading 'flavor' on Instance uuid c067f811-99a1-4d7a-a634-3a4c1db5830e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.490 2 DEBUG nova.virt.libvirt.driver [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Attempting to attach volume 3d985315-9697-4d87-9a3d-150a21033dd3 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  1 12:42:31 np0005464891 nova_compute[259907]: 2025-10-01 16:42:31.494 2 DEBUG nova.virt.libvirt.guest [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] attach device xml: <disk type="network" device="disk">
Oct  1 12:42:31 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:42:31 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-3d985315-9697-4d87-9a3d-150a21033dd3">
Oct  1 12:42:31 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:42:31 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:42:31 np0005464891 nova_compute[259907]:  <auth username="openstack">
Oct  1 12:42:31 np0005464891 nova_compute[259907]:    <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:42:31 np0005464891 nova_compute[259907]:  </auth>
Oct  1 12:42:31 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:42:31 np0005464891 nova_compute[259907]:  <serial>3d985315-9697-4d87-9a3d-150a21033dd3</serial>
Oct  1 12:42:31 np0005464891 nova_compute[259907]:  <encryption format="luks">
Oct  1 12:42:31 np0005464891 nova_compute[259907]:    <secret type="passphrase" uuid="3cd6045e-3c62-45b2-bcfc-245984582bcc"/>
Oct  1 12:42:31 np0005464891 nova_compute[259907]:  </encryption>
Oct  1 12:42:31 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:42:31 np0005464891 nova_compute[259907]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  1 12:42:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v996: 321 pgs: 321 active+clean; 121 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 352 KiB/s rd, 2.6 MiB/s wr, 123 op/s
Oct  1 12:42:32 np0005464891 nova_compute[259907]: 2025-10-01 16:42:32.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:42:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3129050006' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:42:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:42:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3129050006' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:42:34 np0005464891 nova_compute[259907]: 2025-10-01 16:42:34.067 2 DEBUG nova.virt.libvirt.driver [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:42:34 np0005464891 nova_compute[259907]: 2025-10-01 16:42:34.067 2 DEBUG nova.virt.libvirt.driver [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:42:34 np0005464891 nova_compute[259907]: 2025-10-01 16:42:34.067 2 DEBUG nova.virt.libvirt.driver [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:42:34 np0005464891 nova_compute[259907]: 2025-10-01 16:42:34.067 2 DEBUG nova.virt.libvirt.driver [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] No VIF found with MAC fa:16:3e:9d:7a:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:42:34 np0005464891 nova_compute[259907]: 2025-10-01 16:42:34.381 2 DEBUG oslo_concurrency.lockutils [None req-a75fa24d-28bb-4147-aa5e-fb84b6291546 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lock "c067f811-99a1-4d7a-a634-3a4c1db5830e" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 5.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:42:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v997: 321 pgs: 321 active+clean; 121 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 17 KiB/s wr, 54 op/s
Oct  1 12:42:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:42:35 np0005464891 nova_compute[259907]: 2025-10-01 16:42:35.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:42:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/508346234' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:42:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:42:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/508346234' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:42:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v998: 321 pgs: 321 active+clean; 121 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 15 KiB/s wr, 45 op/s
Oct  1 12:42:36 np0005464891 nova_compute[259907]: 2025-10-01 16:42:36.711 2 DEBUG nova.compute.manager [req-8ebe89c6-fa2a-45b1-856c-ad0d4549a9fb req-9442b65c-1e2e-4c8d-8542-cc77c56d3fc4 4a6d461bdac245229e2e40492503a6e4 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Received event volume-extended-3d985315-9697-4d87-9a3d-150a21033dd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:42:36 np0005464891 nova_compute[259907]: 2025-10-01 16:42:36.730 2 DEBUG nova.compute.manager [req-8ebe89c6-fa2a-45b1-856c-ad0d4549a9fb req-9442b65c-1e2e-4c8d-8542-cc77c56d3fc4 4a6d461bdac245229e2e40492503a6e4 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Handling volume-extended event for volume 3d985315-9697-4d87-9a3d-150a21033dd3 extend_volume /usr/lib/python3.9/site-packages/nova/compute/manager.py:10896#033[00m
Oct  1 12:42:36 np0005464891 nova_compute[259907]: 2025-10-01 16:42:36.750 2 INFO nova.compute.manager [req-8ebe89c6-fa2a-45b1-856c-ad0d4549a9fb req-9442b65c-1e2e-4c8d-8542-cc77c56d3fc4 4a6d461bdac245229e2e40492503a6e4 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Cinder extended volume 3d985315-9697-4d87-9a3d-150a21033dd3; extending it to detect new size#033[00m
Oct  1 12:42:37 np0005464891 nova_compute[259907]: 2025-10-01 16:42:37.267 2 DEBUG os_brick.encryptors [req-8ebe89c6-fa2a-45b1-856c-ad0d4549a9fb req-9442b65c-1e2e-4c8d-8542-cc77c56d3fc4 4a6d461bdac245229e2e40492503a6e4 887664ec11194266978ceeac8bd7b17e - - default default] Using volume encryption metadata '{'encryption_key_id': '768b2a0a-2a52-4046-977b-800388b52ced', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-3d985315-9697-4d87-9a3d-150a21033dd3', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '3d985315-9697-4d87-9a3d-150a21033dd3', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'c067f811-99a1-4d7a-a634-3a4c1db5830e', 'attached_at': '', 'detached_at': '', 'volume_id': '3d985315-9697-4d87-9a3d-150a21033dd3', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct  1 12:42:37 np0005464891 nova_compute[259907]: 2025-10-01 16:42:37.272 2 INFO oslo.privsep.daemon [req-8ebe89c6-fa2a-45b1-856c-ad0d4549a9fb req-9442b65c-1e2e-4c8d-8542-cc77c56d3fc4 4a6d461bdac245229e2e40492503a6e4 887664ec11194266978ceeac8bd7b17e - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpel2jncz9/privsep.sock']#033[00m
Oct  1 12:42:37 np0005464891 nova_compute[259907]: 2025-10-01 16:42:37.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:42:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4085151708' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:42:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:42:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4085151708' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:42:37 np0005464891 nova_compute[259907]: 2025-10-01 16:42:37.994 2 INFO oslo.privsep.daemon [req-8ebe89c6-fa2a-45b1-856c-ad0d4549a9fb req-9442b65c-1e2e-4c8d-8542-cc77c56d3fc4 4a6d461bdac245229e2e40492503a6e4 887664ec11194266978ceeac8bd7b17e - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct  1 12:42:37 np0005464891 nova_compute[259907]: 2025-10-01 16:42:37.855 754 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  1 12:42:37 np0005464891 nova_compute[259907]: 2025-10-01 16:42:37.862 754 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  1 12:42:37 np0005464891 nova_compute[259907]: 2025-10-01 16:42:37.866 754 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct  1 12:42:37 np0005464891 nova_compute[259907]: 2025-10-01 16:42:37.866 754 INFO oslo.privsep.daemon [-] privsep daemon running as pid 754#033[00m
Oct  1 12:42:38 np0005464891 systemd[1]: Created slice Slice /system/systemd-coredump.
Oct  1 12:42:38 np0005464891 systemd[1]: Started Process Core Dump (PID 269088/UID 0).
Oct  1 12:42:38 np0005464891 podman[269089]: 2025-10-01 16:42:38.43092833 +0000 UTC m=+0.075830009 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:42:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v999: 321 pgs: 321 active+clean; 121 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 17 KiB/s wr, 65 op/s
Oct  1 12:42:39 np0005464891 systemd-coredump[269090]: Process 269069 (qemu-img) of user 0 dumped core.

Stack trace of thread 766:
#0  0x00007f0cc3ba603c __pthread_kill_implementation (libc.so.6 + 0x8d03c)
#1  0x00007f0cc3b58b86 raise (libc.so.6 + 0x3fb86)
#2  0x00007f0cc3b42873 abort (libc.so.6 + 0x29873)
#3  0x00005635e99e156f ___interceptor_pthread_create (qemu-img + 0x4e56f)
#4  0x00007f0cc0d7cff4 _ZN6Thread10try_createEm (libceph-common.so.2 + 0x258ff4)
#5  0x00007f0cc0d7f6ae _ZN6Thread6createEPKcm (libceph-common.so.2 + 0x25b6ae)
#6  0x00007f0cc1c8626b _ZNSt8_Rb_treeISt4pairINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESt10type_indexES0_IKS8_N4ceph12immobile_anyILm576EEEESt10_Select1stISD_ENSA_6common11CephContext19associated_objs_cmpESaISD_EE22_M_emplace_hint_uniqueIJRKSt21piecewise_construct_tSt5tupleIJRSt17basic_string_viewIcS4_ERS7_EESP_IJRKSt15in_place_type_tIN6librbd21TaskFinisherSingletonEERPSH_EEEEESt17_Rb_tree_iteratorISD_ESt23_Rb_tree_const_iteratorISD_EDpOT_.constprop.0 (librbd.so.1 + 0x51126b)
#7  0x00007f0cc18b37a6 _ZN6librbd8ImageCtx4initEv (librbd.so.1 + 0x13e7a6)
#8  0x00007f0cc198d2d3 _ZN6librbd5image11OpenRequestINS_8ImageCtxEE12send_refreshEv (librbd.so.1 + 0x2182d3)
#9  0x00007f0cc198df46 _ZN6librbd5image11OpenRequestINS_8ImageCtxEE23handle_v2_get_data_poolEPi (librbd.so.1 + 0x218f46)
#10 0x00007f0cc198e2a7 _ZN6librbd4util6detail20rados_state_callbackINS_5image11OpenRequestINS_8ImageCtxEEEXadL_ZNS6_23handle_v2_get_data_poolEPiEELb1EEEvPvS8_ (librbd.so.1 + 0x2192a7)
#11 0x00007f0cc168c0ac _ZN5boost4asio6detail18completion_handlerINS1_7binder0IN8librados14CB_AioCompleteEEENS0_10io_context19basic_executor_typeISaIvELm0EEEE11do_completeEPvPNS1_19scheduler_operationERKNS_6system10error_codeEm (librados.so.2 + 0xad0ac)
#12 0x00007f0cc168b585 _ZN5boost4asio6detail14strand_service11do_completeEPvPNS1_19scheduler_operationERKNS_6system10error_codeEm (librados.so.2 + 0xac585)
#13 0x00007f0cc1706498 _ZN5boost4asio6detail9scheduler3runERNS_6system10error_codeE.constprop.0.isra.0 (librados.so.2 + 0x127498)
#14 0x00007f0cc16a54e4 _ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJZ17make_named_threadIZN4ceph5async15io_context_pool5startEsEUlvE_JEES_St17basic_string_viewIcSt11char_traitsIcEEOT_DpOT0_EUlSD_SG_E_S7_EEEEE6_M_runEv (librados.so.2 + 0xc64e4)
#15 0x00007f0cc0413ae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)
#16 0x00007f0cc3ba42fa start_thread (libc.so.6 + 0x8b2fa)
#17 0x00007f0cc3c29540 __clone3 (libc.so.6 + 0x110540)

Stack trace of thread 756:
#0  0x00007f0cc3ba138a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
#1  0x00007f0cc3ba38e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
#2  0x00007f0cc040d6c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)
#3  0x00007f0cc18baeb3 _ZN6librbd10ImageStateINS_8ImageCtxEE4openEm (librbd.so.1 + 0x145eb3)
#4  0x00007f0cc188afcb rbd_open (librbd.so.1 + 0x115fcb)
#5  0x00007f0cc1e3589d qemu_rbd_open (block-rbd.so + 0x489d)
#6  0x00005635e99f1e4c bdrv_open_driver.llvm.6332234179151191066 (qemu-img + 0x5ee4c)
#7  0x00005635e99f6b6b bdrv_open_inherit.llvm.6332234179151191066 (qemu-img + 0x63b6b)
#8  0x00005635e9a035ce bdrv_open_child_bs.llvm.6332234179151191066 (qemu-img + 0x705ce)
#9  0x00005635e99f6396 bdrv_open_inherit.llvm.6332234179151191066 (qemu-img + 0x63396)
#10 0x00005635e9a241f5 blk_new_open (qemu-img + 0x911f5)
#11 0x00005635e9adfe16 img_open_file (qemu-img + 0x14ce16)
#12 0x00005635e9adf9e0 img_open (qemu-img + 0x14c9e0)
#13 0x00005635e9adbc1d img_info (qemu-img + 0x148c1d)
#14 0x00005635e9ad5638 main (qemu-img + 0x142638)
#15 0x00007f0cc3b43610 __libc_start_call_main (libc.so.6 + 0x2a610)
#16 0x00007f0cc3b436c0 __libc_start_main@@GLIBC_2.34 (libc.so.6 + 0x2a6c0)
#17 0x00005635e99e1215 _start (qemu-img + 0x4e215)

Stack trace of thread 760:
#0  0x00007f0cc3c28b7e epoll_wait (libc.so.6 + 0x10fb7e)
#1  0x00007f0cc0f64618 _ZN11EpollDriver10event_waitERSt6vectorI14FiredFileEventSaIS1_EEP7timeval (libceph-common.so.2 + 0x440618)
#2  0x00007f0cc0f62702 _ZN11EventCenter14process_eventsEjPNSt6chrono8durationImSt5ratioILl1ELl1000000000EEEE (libceph-common.so.2 + 0x43e702)
#3  0x00007f0cc0f632c6 _ZNSt17_Function_handlerIFvvEZN12NetworkStack10add_threadEP6WorkerEUlvE_E9_M_invokeERKSt9_Any_data (libceph-common.so.2 + 0x43f2c6)
#4  0x00007f0cc0413ae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)
#5  0x00007f0cc3ba42fa start_thread (libc.so.6 + 0x8b2fa)
#6  0x00007f0cc3c29540 __clone3 (libc.so.6 + 0x110540)

Stack trace of thread 761:
#0  0x00007f0cc3c28b7e epoll_wait (libc.so.6 + 0x10fb7e)
#1  0x00007f0cc0f64618 _ZN11EpollDriver10event_waitERSt6vectorI14FiredFileEventSaIS1_EEP7timeval (libceph-common.so.2 + 0x440618)
#2  0x00007f0cc0f62702 _ZN11EventCenter14process_eventsEjPNSt6chrono8durationImSt5ratioILl1ELl1000000000EEEE (libceph-common.so.2 + 0x43e702)
#3  0x00007f0cc0f632c6 _ZNSt17_Function_handlerIFvvEZN12NetworkStack10add_threadEP6WorkerEUlvE_E9_M_invokeERKSt9_Any_data (libceph-common.so.2 + 0x43f2c6)
#4  0x00007f0cc0413ae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)
#5  0x00007f0cc3ba42fa start_thread (libc.so.6 + 0x8b2fa)
#6  0x00007f0cc3c29540 __clone3 (libc.so.6 + 0x110540)

Stack trace of thread 774:
#0  0x00007f0cc3c21b16 __mmap (libc.so.6 + 0x108b16)
#1  0x00007f0cc3bb0514 new_heap (libc.so.6 + 0x97514)
#2  0x00007f0cc3bb108b arena_get2.part.0 (libc.so.6 + 0x9808b)
#3  0x00007f0cc3bb40ab __libc_malloc (libc.so.6 + 0x9b0ab)
#4  0x00007f0cc42afe7e malloc (ld-linux-x86-64.so.2 + 0x12e7e)
#5  0x00007f0cc42b3eec __tls_get_addr (ld-linux-x86-64.so.2 + 0x16eec)
#6  0x00007f0cc0dc05a4 ceph_pthread_setname (libceph-common.so.2 + 0x29c5a4)
#7  0x00007f0cc0d7cf38 _ZN6Thread13entry_wrapperEv (libceph-common.so.2 + 0x258f38)
#8  0x00007f0cc3ba42fa start_thread (libc.so.6 + 0x8b2fa)
#9  0x00007f0cc3c29540 __clone3 (libc.so.6 + 0x110540)

Stack trace of thread 757:
#0  0x00007f0cc3c2196d syscall (libc.so.6 + 0x10896d)
#1  0x00005635e9b60f73 qemu_event_wait (qemu-img + 0x1cdf73)
#2  0x00005635e9b6df87 call_rcu_thread (qemu-img + 0x1daf87)
#3  0x00005635e9b612ba qemu_thread_start.llvm.7701297430486814853 (qemu-img + 0x1ce2ba)
#4  0x00007f0cc3ba42fa start_thread (libc.so.6 + 0x8b2fa)
#5  0x00007f0cc3c29540 __clone3 (libc.so.6 + 0x110540)

Stack trace of thread 772:
#0  0x00007f0cc3ba138a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
#1  0x00007f0cc3ba38e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
#2  0x00007f0cc040d6c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)
#3  0x00007f0cc0d827f8 _ZN15CommonSafeTimerISt5mutexE12timer_threadEv (libceph-common.so.2 + 0x25e7f8)
#4  0x00007f0cc0d82f81 _ZN21CommonSafeTimerThreadISt5mutexE5entryEv (libceph-common.so.2 + 0x25ef81)
#5  0x00007f0cc3ba42fa start_thread (libc.so.6 + 0x8b2fa)
#6  0x00007f0cc3c29540 __clone3 (libc.so.6 + 0x110540)

Stack trace of thread 773:
#0  0x00007f0cc3ba138a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
#1  0x00007f0cc3ba38e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
#2  0x00007f0cc040d6c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)
#3  0x00007f0cc0d827f8 _ZN15CommonSafeTimerISt5mutexE12timer_threadEv (libceph-common.so.2 + 0x25e7f8)
#4  0x00007f0cc0d82f81 _ZN21CommonSafeTimerThreadISt5mutexE5entryEv (libceph-common.so.2 + 0x25ef81)
#5  0x00007f0cc3ba42fa start_thread (libc.so.6 + 0x8b2fa)
#6  0x00007f0cc3c29540 __clone3 (libc.so.6 + 0x110540)

Stack trace of thread 758:
#0  0x00007f0cc3ba138a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
#1  0x00007f0cc3ba38e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
#2  0x00007f0cc040d6c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)
#3  0x00007f0cc0f8f0a2 _ZN4ceph7logging3Log5entryEv (
Oct  1 12:42:39 np0005464891 systemd[1]: systemd-coredump@0-269088-0.service: Deactivated successfully.
Oct  1 12:42:39 np0005464891 systemd[1]: systemd-coredump@0-269088-0.service: Consumed 1.022s CPU time.
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.442 2 ERROR nova.virt.libvirt.driver [req-8ebe89c6-fa2a-45b1-856c-ad0d4549a9fb req-9442b65c-1e2e-4c8d-8542-cc77c56d3fc4 4a6d461bdac245229e2e40492503a6e4 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Unknown error when attempting to find the payload_offset for LUKSv1 encrypted disk rbd:volumes/volume-3d985315-9697-4d87-9a3d-150a21033dd3:id=openstack.: nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-3d985315-9697-4d87-9a3d-150a21033dd3:id=openstack : Unexpected error while running command.
Oct  1 12:42:39 np0005464891 nova_compute[259907]: Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-3d985315-9697-4d87-9a3d-150a21033dd3:id=openstack --force-share --output=json
Oct  1 12:42:39 np0005464891 nova_compute[259907]: Exit code: -6
Oct  1 12:42:39 np0005464891 nova_compute[259907]: Stdout: ''
Oct  1 12:42:39 np0005464891 nova_compute[259907]: Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-20.1.8.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n'
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.442 2 ERROR nova.virt.libvirt.driver [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Traceback (most recent call last):
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.442 2 ERROR nova.virt.libvirt.driver [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2788, in _resize_attached_encrypted_volume
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.442 2 ERROR nova.virt.libvirt.driver [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e]     info = images.privileged_qemu_img_info(path)
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.442 2 ERROR nova.virt.libvirt.driver [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e]   File "/usr/lib/python3.9/site-packages/nova/virt/images.py", line 57, in privileged_qemu_img_info
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.442 2 ERROR nova.virt.libvirt.driver [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e]     info = nova.privsep.qemu.privileged_qemu_img_info(path, format=format)
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.442 2 ERROR nova.virt.libvirt.driver [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e]   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.442 2 ERROR nova.virt.libvirt.driver [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e]     return self.channel.remote_call(name, args, kwargs,
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.442 2 ERROR nova.virt.libvirt.driver [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e]   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.442 2 ERROR nova.virt.libvirt.driver [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e]     raise exc_type(*result[2])
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.442 2 ERROR nova.virt.libvirt.driver [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-3d985315-9697-4d87-9a3d-150a21033dd3:id=openstack : Unexpected error while running command.
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.442 2 ERROR nova.virt.libvirt.driver [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-3d985315-9697-4d87-9a3d-150a21033dd3:id=openstack --force-share --output=json
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.442 2 ERROR nova.virt.libvirt.driver [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Exit code: -6
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.442 2 ERROR nova.virt.libvirt.driver [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Stdout: ''
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.442 2 ERROR nova.virt.libvirt.driver [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-20.1.8.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n'
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.442 2 ERROR nova.virt.libvirt.driver [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] #033[00m
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.454 2 WARNING nova.compute.manager [req-8ebe89c6-fa2a-45b1-856c-ad0d4549a9fb req-9442b65c-1e2e-4c8d-8542-cc77c56d3fc4 4a6d461bdac245229e2e40492503a6e4 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Extend volume failed, volume_id=3d985315-9697-4d87-9a3d-150a21033dd3, reason: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-3d985315-9697-4d87-9a3d-150a21033dd3:id=openstack : Unexpected error while running command.
Oct  1 12:42:39 np0005464891 nova_compute[259907]: Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-3d985315-9697-4d87-9a3d-150a21033dd3:id=openstack --force-share --output=json
Oct  1 12:42:39 np0005464891 nova_compute[259907]: Exit code: -6
Oct  1 12:42:39 np0005464891 nova_compute[259907]: Stdout: ''
Oct  1 12:42:39 np0005464891 nova_compute[259907]: Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-20.1.8.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n': nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-3d985315-9697-4d87-9a3d-150a21033dd3:id=openstack : Unexpected error while running command.#033[00m
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server [req-8ebe89c6-fa2a-45b1-856c-ad0d4549a9fb req-9442b65c-1e2e-4c8d-8542-cc77c56d3fc4 4a6d461bdac245229e2e40492503a6e4 887664ec11194266978ceeac8bd7b17e - - default default] Exception during message handling: nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-3d985315-9697-4d87-9a3d-150a21033dd3:id=openstack : Unexpected error while running command.
Oct  1 12:42:39 np0005464891 nova_compute[259907]: Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-3d985315-9697-4d87-9a3d-150a21033dd3:id=openstack --force-share --output=json
Oct  1 12:42:39 np0005464891 nova_compute[259907]: Exit code: -6
Oct  1 12:42:39 np0005464891 nova_compute[259907]: Stdout: ''
Oct  1 12:42:39 np0005464891 nova_compute[259907]: Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-20.1.8.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n'
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server     self.force_reraise()
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server     raise self.value
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 11073, in external_instance_event
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server     self.extend_volume(context, instance, event.tag)
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/utils.py", line 1439, in decorated_function
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 214, in decorated_function
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server     compute_utils.add_instance_fault_from_exc(context,
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server     self.force_reraise()
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server     raise self.value
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10930, in extend_volume
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server     self.driver.extend_volume(context, connection_info, instance,
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2865, in extend_volume
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server     self._resize_attached_encrypted_volume(
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2804, in _resize_attached_encrypted_volume
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server     LOG.exception('Unknown error when attempting to find the '
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server     self.force_reraise()
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server     raise self.value
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2788, in _resize_attached_encrypted_volume
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server     info = images.privileged_qemu_img_info(path)
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/images.py", line 57, in privileged_qemu_img_info
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server     info = nova.privsep.qemu.privileged_qemu_img_info(path, format=format)
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server     return self.channel.remote_call(name, args, kwargs,
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server     raise exc_type(*result[2])
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-3d985315-9697-4d87-9a3d-150a21033dd3:id=openstack : Unexpected error while running command.
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-3d985315-9697-4d87-9a3d-150a21033dd3:id=openstack --force-share --output=json
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server Exit code: -6
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server Stdout: ''
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-20.1.8.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n'
Oct  1 12:42:39 np0005464891 nova_compute[259907]: 2025-10-01 16:42:39.508 2 ERROR oslo_messaging.rpc.server #033[00m
Oct  1 12:42:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:42:40 np0005464891 nova_compute[259907]: 2025-10-01 16:42:40.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:42:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/442180044' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:42:40 np0005464891 nova_compute[259907]: 2025-10-01 16:42:40.297 2 DEBUG oslo_concurrency.lockutils [None req-5f07bb85-b3fc-4272-9e13-ee065d028069 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Acquiring lock "c067f811-99a1-4d7a-a634-3a4c1db5830e" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:42:40 np0005464891 nova_compute[259907]: 2025-10-01 16:42:40.298 2 DEBUG oslo_concurrency.lockutils [None req-5f07bb85-b3fc-4272-9e13-ee065d028069 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lock "c067f811-99a1-4d7a-a634-3a4c1db5830e" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:42:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:42:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/442180044' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:42:40 np0005464891 nova_compute[259907]: 2025-10-01 16:42:40.315 2 INFO nova.compute.manager [None req-5f07bb85-b3fc-4272-9e13-ee065d028069 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Detaching volume 3d985315-9697-4d87-9a3d-150a21033dd3#033[00m
Oct  1 12:42:40 np0005464891 nova_compute[259907]: 2025-10-01 16:42:40.431 2 INFO nova.virt.block_device [None req-5f07bb85-b3fc-4272-9e13-ee065d028069 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Attempting to driver detach volume 3d985315-9697-4d87-9a3d-150a21033dd3 from mountpoint /dev/vdb#033[00m
Oct  1 12:42:40 np0005464891 nova_compute[259907]: 2025-10-01 16:42:40.549 2 DEBUG os_brick.encryptors [None req-5f07bb85-b3fc-4272-9e13-ee065d028069 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Using volume encryption metadata '{'encryption_key_id': '768b2a0a-2a52-4046-977b-800388b52ced', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-3d985315-9697-4d87-9a3d-150a21033dd3', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '3d985315-9697-4d87-9a3d-150a21033dd3', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'c067f811-99a1-4d7a-a634-3a4c1db5830e', 'attached_at': '', 'detached_at': '', 'volume_id': '3d985315-9697-4d87-9a3d-150a21033dd3', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct  1 12:42:40 np0005464891 nova_compute[259907]: 2025-10-01 16:42:40.560 2 DEBUG nova.virt.libvirt.driver [None req-5f07bb85-b3fc-4272-9e13-ee065d028069 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Attempting to detach device vdb from instance c067f811-99a1-4d7a-a634-3a4c1db5830e from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  1 12:42:40 np0005464891 nova_compute[259907]: 2025-10-01 16:42:40.561 2 DEBUG nova.virt.libvirt.guest [None req-5f07bb85-b3fc-4272-9e13-ee065d028069 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:42:40 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:42:40 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-3d985315-9697-4d87-9a3d-150a21033dd3">
Oct  1 12:42:40 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:42:40 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:42:40 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:42:40 np0005464891 nova_compute[259907]:  <serial>3d985315-9697-4d87-9a3d-150a21033dd3</serial>
Oct  1 12:42:40 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:42:40 np0005464891 nova_compute[259907]:  <encryption format="luks">
Oct  1 12:42:40 np0005464891 nova_compute[259907]:    <secret type="passphrase" uuid="3cd6045e-3c62-45b2-bcfc-245984582bcc"/>
Oct  1 12:42:40 np0005464891 nova_compute[259907]:  </encryption>
Oct  1 12:42:40 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:42:40 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:42:40 np0005464891 nova_compute[259907]: 2025-10-01 16:42:40.572 2 INFO nova.virt.libvirt.driver [None req-5f07bb85-b3fc-4272-9e13-ee065d028069 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Successfully detached device vdb from instance c067f811-99a1-4d7a-a634-3a4c1db5830e from the persistent domain config.#033[00m
Oct  1 12:42:40 np0005464891 nova_compute[259907]: 2025-10-01 16:42:40.573 2 DEBUG nova.virt.libvirt.driver [None req-5f07bb85-b3fc-4272-9e13-ee065d028069 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance c067f811-99a1-4d7a-a634-3a4c1db5830e from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  1 12:42:40 np0005464891 nova_compute[259907]: 2025-10-01 16:42:40.573 2 DEBUG nova.virt.libvirt.guest [None req-5f07bb85-b3fc-4272-9e13-ee065d028069 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:42:40 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:42:40 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-3d985315-9697-4d87-9a3d-150a21033dd3">
Oct  1 12:42:40 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:42:40 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:42:40 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:42:40 np0005464891 nova_compute[259907]:  <serial>3d985315-9697-4d87-9a3d-150a21033dd3</serial>
Oct  1 12:42:40 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:42:40 np0005464891 nova_compute[259907]:  <encryption format="luks">
Oct  1 12:42:40 np0005464891 nova_compute[259907]:    <secret type="passphrase" uuid="3cd6045e-3c62-45b2-bcfc-245984582bcc"/>
Oct  1 12:42:40 np0005464891 nova_compute[259907]:  </encryption>
Oct  1 12:42:40 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:42:40 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:42:40 np0005464891 nova_compute[259907]: 2025-10-01 16:42:40.705 2 DEBUG nova.virt.libvirt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Received event <DeviceRemovedEvent: 1759336960.7045476, c067f811-99a1-4d7a-a634-3a4c1db5830e => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  1 12:42:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1000: 321 pgs: 321 active+clean; 121 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Oct  1 12:42:40 np0005464891 nova_compute[259907]: 2025-10-01 16:42:40.708 2 DEBUG nova.virt.libvirt.driver [None req-5f07bb85-b3fc-4272-9e13-ee065d028069 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance c067f811-99a1-4d7a-a634-3a4c1db5830e _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  1 12:42:40 np0005464891 nova_compute[259907]: 2025-10-01 16:42:40.712 2 INFO nova.virt.libvirt.driver [None req-5f07bb85-b3fc-4272-9e13-ee065d028069 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Successfully detached device vdb from instance c067f811-99a1-4d7a-a634-3a4c1db5830e from the live domain config.#033[00m
Oct  1 12:42:41 np0005464891 nova_compute[259907]: 2025-10-01 16:42:41.330 2 DEBUG nova.objects.instance [None req-5f07bb85-b3fc-4272-9e13-ee065d028069 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lazy-loading 'flavor' on Instance uuid c067f811-99a1-4d7a-a634-3a4c1db5830e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:42:41 np0005464891 nova_compute[259907]: 2025-10-01 16:42:41.379 2 DEBUG oslo_concurrency.lockutils [None req-5f07bb85-b3fc-4272-9e13-ee065d028069 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lock "c067f811-99a1-4d7a-a634-3a4c1db5830e" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:42:41 np0005464891 nova_compute[259907]: 2025-10-01 16:42:41.774 2 DEBUG oslo_concurrency.lockutils [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Acquiring lock "c067f811-99a1-4d7a-a634-3a4c1db5830e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:42:41 np0005464891 nova_compute[259907]: 2025-10-01 16:42:41.775 2 DEBUG oslo_concurrency.lockutils [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lock "c067f811-99a1-4d7a-a634-3a4c1db5830e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:42:41 np0005464891 nova_compute[259907]: 2025-10-01 16:42:41.775 2 DEBUG oslo_concurrency.lockutils [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Acquiring lock "c067f811-99a1-4d7a-a634-3a4c1db5830e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:42:41 np0005464891 nova_compute[259907]: 2025-10-01 16:42:41.776 2 DEBUG oslo_concurrency.lockutils [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lock "c067f811-99a1-4d7a-a634-3a4c1db5830e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:42:41 np0005464891 nova_compute[259907]: 2025-10-01 16:42:41.776 2 DEBUG oslo_concurrency.lockutils [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lock "c067f811-99a1-4d7a-a634-3a4c1db5830e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:42:41 np0005464891 nova_compute[259907]: 2025-10-01 16:42:41.778 2 INFO nova.compute.manager [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Terminating instance#033[00m
Oct  1 12:42:41 np0005464891 nova_compute[259907]: 2025-10-01 16:42:41.780 2 DEBUG nova.compute.manager [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 12:42:42 np0005464891 kernel: tapa5d23fa4-49 (unregistering): left promiscuous mode
Oct  1 12:42:42 np0005464891 NetworkManager[44940]: <info>  [1759336962.0086] device (tapa5d23fa4-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 12:42:42 np0005464891 nova_compute[259907]: 2025-10-01 16:42:42.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:42 np0005464891 ovn_controller[152409]: 2025-10-01T16:42:42Z|00032|binding|INFO|Releasing lport a5d23fa4-4991-45da-a2a2-84f66c06fcee from this chassis (sb_readonly=0)
Oct  1 12:42:42 np0005464891 ovn_controller[152409]: 2025-10-01T16:42:42Z|00033|binding|INFO|Setting lport a5d23fa4-4991-45da-a2a2-84f66c06fcee down in Southbound
Oct  1 12:42:42 np0005464891 ovn_controller[152409]: 2025-10-01T16:42:42Z|00034|binding|INFO|Removing iface tapa5d23fa4-49 ovn-installed in OVS
Oct  1 12:42:42 np0005464891 nova_compute[259907]: 2025-10-01 16:42:42.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:42 np0005464891 nova_compute[259907]: 2025-10-01 16:42:42.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:42 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:42.052 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:7a:be 10.100.0.8'], port_security=['fa:16:3e:9d:7a:be 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c067f811-99a1-4d7a-a634-3a4c1db5830e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36957630-badc-42b5-ad26-5cdca3a519c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd89473c2be684cd0bea1fd04915d5d1b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcdfa1ae-8f87-403d-9e7b-02099e78c20c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.250'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51762d74-115a-4625-9f3e-27d14d10d9f1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=a5d23fa4-4991-45da-a2a2-84f66c06fcee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:42:42 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:42.053 162546 INFO neutron.agent.ovn.metadata.agent [-] Port a5d23fa4-4991-45da-a2a2-84f66c06fcee in datapath 36957630-badc-42b5-ad26-5cdca3a519c1 unbound from our chassis#033[00m
Oct  1 12:42:42 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:42.054 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 36957630-badc-42b5-ad26-5cdca3a519c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 12:42:42 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:42.056 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[0644096b-ba3f-4edc-a75c-92f902d9fa09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:42 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:42.056 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-36957630-badc-42b5-ad26-5cdca3a519c1 namespace which is not needed anymore#033[00m
Oct  1 12:42:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:42:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:42:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:42:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:42:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:42:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:42:42 np0005464891 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Oct  1 12:42:42 np0005464891 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 15.694s CPU time.
Oct  1 12:42:42 np0005464891 systemd-machined[214891]: Machine qemu-1-instance-00000001 terminated.
Oct  1 12:42:42 np0005464891 nova_compute[259907]: 2025-10-01 16:42:42.222 2 INFO nova.virt.libvirt.driver [-] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Instance destroyed successfully.#033[00m
Oct  1 12:42:42 np0005464891 nova_compute[259907]: 2025-10-01 16:42:42.222 2 DEBUG nova.objects.instance [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lazy-loading 'resources' on Instance uuid c067f811-99a1-4d7a-a634-3a4c1db5830e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:42:42 np0005464891 nova_compute[259907]: 2025-10-01 16:42:42.264 2 DEBUG nova.virt.libvirt.vif [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T16:41:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-EncryptedVolumesExtendAttachedTest-instance-1104832253',display_name='tempest-EncryptedVolumesExtendAttachedTest-instance-1104832253',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-encryptedvolumesextendattachedtest-instance-1104832253',id=1,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMnWoh78fN2kows9o5rLLFpLcGNgTIFnzTsvGOxoeM8MdE94J62h/z7pDu80RzC2YZ/BbirbdlveD3DsdRrs24cEjDPmZJ7NrjUJDw88Ghm5w5DmW0BLAwrnSuWpXfHayg==',key_name='tempest-keypair-588036381',keypairs=<?>,launch_index=0,launched_at=2025-10-01T16:42:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d89473c2be684cd0bea1fd04915d5d1b',ramdisk_id='',reservation_id='r-8w28e15x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-EncryptedVolumesExtendAttachedTest-2134626502',owner_user_name='tempest-EncryptedVolumesExtendAttachedTest-2134626502-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T16:42:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f99f9a421d8c468bb290009ac8393742',uuid=c067f811-99a1-4d7a-a634-3a4c1db5830e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a5d23fa4-4991-45da-a2a2-84f66c06fcee", "address": "fa:16:3e:9d:7a:be", "network": {"id": "36957630-badc-42b5-ad26-5cdca3a519c1", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-735724152-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d89473c2be684cd0bea1fd04915d5d1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5d23fa4-49", "ovs_interfaceid": "a5d23fa4-4991-45da-a2a2-84f66c06fcee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 12:42:42 np0005464891 nova_compute[259907]: 2025-10-01 16:42:42.265 2 DEBUG nova.network.os_vif_util [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Converting VIF {"id": "a5d23fa4-4991-45da-a2a2-84f66c06fcee", "address": "fa:16:3e:9d:7a:be", "network": {"id": "36957630-badc-42b5-ad26-5cdca3a519c1", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-735724152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d89473c2be684cd0bea1fd04915d5d1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5d23fa4-49", "ovs_interfaceid": "a5d23fa4-4991-45da-a2a2-84f66c06fcee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:42:42 np0005464891 nova_compute[259907]: 2025-10-01 16:42:42.265 2 DEBUG nova.network.os_vif_util [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:7a:be,bridge_name='br-int',has_traffic_filtering=True,id=a5d23fa4-4991-45da-a2a2-84f66c06fcee,network=Network(36957630-badc-42b5-ad26-5cdca3a519c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5d23fa4-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:42:42 np0005464891 nova_compute[259907]: 2025-10-01 16:42:42.266 2 DEBUG os_vif [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:7a:be,bridge_name='br-int',has_traffic_filtering=True,id=a5d23fa4-4991-45da-a2a2-84f66c06fcee,network=Network(36957630-badc-42b5-ad26-5cdca3a519c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5d23fa4-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 12:42:42 np0005464891 nova_compute[259907]: 2025-10-01 16:42:42.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:42 np0005464891 nova_compute[259907]: 2025-10-01 16:42:42.269 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5d23fa4-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:42:42 np0005464891 nova_compute[259907]: 2025-10-01 16:42:42.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:42 np0005464891 nova_compute[259907]: 2025-10-01 16:42:42.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:42 np0005464891 nova_compute[259907]: 2025-10-01 16:42:42.278 2 INFO os_vif [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:7a:be,bridge_name='br-int',has_traffic_filtering=True,id=a5d23fa4-4991-45da-a2a2-84f66c06fcee,network=Network(36957630-badc-42b5-ad26-5cdca3a519c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5d23fa4-49')#033[00m
Oct  1 12:42:42 np0005464891 neutron-haproxy-ovnmeta-36957630-badc-42b5-ad26-5cdca3a519c1[267996]: [NOTICE]   (268001) : haproxy version is 2.8.14-c23fe91
Oct  1 12:42:42 np0005464891 neutron-haproxy-ovnmeta-36957630-badc-42b5-ad26-5cdca3a519c1[267996]: [NOTICE]   (268001) : path to executable is /usr/sbin/haproxy
Oct  1 12:42:42 np0005464891 neutron-haproxy-ovnmeta-36957630-badc-42b5-ad26-5cdca3a519c1[267996]: [WARNING]  (268001) : Exiting Master process...
Oct  1 12:42:42 np0005464891 neutron-haproxy-ovnmeta-36957630-badc-42b5-ad26-5cdca3a519c1[267996]: [WARNING]  (268001) : Exiting Master process...
Oct  1 12:42:42 np0005464891 neutron-haproxy-ovnmeta-36957630-badc-42b5-ad26-5cdca3a519c1[267996]: [ALERT]    (268001) : Current worker (268003) exited with code 143 (Terminated)
Oct  1 12:42:42 np0005464891 neutron-haproxy-ovnmeta-36957630-badc-42b5-ad26-5cdca3a519c1[267996]: [WARNING]  (268001) : All workers exited. Exiting... (0)
Oct  1 12:42:42 np0005464891 systemd[1]: libpod-59154d3e33dc1181a583517ec0139c2c6e3de620cc200036660e89b25aade41e.scope: Deactivated successfully.
Oct  1 12:42:42 np0005464891 podman[269143]: 2025-10-01 16:42:42.324077463 +0000 UTC m=+0.104300876 container died 59154d3e33dc1181a583517ec0139c2c6e3de620cc200036660e89b25aade41e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-36957630-badc-42b5-ad26-5cdca3a519c1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:42:42 np0005464891 nova_compute[259907]: 2025-10-01 16:42:42.330 2 DEBUG nova.compute.manager [req-99fe0e48-b022-45df-a4f1-873070611b39 req-3a4cac04-8011-44ff-9e86-caf6252fe28a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Received event network-vif-unplugged-a5d23fa4-4991-45da-a2a2-84f66c06fcee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:42:42 np0005464891 nova_compute[259907]: 2025-10-01 16:42:42.331 2 DEBUG oslo_concurrency.lockutils [req-99fe0e48-b022-45df-a4f1-873070611b39 req-3a4cac04-8011-44ff-9e86-caf6252fe28a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "c067f811-99a1-4d7a-a634-3a4c1db5830e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:42:42 np0005464891 nova_compute[259907]: 2025-10-01 16:42:42.331 2 DEBUG oslo_concurrency.lockutils [req-99fe0e48-b022-45df-a4f1-873070611b39 req-3a4cac04-8011-44ff-9e86-caf6252fe28a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "c067f811-99a1-4d7a-a634-3a4c1db5830e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:42:42 np0005464891 nova_compute[259907]: 2025-10-01 16:42:42.332 2 DEBUG oslo_concurrency.lockutils [req-99fe0e48-b022-45df-a4f1-873070611b39 req-3a4cac04-8011-44ff-9e86-caf6252fe28a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "c067f811-99a1-4d7a-a634-3a4c1db5830e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:42:42 np0005464891 nova_compute[259907]: 2025-10-01 16:42:42.332 2 DEBUG nova.compute.manager [req-99fe0e48-b022-45df-a4f1-873070611b39 req-3a4cac04-8011-44ff-9e86-caf6252fe28a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] No waiting events found dispatching network-vif-unplugged-a5d23fa4-4991-45da-a2a2-84f66c06fcee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:42:42 np0005464891 nova_compute[259907]: 2025-10-01 16:42:42.333 2 DEBUG nova.compute.manager [req-99fe0e48-b022-45df-a4f1-873070611b39 req-3a4cac04-8011-44ff-9e86-caf6252fe28a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Received event network-vif-unplugged-a5d23fa4-4991-45da-a2a2-84f66c06fcee for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 12:42:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1001: 321 pgs: 321 active+clean; 121 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Oct  1 12:42:42 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-59154d3e33dc1181a583517ec0139c2c6e3de620cc200036660e89b25aade41e-userdata-shm.mount: Deactivated successfully.
Oct  1 12:42:42 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ab1561bd79171943b6245da13b1089a6a6db6a348d321ed0f133cbe996b0b071-merged.mount: Deactivated successfully.
Oct  1 12:42:42 np0005464891 podman[269143]: 2025-10-01 16:42:42.841798036 +0000 UTC m=+0.622021449 container cleanup 59154d3e33dc1181a583517ec0139c2c6e3de620cc200036660e89b25aade41e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-36957630-badc-42b5-ad26-5cdca3a519c1, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:42:42 np0005464891 systemd[1]: libpod-conmon-59154d3e33dc1181a583517ec0139c2c6e3de620cc200036660e89b25aade41e.scope: Deactivated successfully.
Oct  1 12:42:43 np0005464891 podman[269198]: 2025-10-01 16:42:43.062187614 +0000 UTC m=+0.189836903 container remove 59154d3e33dc1181a583517ec0139c2c6e3de620cc200036660e89b25aade41e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-36957630-badc-42b5-ad26-5cdca3a519c1, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  1 12:42:43 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:43.072 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[7c673e4d-371b-44c2-b72c-a7305dbc25fb]: (4, ('Wed Oct  1 04:42:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-36957630-badc-42b5-ad26-5cdca3a519c1 (59154d3e33dc1181a583517ec0139c2c6e3de620cc200036660e89b25aade41e)\n59154d3e33dc1181a583517ec0139c2c6e3de620cc200036660e89b25aade41e\nWed Oct  1 04:42:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-36957630-badc-42b5-ad26-5cdca3a519c1 (59154d3e33dc1181a583517ec0139c2c6e3de620cc200036660e89b25aade41e)\n59154d3e33dc1181a583517ec0139c2c6e3de620cc200036660e89b25aade41e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:43 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:43.075 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[bdab1909-016e-47e3-b8b9-e3912c783823]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:43 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:43.076 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36957630-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:42:43 np0005464891 kernel: tap36957630-b0: left promiscuous mode
Oct  1 12:42:43 np0005464891 nova_compute[259907]: 2025-10-01 16:42:43.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:43 np0005464891 nova_compute[259907]: 2025-10-01 16:42:43.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:43 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:43.098 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[82e8531c-b33b-4ab3-8ea8-09de6152eb37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:43 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:43.126 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[3aa73d0d-9c0b-4f39-ae76-33cb3c57aa13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:43 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:43.127 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[a7cde588-6816-45f0-a1ee-1ebeb81ef709]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:43 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:43.150 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[fb562661-5004-4fce-886f-8ad415e7a3b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 388194, 'reachable_time': 36298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269213, 'error': None, 'target': 'ovnmeta-36957630-badc-42b5-ad26-5cdca3a519c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:43 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:43.165 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-36957630-badc-42b5-ad26-5cdca3a519c1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 12:42:43 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:42:43.166 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[ab994753-3f0d-427a-9540-745752b41d58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:42:43 np0005464891 systemd[1]: run-netns-ovnmeta\x2d36957630\x2dbadc\x2d42b5\x2dad26\x2d5cdca3a519c1.mount: Deactivated successfully.
Oct  1 12:42:44 np0005464891 nova_compute[259907]: 2025-10-01 16:42:44.060 2 INFO nova.virt.libvirt.driver [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Deleting instance files /var/lib/nova/instances/c067f811-99a1-4d7a-a634-3a4c1db5830e_del#033[00m
Oct  1 12:42:44 np0005464891 nova_compute[259907]: 2025-10-01 16:42:44.061 2 INFO nova.virt.libvirt.driver [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Deletion of /var/lib/nova/instances/c067f811-99a1-4d7a-a634-3a4c1db5830e_del complete#033[00m
Oct  1 12:42:44 np0005464891 nova_compute[259907]: 2025-10-01 16:42:44.143 2 DEBUG nova.virt.libvirt.host [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Oct  1 12:42:44 np0005464891 nova_compute[259907]: 2025-10-01 16:42:44.144 2 INFO nova.virt.libvirt.host [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] UEFI support detected#033[00m
Oct  1 12:42:44 np0005464891 nova_compute[259907]: 2025-10-01 16:42:44.147 2 INFO nova.compute.manager [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Took 2.37 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 12:42:44 np0005464891 nova_compute[259907]: 2025-10-01 16:42:44.148 2 DEBUG oslo.service.loopingcall [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 12:42:44 np0005464891 nova_compute[259907]: 2025-10-01 16:42:44.148 2 DEBUG nova.compute.manager [-] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 12:42:44 np0005464891 nova_compute[259907]: 2025-10-01 16:42:44.148 2 DEBUG nova.network.neutron [-] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 12:42:44 np0005464891 nova_compute[259907]: 2025-10-01 16:42:44.489 2 DEBUG nova.compute.manager [req-51a62b35-bd82-4c15-92c2-2c92eb40676e req-0d6fcf43-cdd5-4225-94c6-436eeab76949 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Received event network-vif-plugged-a5d23fa4-4991-45da-a2a2-84f66c06fcee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:42:44 np0005464891 nova_compute[259907]: 2025-10-01 16:42:44.490 2 DEBUG oslo_concurrency.lockutils [req-51a62b35-bd82-4c15-92c2-2c92eb40676e req-0d6fcf43-cdd5-4225-94c6-436eeab76949 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "c067f811-99a1-4d7a-a634-3a4c1db5830e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:42:44 np0005464891 nova_compute[259907]: 2025-10-01 16:42:44.491 2 DEBUG oslo_concurrency.lockutils [req-51a62b35-bd82-4c15-92c2-2c92eb40676e req-0d6fcf43-cdd5-4225-94c6-436eeab76949 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "c067f811-99a1-4d7a-a634-3a4c1db5830e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:42:44 np0005464891 nova_compute[259907]: 2025-10-01 16:42:44.492 2 DEBUG oslo_concurrency.lockutils [req-51a62b35-bd82-4c15-92c2-2c92eb40676e req-0d6fcf43-cdd5-4225-94c6-436eeab76949 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "c067f811-99a1-4d7a-a634-3a4c1db5830e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:42:44 np0005464891 nova_compute[259907]: 2025-10-01 16:42:44.492 2 DEBUG nova.compute.manager [req-51a62b35-bd82-4c15-92c2-2c92eb40676e req-0d6fcf43-cdd5-4225-94c6-436eeab76949 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] No waiting events found dispatching network-vif-plugged-a5d23fa4-4991-45da-a2a2-84f66c06fcee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:42:44 np0005464891 nova_compute[259907]: 2025-10-01 16:42:44.493 2 WARNING nova.compute.manager [req-51a62b35-bd82-4c15-92c2-2c92eb40676e req-0d6fcf43-cdd5-4225-94c6-436eeab76949 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Received unexpected event network-vif-plugged-a5d23fa4-4991-45da-a2a2-84f66c06fcee for instance with vm_state active and task_state deleting.#033[00m
Oct  1 12:42:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1002: 321 pgs: 321 active+clean; 121 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 4.5 KiB/s wr, 67 op/s
Oct  1 12:42:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:42:45 np0005464891 nova_compute[259907]: 2025-10-01 16:42:45.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:45 np0005464891 nova_compute[259907]: 2025-10-01 16:42:45.364 2 DEBUG nova.network.neutron [-] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:42:45 np0005464891 nova_compute[259907]: 2025-10-01 16:42:45.391 2 INFO nova.compute.manager [-] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Took 1.24 seconds to deallocate network for instance.#033[00m
Oct  1 12:42:45 np0005464891 nova_compute[259907]: 2025-10-01 16:42:45.435 2 DEBUG oslo_concurrency.lockutils [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:42:45 np0005464891 nova_compute[259907]: 2025-10-01 16:42:45.435 2 DEBUG oslo_concurrency.lockutils [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:42:45 np0005464891 nova_compute[259907]: 2025-10-01 16:42:45.491 2 DEBUG oslo_concurrency.processutils [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:42:45 np0005464891 nova_compute[259907]: 2025-10-01 16:42:45.650 2 DEBUG nova.compute.manager [req-4ee73763-8197-43fb-af58-bf488eb816a1 req-f13e211b-8247-4d86-9062-55a44a5591fb af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Received event network-vif-deleted-a5d23fa4-4991-45da-a2a2-84f66c06fcee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:42:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:42:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/935780640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:42:45 np0005464891 nova_compute[259907]: 2025-10-01 16:42:45.971 2 DEBUG oslo_concurrency.processutils [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:42:45 np0005464891 nova_compute[259907]: 2025-10-01 16:42:45.982 2 DEBUG nova.compute.provider_tree [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:42:46 np0005464891 nova_compute[259907]: 2025-10-01 16:42:46.011 2 DEBUG nova.scheduler.client.report [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:42:46 np0005464891 nova_compute[259907]: 2025-10-01 16:42:46.040 2 DEBUG oslo_concurrency.lockutils [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:42:46 np0005464891 nova_compute[259907]: 2025-10-01 16:42:46.091 2 INFO nova.scheduler.client.report [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Deleted allocations for instance c067f811-99a1-4d7a-a634-3a4c1db5830e#033[00m
Oct  1 12:42:46 np0005464891 nova_compute[259907]: 2025-10-01 16:42:46.275 2 DEBUG oslo_concurrency.lockutils [None req-a0b5054d-6258-437b-9a26-9f2913c89982 f99f9a421d8c468bb290009ac8393742 d89473c2be684cd0bea1fd04915d5d1b - - default default] Lock "c067f811-99a1-4d7a-a634-3a4c1db5830e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.500s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:42:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1003: 321 pgs: 321 active+clean; 121 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 24 op/s
Oct  1 12:42:47 np0005464891 nova_compute[259907]: 2025-10-01 16:42:47.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1004: 321 pgs: 321 active+clean; 42 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 4.7 KiB/s wr, 48 op/s
Oct  1 12:42:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:42:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2067245165' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:42:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:42:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2067245165' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:42:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:42:50 np0005464891 podman[269241]: 2025-10-01 16:42:50.054213198 +0000 UTC m=+0.162040614 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible)
Oct  1 12:42:50 np0005464891 nova_compute[259907]: 2025-10-01 16:42:50.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:42:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/84884909' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:42:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:42:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/84884909' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:42:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1005: 321 pgs: 321 active+clean; 42 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.3 KiB/s wr, 29 op/s
Oct  1 12:42:52 np0005464891 nova_compute[259907]: 2025-10-01 16:42:52.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1006: 321 pgs: 321 active+clean; 42 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.8 KiB/s wr, 44 op/s
Oct  1 12:42:53 np0005464891 podman[269268]: 2025-10-01 16:42:53.957305997 +0000 UTC m=+0.070645816 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  1 12:42:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1007: 321 pgs: 321 active+clean; 41 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 3.3 KiB/s wr, 57 op/s
Oct  1 12:42:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:42:55 np0005464891 nova_compute[259907]: 2025-10-01 16:42:55.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:55 np0005464891 podman[269290]: 2025-10-01 16:42:55.963751004 +0000 UTC m=+0.068504026 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 12:42:56 np0005464891 nova_compute[259907]: 2025-10-01 16:42:56.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1008: 321 pgs: 321 active+clean; 41 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 52 op/s
Oct  1 12:42:56 np0005464891 nova_compute[259907]: 2025-10-01 16:42:56.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:57 np0005464891 nova_compute[259907]: 2025-10-01 16:42:57.221 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759336962.2192214, c067f811-99a1-4d7a-a634-3a4c1db5830e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:42:57 np0005464891 nova_compute[259907]: 2025-10-01 16:42:57.222 2 INFO nova.compute.manager [-] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] VM Stopped (Lifecycle Event)#033[00m
Oct  1 12:42:57 np0005464891 nova_compute[259907]: 2025-10-01 16:42:57.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:42:57 np0005464891 nova_compute[259907]: 2025-10-01 16:42:57.402 2 DEBUG nova.compute.manager [None req-93c971d9-020a-4476-a45e-7853c9194b64 - - - - - -] [instance: c067f811-99a1-4d7a-a634-3a4c1db5830e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:42:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1009: 321 pgs: 321 active+clean; 41 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 52 op/s
Oct  1 12:42:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:43:00 np0005464891 nova_compute[259907]: 2025-10-01 16:43:00.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1010: 321 pgs: 321 active+clean; 41 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1023 B/s wr, 28 op/s
Oct  1 12:43:02 np0005464891 nova_compute[259907]: 2025-10-01 16:43:02.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1011: 321 pgs: 321 active+clean; 41 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1023 B/s wr, 28 op/s
Oct  1 12:43:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 41 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 511 B/s wr, 13 op/s
Oct  1 12:43:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:43:05 np0005464891 nova_compute[259907]: 2025-10-01 16:43:05.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1013: 321 pgs: 321 active+clean; 41 MiB data, 243 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:43:07 np0005464891 nova_compute[259907]: 2025-10-01 16:43:07.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1014: 321 pgs: 321 active+clean; 41 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Oct  1 12:43:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Oct  1 12:43:09 np0005464891 podman[269319]: 2025-10-01 16:43:08.989601903 +0000 UTC m=+0.095196725 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  1 12:43:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Oct  1 12:43:09 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Oct  1 12:43:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:43:10.126225) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336990126274, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1568, "num_deletes": 261, "total_data_size": 2198209, "memory_usage": 2244784, "flush_reason": "Manual Compaction"}
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336990144579, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2160268, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19592, "largest_seqno": 21159, "table_properties": {"data_size": 2152874, "index_size": 4337, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16129, "raw_average_key_size": 20, "raw_value_size": 2137852, "raw_average_value_size": 2758, "num_data_blocks": 193, "num_entries": 775, "num_filter_entries": 775, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336874, "oldest_key_time": 1759336874, "file_creation_time": 1759336990, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 18613 microseconds, and 10878 cpu microseconds.
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:43:10.144833) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2160268 bytes OK
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:43:10.144919) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:43:10.147839) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:43:10.147873) EVENT_LOG_v1 {"time_micros": 1759336990147862, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:43:10.147908) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2191124, prev total WAL file size 2191124, number of live WAL files 2.
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:43:10.149933) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2109KB)], [47(6985KB)]
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336990150006, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9313197, "oldest_snapshot_seqno": -1}
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4421 keys, 7549028 bytes, temperature: kUnknown
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336990205008, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7549028, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7518142, "index_size": 18744, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 109581, "raw_average_key_size": 24, "raw_value_size": 7436751, "raw_average_value_size": 1682, "num_data_blocks": 780, "num_entries": 4421, "num_filter_entries": 4421, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759336990, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:43:10.205290) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7549028 bytes
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:43:10.206840) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.1 rd, 137.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 6.8 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(7.8) write-amplify(3.5) OK, records in: 4949, records dropped: 528 output_compression: NoCompression
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:43:10.206869) EVENT_LOG_v1 {"time_micros": 1759336990206856, "job": 24, "event": "compaction_finished", "compaction_time_micros": 55083, "compaction_time_cpu_micros": 31205, "output_level": 6, "num_output_files": 1, "total_output_size": 7549028, "num_input_records": 4949, "num_output_records": 4421, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336990207772, "job": 24, "event": "table_file_deletion", "file_number": 49}
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759336990210429, "job": 24, "event": "table_file_deletion", "file_number": 47}
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:43:10.149760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:43:10.210517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:43:10.210530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:43:10.210533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:43:10.210537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:43:10 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:43:10.210540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:43:10 np0005464891 nova_compute[259907]: 2025-10-01 16:43:10.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1016: 321 pgs: 321 active+clean; 41 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 409 B/s wr, 1 op/s
Oct  1 12:43:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Oct  1 12:43:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Oct  1 12:43:11 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Oct  1 12:43:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:43:12
Oct  1 12:43:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:43:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:43:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'images']
Oct  1 12:43:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:43:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:43:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:43:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:43:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:43:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:43:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:43:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:43:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:43:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:43:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:43:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:43:12 np0005464891 nova_compute[259907]: 2025-10-01 16:43:12.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:43:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:43:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:43:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:43:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:43:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:43:12.441 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:43:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:43:12.442 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:43:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:43:12.442 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:43:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:43:12 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2106267183' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:43:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:43:12 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2106267183' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:43:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1018: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.7 KiB/s wr, 39 op/s
Oct  1 12:43:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1019: 321 pgs: 321 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.6 KiB/s wr, 46 op/s
Oct  1 12:43:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:43:15 np0005464891 nova_compute[259907]: 2025-10-01 16:43:15.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Oct  1 12:43:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Oct  1 12:43:15 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Oct  1 12:43:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1021: 321 pgs: 321 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 2.2 KiB/s wr, 47 op/s
Oct  1 12:43:16 np0005464891 nova_compute[259907]: 2025-10-01 16:43:16.806 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:43:16 np0005464891 nova_compute[259907]: 2025-10-01 16:43:16.807 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 12:43:16 np0005464891 nova_compute[259907]: 2025-10-01 16:43:16.807 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 12:43:16 np0005464891 nova_compute[259907]: 2025-10-01 16:43:16.826 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 12:43:17 np0005464891 nova_compute[259907]: 2025-10-01 16:43:17.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:43:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3585728634' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:43:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:43:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3585728634' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:43:17 np0005464891 nova_compute[259907]: 2025-10-01 16:43:17.819 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:43:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:43:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/662880718' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:43:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:43:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/662880718' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:43:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:43:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/296503096' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:43:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:43:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/296503096' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:43:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1022: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 4.5 KiB/s wr, 73 op/s
Oct  1 12:43:18 np0005464891 nova_compute[259907]: 2025-10-01 16:43:18.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:43:18 np0005464891 nova_compute[259907]: 2025-10-01 16:43:18.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:43:18 np0005464891 nova_compute[259907]: 2025-10-01 16:43:18.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:43:18 np0005464891 nova_compute[259907]: 2025-10-01 16:43:18.804 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 12:43:18 np0005464891 nova_compute[259907]: 2025-10-01 16:43:18.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:43:18 np0005464891 nova_compute[259907]: 2025-10-01 16:43:18.834 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:43:18 np0005464891 nova_compute[259907]: 2025-10-01 16:43:18.835 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:43:18 np0005464891 nova_compute[259907]: 2025-10-01 16:43:18.835 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:43:18 np0005464891 nova_compute[259907]: 2025-10-01 16:43:18.835 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 12:43:18 np0005464891 nova_compute[259907]: 2025-10-01 16:43:18.835 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:43:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:43:19 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1948249033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:43:19 np0005464891 nova_compute[259907]: 2025-10-01 16:43:19.229 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:43:19 np0005464891 nova_compute[259907]: 2025-10-01 16:43:19.440 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:43:19 np0005464891 nova_compute[259907]: 2025-10-01 16:43:19.441 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4787MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 12:43:19 np0005464891 nova_compute[259907]: 2025-10-01 16:43:19.442 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:43:19 np0005464891 nova_compute[259907]: 2025-10-01 16:43:19.442 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:43:19 np0005464891 nova_compute[259907]: 2025-10-01 16:43:19.515 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 12:43:19 np0005464891 nova_compute[259907]: 2025-10-01 16:43:19.516 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 12:43:19 np0005464891 nova_compute[259907]: 2025-10-01 16:43:19.538 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:43:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:43:19 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2197795523' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:43:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:43:19 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2197795523' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:43:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:43:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Oct  1 12:43:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Oct  1 12:43:19 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Oct  1 12:43:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:43:19 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1028612982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:43:19 np0005464891 nova_compute[259907]: 2025-10-01 16:43:19.972 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:43:19 np0005464891 nova_compute[259907]: 2025-10-01 16:43:19.980 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:43:19 np0005464891 nova_compute[259907]: 2025-10-01 16:43:19.998 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:43:20 np0005464891 nova_compute[259907]: 2025-10-01 16:43:20.019 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 12:43:20 np0005464891 nova_compute[259907]: 2025-10-01 16:43:20.020 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:43:20 np0005464891 nova_compute[259907]: 2025-10-01 16:43:20.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1024: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 3.2 KiB/s wr, 35 op/s
Oct  1 12:43:20 np0005464891 podman[269386]: 2025-10-01 16:43:20.991504106 +0000 UTC m=+0.099941166 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:43:21 np0005464891 nova_compute[259907]: 2025-10-01 16:43:21.017 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:43:21 np0005464891 nova_compute[259907]: 2025-10-01 16:43:21.039 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:43:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:43:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:43:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:43:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:43:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 12:43:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:43:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.087256625643029e-07 of space, bias 1.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:43:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:43:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:43:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:43:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  1 12:43:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:43:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:43:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:43:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:43:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:43:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:43:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:43:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:43:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:43:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:43:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:43:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:43:22 np0005464891 nova_compute[259907]: 2025-10-01 16:43:22.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1025: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.5 KiB/s wr, 32 op/s
Oct  1 12:43:22 np0005464891 nova_compute[259907]: 2025-10-01 16:43:22.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:43:23 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:43:23.997 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:43:23 np0005464891 nova_compute[259907]: 2025-10-01 16:43:23.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:23 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:43:23.998 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 12:43:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1026: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 3.1 KiB/s wr, 74 op/s
Oct  1 12:43:24 np0005464891 nova_compute[259907]: 2025-10-01 16:43:24.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:43:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:43:25 np0005464891 podman[269413]: 2025-10-01 16:43:25.000615637 +0000 UTC m=+0.095746899 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  1 12:43:25 np0005464891 nova_compute[259907]: 2025-10-01 16:43:25.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:25 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 12:43:25 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 4704 writes, 21K keys, 4704 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 4704 writes, 4704 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1436 writes, 6742 keys, 1436 commit groups, 1.0 writes per commit group, ingest: 9.10 MB, 0.02 MB/s#012Interval WAL: 1436 writes, 1436 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     38.3      0.63              0.08        12    0.053       0      0       0.0       0.0#012  L6      1/0    7.20 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2     54.8     44.9      1.73              0.30        11    0.157     48K   5793       0.0       0.0#012 Sum      1/0    7.20 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     40.1     43.1      2.36              0.38        23    0.103     48K   5793       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.3     95.4     95.3      0.56              0.19        12    0.047     28K   3616       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     54.8     44.9      1.73              0.30        11    0.157     48K   5793       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     38.5      0.63              0.08        11    0.057       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.6      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.024, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 2.4 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55bddc5951f0#2 capacity: 304.00 MB usage: 8.70 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000137 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(554,8.30 MB,2.72919%) FilterBlock(24,142.86 KB,0.0458918%) IndexBlock(24,270.89 KB,0.0870203%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  1 12:43:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1027: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 2.9 KiB/s wr, 68 op/s
Oct  1 12:43:26 np0005464891 podman[269433]: 2025-10-01 16:43:26.956762324 +0000 UTC m=+0.070402119 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:43:27 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:43:27.000 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:43:27 np0005464891 nova_compute[259907]: 2025-10-01 16:43:27.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1028: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1023 B/s wr, 46 op/s
Oct  1 12:43:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:43:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:43:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:43:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:43:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:43:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:43:28 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 2cde8d2a-a6a9-4428-9da8-504fba34e9a0 does not exist
Oct  1 12:43:28 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 5f7d657d-dcd2-426e-938d-e766ed18ef32 does not exist
Oct  1 12:43:28 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 640b6bcc-3b1c-4d60-a292-89c9a8b273f6 does not exist
Oct  1 12:43:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:43:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:43:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:43:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:43:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:43:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:43:29 np0005464891 podman[269725]: 2025-10-01 16:43:29.57653356 +0000 UTC m=+0.095441762 container create 1ac53c7f7f4c9253e33f96eb34efff80f7029b500f12bb80ce2bf3bd16a1a231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 12:43:29 np0005464891 podman[269725]: 2025-10-01 16:43:29.509736102 +0000 UTC m=+0.028644364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:43:29 np0005464891 systemd[1]: Started libpod-conmon-1ac53c7f7f4c9253e33f96eb34efff80f7029b500f12bb80ce2bf3bd16a1a231.scope.
Oct  1 12:43:29 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:43:29 np0005464891 podman[269725]: 2025-10-01 16:43:29.729040018 +0000 UTC m=+0.247948200 container init 1ac53c7f7f4c9253e33f96eb34efff80f7029b500f12bb80ce2bf3bd16a1a231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_blackburn, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:43:29 np0005464891 podman[269725]: 2025-10-01 16:43:29.74320483 +0000 UTC m=+0.262112992 container start 1ac53c7f7f4c9253e33f96eb34efff80f7029b500f12bb80ce2bf3bd16a1a231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_blackburn, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 12:43:29 np0005464891 vigilant_blackburn[269741]: 167 167
Oct  1 12:43:29 np0005464891 systemd[1]: libpod-1ac53c7f7f4c9253e33f96eb34efff80f7029b500f12bb80ce2bf3bd16a1a231.scope: Deactivated successfully.
Oct  1 12:43:29 np0005464891 podman[269725]: 2025-10-01 16:43:29.775030921 +0000 UTC m=+0.293939113 container attach 1ac53c7f7f4c9253e33f96eb34efff80f7029b500f12bb80ce2bf3bd16a1a231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:43:29 np0005464891 podman[269725]: 2025-10-01 16:43:29.776431929 +0000 UTC m=+0.295340101 container died 1ac53c7f7f4c9253e33f96eb34efff80f7029b500f12bb80ce2bf3bd16a1a231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:43:29 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:43:29 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:43:29 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:43:29 np0005464891 systemd[1]: var-lib-containers-storage-overlay-acf0191108ad257a6c21237fd2002b0cede7ef2cc11cc5e8f8efc95f6114cf05-merged.mount: Deactivated successfully.
Oct  1 12:43:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:43:30 np0005464891 podman[269725]: 2025-10-01 16:43:30.000996582 +0000 UTC m=+0.519904754 container remove 1ac53c7f7f4c9253e33f96eb34efff80f7029b500f12bb80ce2bf3bd16a1a231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  1 12:43:30 np0005464891 systemd[1]: libpod-conmon-1ac53c7f7f4c9253e33f96eb34efff80f7029b500f12bb80ce2bf3bd16a1a231.scope: Deactivated successfully.
Oct  1 12:43:30 np0005464891 podman[269765]: 2025-10-01 16:43:30.258864856 +0000 UTC m=+0.121056520 container create 00ff2408afb82e7862785c8aa66d6e79b604187eba6efd866f8e0da4d24296be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 12:43:30 np0005464891 podman[269765]: 2025-10-01 16:43:30.168146376 +0000 UTC m=+0.030338060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:43:30 np0005464891 nova_compute[259907]: 2025-10-01 16:43:30.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:30 np0005464891 systemd[1]: Started libpod-conmon-00ff2408afb82e7862785c8aa66d6e79b604187eba6efd866f8e0da4d24296be.scope.
Oct  1 12:43:30 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:43:30 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d119f057ac43aaa3e312e1508af34fdbf5468e4078a3504bca1f5ce159a9ab1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:43:30 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d119f057ac43aaa3e312e1508af34fdbf5468e4078a3504bca1f5ce159a9ab1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:43:30 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d119f057ac43aaa3e312e1508af34fdbf5468e4078a3504bca1f5ce159a9ab1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:43:30 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d119f057ac43aaa3e312e1508af34fdbf5468e4078a3504bca1f5ce159a9ab1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:43:30 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d119f057ac43aaa3e312e1508af34fdbf5468e4078a3504bca1f5ce159a9ab1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:43:30 np0005464891 podman[269765]: 2025-10-01 16:43:30.408774884 +0000 UTC m=+0.270966578 container init 00ff2408afb82e7862785c8aa66d6e79b604187eba6efd866f8e0da4d24296be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:43:30 np0005464891 podman[269765]: 2025-10-01 16:43:30.41840563 +0000 UTC m=+0.280597294 container start 00ff2408afb82e7862785c8aa66d6e79b604187eba6efd866f8e0da4d24296be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_euler, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:43:30 np0005464891 podman[269765]: 2025-10-01 16:43:30.442220069 +0000 UTC m=+0.304411753 container attach 00ff2408afb82e7862785c8aa66d6e79b604187eba6efd866f8e0da4d24296be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_euler, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 12:43:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1029: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 944 B/s wr, 42 op/s
Oct  1 12:43:31 np0005464891 stupefied_euler[269781]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:43:31 np0005464891 stupefied_euler[269781]: --> relative data size: 1.0
Oct  1 12:43:31 np0005464891 stupefied_euler[269781]: --> All data devices are unavailable
Oct  1 12:43:31 np0005464891 systemd[1]: libpod-00ff2408afb82e7862785c8aa66d6e79b604187eba6efd866f8e0da4d24296be.scope: Deactivated successfully.
Oct  1 12:43:31 np0005464891 systemd[1]: libpod-00ff2408afb82e7862785c8aa66d6e79b604187eba6efd866f8e0da4d24296be.scope: Consumed 1.164s CPU time.
Oct  1 12:43:31 np0005464891 podman[269810]: 2025-10-01 16:43:31.669258545 +0000 UTC m=+0.028241902 container died 00ff2408afb82e7862785c8aa66d6e79b604187eba6efd866f8e0da4d24296be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:43:31 np0005464891 systemd[1]: var-lib-containers-storage-overlay-1d119f057ac43aaa3e312e1508af34fdbf5468e4078a3504bca1f5ce159a9ab1-merged.mount: Deactivated successfully.
Oct  1 12:43:31 np0005464891 podman[269810]: 2025-10-01 16:43:31.799096776 +0000 UTC m=+0.158080133 container remove 00ff2408afb82e7862785c8aa66d6e79b604187eba6efd866f8e0da4d24296be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_euler, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:43:31 np0005464891 systemd[1]: libpod-conmon-00ff2408afb82e7862785c8aa66d6e79b604187eba6efd866f8e0da4d24296be.scope: Deactivated successfully.
Oct  1 12:43:32 np0005464891 nova_compute[259907]: 2025-10-01 16:43:32.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:32 np0005464891 podman[269964]: 2025-10-01 16:43:32.503744071 +0000 UTC m=+0.043376552 container create c792ce84ee1e2ad090f7d565bf5e0a5ac52fb0404036859362f077face75b88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:43:32 np0005464891 systemd[1]: Started libpod-conmon-c792ce84ee1e2ad090f7d565bf5e0a5ac52fb0404036859362f077face75b88a.scope.
Oct  1 12:43:32 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:43:32 np0005464891 podman[269964]: 2025-10-01 16:43:32.48639939 +0000 UTC m=+0.026031901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:43:32 np0005464891 podman[269964]: 2025-10-01 16:43:32.657662859 +0000 UTC m=+0.197295370 container init c792ce84ee1e2ad090f7d565bf5e0a5ac52fb0404036859362f077face75b88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  1 12:43:32 np0005464891 podman[269964]: 2025-10-01 16:43:32.667215813 +0000 UTC m=+0.206848304 container start c792ce84ee1e2ad090f7d565bf5e0a5ac52fb0404036859362f077face75b88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lamarr, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 12:43:32 np0005464891 podman[269964]: 2025-10-01 16:43:32.673063835 +0000 UTC m=+0.212696326 container attach c792ce84ee1e2ad090f7d565bf5e0a5ac52fb0404036859362f077face75b88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lamarr, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 12:43:32 np0005464891 youthful_lamarr[269980]: 167 167
Oct  1 12:43:32 np0005464891 systemd[1]: libpod-c792ce84ee1e2ad090f7d565bf5e0a5ac52fb0404036859362f077face75b88a.scope: Deactivated successfully.
Oct  1 12:43:32 np0005464891 podman[269964]: 2025-10-01 16:43:32.674834644 +0000 UTC m=+0.214467155 container died c792ce84ee1e2ad090f7d565bf5e0a5ac52fb0404036859362f077face75b88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lamarr, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 12:43:32 np0005464891 systemd[1]: var-lib-containers-storage-overlay-aa96eef8c197abf3436e24eceec96903f7189bf1108bc82e60f3d45bd09aad07-merged.mount: Deactivated successfully.
Oct  1 12:43:32 np0005464891 podman[269964]: 2025-10-01 16:43:32.737492227 +0000 UTC m=+0.277124718 container remove c792ce84ee1e2ad090f7d565bf5e0a5ac52fb0404036859362f077face75b88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lamarr, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:43:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1030: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 852 B/s wr, 38 op/s
Oct  1 12:43:32 np0005464891 systemd[1]: libpod-conmon-c792ce84ee1e2ad090f7d565bf5e0a5ac52fb0404036859362f077face75b88a.scope: Deactivated successfully.
Oct  1 12:43:32 np0005464891 podman[270004]: 2025-10-01 16:43:32.991395532 +0000 UTC m=+0.120799043 container create cf66e8f68debff3b1674a4e6506be4576b6124aac6baa5c92f21ac16daa3a425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hofstadter, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:43:32 np0005464891 podman[270004]: 2025-10-01 16:43:32.899468828 +0000 UTC m=+0.028872369 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:43:33 np0005464891 systemd[1]: Started libpod-conmon-cf66e8f68debff3b1674a4e6506be4576b6124aac6baa5c92f21ac16daa3a425.scope.
Oct  1 12:43:33 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:43:33 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bc707340d162fe391a52978012a1a3184712d654ba0c515352990db5afdee82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:43:33 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bc707340d162fe391a52978012a1a3184712d654ba0c515352990db5afdee82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:43:33 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bc707340d162fe391a52978012a1a3184712d654ba0c515352990db5afdee82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:43:33 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bc707340d162fe391a52978012a1a3184712d654ba0c515352990db5afdee82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:43:33 np0005464891 podman[270004]: 2025-10-01 16:43:33.092759446 +0000 UTC m=+0.222162967 container init cf66e8f68debff3b1674a4e6506be4576b6124aac6baa5c92f21ac16daa3a425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hofstadter, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 12:43:33 np0005464891 podman[270004]: 2025-10-01 16:43:33.10665108 +0000 UTC m=+0.236054601 container start cf66e8f68debff3b1674a4e6506be4576b6124aac6baa5c92f21ac16daa3a425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 12:43:33 np0005464891 podman[270004]: 2025-10-01 16:43:33.112898963 +0000 UTC m=+0.242302484 container attach cf66e8f68debff3b1674a4e6506be4576b6124aac6baa5c92f21ac16daa3a425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hofstadter, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]: {
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:    "0": [
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:        {
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "devices": [
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "/dev/loop3"
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            ],
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "lv_name": "ceph_lv0",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "lv_size": "21470642176",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "name": "ceph_lv0",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "tags": {
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.cluster_name": "ceph",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.crush_device_class": "",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.encrypted": "0",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.osd_id": "0",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.type": "block",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.vdo": "0"
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            },
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "type": "block",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "vg_name": "ceph_vg0"
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:        }
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:    ],
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:    "1": [
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:        {
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "devices": [
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "/dev/loop4"
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            ],
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "lv_name": "ceph_lv1",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "lv_size": "21470642176",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "name": "ceph_lv1",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "tags": {
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.cluster_name": "ceph",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.crush_device_class": "",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.encrypted": "0",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.osd_id": "1",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.type": "block",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.vdo": "0"
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            },
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "type": "block",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "vg_name": "ceph_vg1"
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:        }
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:    ],
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:    "2": [
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:        {
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "devices": [
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "/dev/loop5"
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            ],
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "lv_name": "ceph_lv2",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "lv_size": "21470642176",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "name": "ceph_lv2",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "tags": {
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.cluster_name": "ceph",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.crush_device_class": "",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.encrypted": "0",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.osd_id": "2",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.type": "block",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:                "ceph.vdo": "0"
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            },
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "type": "block",
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:            "vg_name": "ceph_vg2"
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:        }
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]:    ]
Oct  1 12:43:33 np0005464891 wonderful_hofstadter[270021]: }
Oct  1 12:43:33 np0005464891 systemd[1]: libpod-cf66e8f68debff3b1674a4e6506be4576b6124aac6baa5c92f21ac16daa3a425.scope: Deactivated successfully.
Oct  1 12:43:33 np0005464891 podman[270004]: 2025-10-01 16:43:33.871570171 +0000 UTC m=+1.000973722 container died cf66e8f68debff3b1674a4e6506be4576b6124aac6baa5c92f21ac16daa3a425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hofstadter, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:43:33 np0005464891 systemd[1]: var-lib-containers-storage-overlay-8bc707340d162fe391a52978012a1a3184712d654ba0c515352990db5afdee82-merged.mount: Deactivated successfully.
Oct  1 12:43:33 np0005464891 podman[270004]: 2025-10-01 16:43:33.962082425 +0000 UTC m=+1.091485936 container remove cf66e8f68debff3b1674a4e6506be4576b6124aac6baa5c92f21ac16daa3a425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Oct  1 12:43:33 np0005464891 systemd[1]: libpod-conmon-cf66e8f68debff3b1674a4e6506be4576b6124aac6baa5c92f21ac16daa3a425.scope: Deactivated successfully.
Oct  1 12:43:34 np0005464891 podman[270183]: 2025-10-01 16:43:34.677867188 +0000 UTC m=+0.054141480 container create 9641defeb421c34f4de43c512d672603a93c007d52af52f0dafd8a31f242cef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lehmann, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  1 12:43:34 np0005464891 systemd[1]: Started libpod-conmon-9641defeb421c34f4de43c512d672603a93c007d52af52f0dafd8a31f242cef7.scope.
Oct  1 12:43:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1031: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 767 B/s wr, 35 op/s
Oct  1 12:43:34 np0005464891 podman[270183]: 2025-10-01 16:43:34.648605858 +0000 UTC m=+0.024880210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:43:34 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:43:34 np0005464891 podman[270183]: 2025-10-01 16:43:34.77085826 +0000 UTC m=+0.147132562 container init 9641defeb421c34f4de43c512d672603a93c007d52af52f0dafd8a31f242cef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:43:34 np0005464891 podman[270183]: 2025-10-01 16:43:34.781854254 +0000 UTC m=+0.158128556 container start 9641defeb421c34f4de43c512d672603a93c007d52af52f0dafd8a31f242cef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:43:34 np0005464891 podman[270183]: 2025-10-01 16:43:34.786529934 +0000 UTC m=+0.162804336 container attach 9641defeb421c34f4de43c512d672603a93c007d52af52f0dafd8a31f242cef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:43:34 np0005464891 heuristic_lehmann[270199]: 167 167
Oct  1 12:43:34 np0005464891 systemd[1]: libpod-9641defeb421c34f4de43c512d672603a93c007d52af52f0dafd8a31f242cef7.scope: Deactivated successfully.
Oct  1 12:43:34 np0005464891 podman[270183]: 2025-10-01 16:43:34.791303495 +0000 UTC m=+0.167577797 container died 9641defeb421c34f4de43c512d672603a93c007d52af52f0dafd8a31f242cef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Oct  1 12:43:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:43:35 np0005464891 nova_compute[259907]: 2025-10-01 16:43:35.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:35 np0005464891 systemd[1]: var-lib-containers-storage-overlay-91d5f9c2e22a817d6899118405af467099a3ccff37cf883b2db3af492d419d6e-merged.mount: Deactivated successfully.
Oct  1 12:43:35 np0005464891 podman[270183]: 2025-10-01 16:43:35.534535128 +0000 UTC m=+0.910809430 container remove 9641defeb421c34f4de43c512d672603a93c007d52af52f0dafd8a31f242cef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lehmann, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 12:43:35 np0005464891 systemd[1]: libpod-conmon-9641defeb421c34f4de43c512d672603a93c007d52af52f0dafd8a31f242cef7.scope: Deactivated successfully.
Oct  1 12:43:35 np0005464891 podman[270225]: 2025-10-01 16:43:35.775557665 +0000 UTC m=+0.096571133 container create 924c9ecff23fd23fba987deb9c3e65c4513626cbe2a6270c274b41cbe6250a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lichterman, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:43:35 np0005464891 podman[270225]: 2025-10-01 16:43:35.713156648 +0000 UTC m=+0.034170216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:43:35 np0005464891 systemd[1]: Started libpod-conmon-924c9ecff23fd23fba987deb9c3e65c4513626cbe2a6270c274b41cbe6250a9e.scope.
Oct  1 12:43:35 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:43:35 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07247287f67989713133f2fb69f81d38b0f42846e53be6cebd0662592cb56979/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:43:35 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07247287f67989713133f2fb69f81d38b0f42846e53be6cebd0662592cb56979/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:43:35 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07247287f67989713133f2fb69f81d38b0f42846e53be6cebd0662592cb56979/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:43:35 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07247287f67989713133f2fb69f81d38b0f42846e53be6cebd0662592cb56979/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:43:35 np0005464891 podman[270225]: 2025-10-01 16:43:35.983742864 +0000 UTC m=+0.304756362 container init 924c9ecff23fd23fba987deb9c3e65c4513626cbe2a6270c274b41cbe6250a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  1 12:43:35 np0005464891 podman[270225]: 2025-10-01 16:43:35.997982248 +0000 UTC m=+0.318995716 container start 924c9ecff23fd23fba987deb9c3e65c4513626cbe2a6270c274b41cbe6250a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lichterman, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 12:43:36 np0005464891 podman[270225]: 2025-10-01 16:43:36.003487901 +0000 UTC m=+0.324501399 container attach 924c9ecff23fd23fba987deb9c3e65c4513626cbe2a6270c274b41cbe6250a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 12:43:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:43:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3583659978' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:43:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:43:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3583659978' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:43:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1032: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:43:36 np0005464891 great_lichterman[270242]: {
Oct  1 12:43:36 np0005464891 great_lichterman[270242]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:43:36 np0005464891 great_lichterman[270242]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:43:36 np0005464891 great_lichterman[270242]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:43:36 np0005464891 great_lichterman[270242]:        "osd_id": 2,
Oct  1 12:43:36 np0005464891 great_lichterman[270242]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:43:36 np0005464891 great_lichterman[270242]:        "type": "bluestore"
Oct  1 12:43:36 np0005464891 great_lichterman[270242]:    },
Oct  1 12:43:36 np0005464891 great_lichterman[270242]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:43:36 np0005464891 great_lichterman[270242]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:43:36 np0005464891 great_lichterman[270242]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:43:36 np0005464891 great_lichterman[270242]:        "osd_id": 0,
Oct  1 12:43:36 np0005464891 great_lichterman[270242]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:43:36 np0005464891 great_lichterman[270242]:        "type": "bluestore"
Oct  1 12:43:36 np0005464891 great_lichterman[270242]:    },
Oct  1 12:43:36 np0005464891 great_lichterman[270242]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:43:36 np0005464891 great_lichterman[270242]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:43:36 np0005464891 great_lichterman[270242]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:43:36 np0005464891 great_lichterman[270242]:        "osd_id": 1,
Oct  1 12:43:36 np0005464891 great_lichterman[270242]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:43:36 np0005464891 great_lichterman[270242]:        "type": "bluestore"
Oct  1 12:43:36 np0005464891 great_lichterman[270242]:    }
Oct  1 12:43:36 np0005464891 great_lichterman[270242]: }
Oct  1 12:43:37 np0005464891 systemd[1]: libpod-924c9ecff23fd23fba987deb9c3e65c4513626cbe2a6270c274b41cbe6250a9e.scope: Deactivated successfully.
Oct  1 12:43:37 np0005464891 systemd[1]: libpod-924c9ecff23fd23fba987deb9c3e65c4513626cbe2a6270c274b41cbe6250a9e.scope: Consumed 1.042s CPU time.
Oct  1 12:43:37 np0005464891 podman[270225]: 2025-10-01 16:43:37.03439889 +0000 UTC m=+1.355412358 container died 924c9ecff23fd23fba987deb9c3e65c4513626cbe2a6270c274b41cbe6250a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:43:37 np0005464891 systemd[1]: var-lib-containers-storage-overlay-07247287f67989713133f2fb69f81d38b0f42846e53be6cebd0662592cb56979-merged.mount: Deactivated successfully.
Oct  1 12:43:37 np0005464891 podman[270225]: 2025-10-01 16:43:37.094916985 +0000 UTC m=+1.415930463 container remove 924c9ecff23fd23fba987deb9c3e65c4513626cbe2a6270c274b41cbe6250a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:43:37 np0005464891 systemd[1]: libpod-conmon-924c9ecff23fd23fba987deb9c3e65c4513626cbe2a6270c274b41cbe6250a9e.scope: Deactivated successfully.
Oct  1 12:43:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:43:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:43:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:43:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:43:37 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 6a4f3b6f-ccd0-4050-8c65-abc17e697212 does not exist
Oct  1 12:43:37 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 47a34e81-b9db-4892-99e8-eff55bd19683 does not exist
Oct  1 12:43:37 np0005464891 nova_compute[259907]: 2025-10-01 16:43:37.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:38 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:43:38 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:43:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1033: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:43:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:43:39 np0005464891 podman[270337]: 2025-10-01 16:43:39.968684697 +0000 UTC m=+0.074172102 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  1 12:43:40 np0005464891 nova_compute[259907]: 2025-10-01 16:43:40.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1034: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:43:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:43:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:43:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:43:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:43:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:43:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:43:42 np0005464891 nova_compute[259907]: 2025-10-01 16:43:42.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1035: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:43:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1036: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:43:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:43:45 np0005464891 nova_compute[259907]: 2025-10-01 16:43:45.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1037: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail
Oct  1 12:43:47 np0005464891 nova_compute[259907]: 2025-10-01 16:43:47.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1038: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Oct  1 12:43:49 np0005464891 nova_compute[259907]: 2025-10-01 16:43:49.284 2 DEBUG oslo_concurrency.lockutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Acquiring lock "d9f491a2-42e5-4c54-8880-44ac34eb626b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:43:49 np0005464891 nova_compute[259907]: 2025-10-01 16:43:49.285 2 DEBUG oslo_concurrency.lockutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Lock "d9f491a2-42e5-4c54-8880-44ac34eb626b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:43:49 np0005464891 nova_compute[259907]: 2025-10-01 16:43:49.421 2 DEBUG nova.compute.manager [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 12:43:49 np0005464891 nova_compute[259907]: 2025-10-01 16:43:49.733 2 DEBUG oslo_concurrency.lockutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:43:49 np0005464891 nova_compute[259907]: 2025-10-01 16:43:49.734 2 DEBUG oslo_concurrency.lockutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:43:49 np0005464891 nova_compute[259907]: 2025-10-01 16:43:49.746 2 DEBUG nova.virt.hardware [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 12:43:49 np0005464891 nova_compute[259907]: 2025-10-01 16:43:49.746 2 INFO nova.compute.claims [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 12:43:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:43:50 np0005464891 nova_compute[259907]: 2025-10-01 16:43:50.002 2 DEBUG oslo_concurrency.processutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:43:50 np0005464891 nova_compute[259907]: 2025-10-01 16:43:50.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:43:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3835029528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:43:50 np0005464891 nova_compute[259907]: 2025-10-01 16:43:50.484 2 DEBUG oslo_concurrency.processutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:43:50 np0005464891 nova_compute[259907]: 2025-10-01 16:43:50.493 2 DEBUG nova.compute.provider_tree [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:43:50 np0005464891 nova_compute[259907]: 2025-10-01 16:43:50.518 2 DEBUG nova.scheduler.client.report [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:43:50 np0005464891 nova_compute[259907]: 2025-10-01 16:43:50.547 2 DEBUG oslo_concurrency.lockutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:43:50 np0005464891 nova_compute[259907]: 2025-10-01 16:43:50.548 2 DEBUG nova.compute.manager [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 12:43:50 np0005464891 nova_compute[259907]: 2025-10-01 16:43:50.613 2 DEBUG nova.compute.manager [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 12:43:50 np0005464891 nova_compute[259907]: 2025-10-01 16:43:50.614 2 DEBUG nova.network.neutron [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 12:43:50 np0005464891 nova_compute[259907]: 2025-10-01 16:43:50.635 2 INFO nova.virt.libvirt.driver [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 12:43:50 np0005464891 nova_compute[259907]: 2025-10-01 16:43:50.661 2 DEBUG nova.compute.manager [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 12:43:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1039: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Oct  1 12:43:50 np0005464891 nova_compute[259907]: 2025-10-01 16:43:50.755 2 DEBUG nova.compute.manager [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 12:43:50 np0005464891 nova_compute[259907]: 2025-10-01 16:43:50.757 2 DEBUG nova.virt.libvirt.driver [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 12:43:50 np0005464891 nova_compute[259907]: 2025-10-01 16:43:50.757 2 INFO nova.virt.libvirt.driver [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Creating image(s)#033[00m
Oct  1 12:43:50 np0005464891 nova_compute[259907]: 2025-10-01 16:43:50.783 2 DEBUG nova.storage.rbd_utils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] rbd image d9f491a2-42e5-4c54-8880-44ac34eb626b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:43:50 np0005464891 nova_compute[259907]: 2025-10-01 16:43:50.808 2 DEBUG nova.storage.rbd_utils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] rbd image d9f491a2-42e5-4c54-8880-44ac34eb626b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:43:50 np0005464891 nova_compute[259907]: 2025-10-01 16:43:50.830 2 DEBUG nova.storage.rbd_utils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] rbd image d9f491a2-42e5-4c54-8880-44ac34eb626b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:43:50 np0005464891 nova_compute[259907]: 2025-10-01 16:43:50.834 2 DEBUG oslo_concurrency.processutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:43:50 np0005464891 nova_compute[259907]: 2025-10-01 16:43:50.905 2 DEBUG oslo_concurrency.processutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:43:50 np0005464891 nova_compute[259907]: 2025-10-01 16:43:50.907 2 DEBUG oslo_concurrency.lockutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Acquiring lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:43:50 np0005464891 nova_compute[259907]: 2025-10-01 16:43:50.908 2 DEBUG oslo_concurrency.lockutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:43:50 np0005464891 nova_compute[259907]: 2025-10-01 16:43:50.908 2 DEBUG oslo_concurrency.lockutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:43:50 np0005464891 nova_compute[259907]: 2025-10-01 16:43:50.934 2 DEBUG nova.storage.rbd_utils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] rbd image d9f491a2-42e5-4c54-8880-44ac34eb626b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:43:50 np0005464891 nova_compute[259907]: 2025-10-01 16:43:50.938 2 DEBUG oslo_concurrency.processutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa d9f491a2-42e5-4c54-8880-44ac34eb626b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:43:51 np0005464891 nova_compute[259907]: 2025-10-01 16:43:51.296 2 DEBUG nova.policy [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2758287e7044c858c94aaf781adb257', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da59880eadac40a5aee733e9a8862b35', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 12:43:51 np0005464891 nova_compute[259907]: 2025-10-01 16:43:51.307 2 DEBUG oslo_concurrency.processutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa d9f491a2-42e5-4c54-8880-44ac34eb626b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.368s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:43:51 np0005464891 nova_compute[259907]: 2025-10-01 16:43:51.395 2 DEBUG nova.storage.rbd_utils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] resizing rbd image d9f491a2-42e5-4c54-8880-44ac34eb626b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  1 12:43:51 np0005464891 nova_compute[259907]: 2025-10-01 16:43:51.489 2 DEBUG nova.objects.instance [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Lazy-loading 'migration_context' on Instance uuid d9f491a2-42e5-4c54-8880-44ac34eb626b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:43:51 np0005464891 nova_compute[259907]: 2025-10-01 16:43:51.503 2 DEBUG nova.virt.libvirt.driver [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  1 12:43:51 np0005464891 nova_compute[259907]: 2025-10-01 16:43:51.504 2 DEBUG nova.virt.libvirt.driver [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Ensure instance console log exists: /var/lib/nova/instances/d9f491a2-42e5-4c54-8880-44ac34eb626b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 12:43:51 np0005464891 nova_compute[259907]: 2025-10-01 16:43:51.504 2 DEBUG oslo_concurrency.lockutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:43:51 np0005464891 nova_compute[259907]: 2025-10-01 16:43:51.504 2 DEBUG oslo_concurrency.lockutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:43:51 np0005464891 nova_compute[259907]: 2025-10-01 16:43:51.505 2 DEBUG oslo_concurrency.lockutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:43:52 np0005464891 podman[270545]: 2025-10-01 16:43:52.013055363 +0000 UTC m=+0.118733656 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:43:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:43:52 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1341362112' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:43:52 np0005464891 nova_compute[259907]: 2025-10-01 16:43:52.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1040: 321 pgs: 321 active+clean; 56 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 333 KiB/s wr, 3 op/s
Oct  1 12:43:52 np0005464891 nova_compute[259907]: 2025-10-01 16:43:52.843 2 DEBUG nova.network.neutron [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Successfully created port: aa58c525-bc7a-4509-b618-b480cb075e2d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 12:43:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Oct  1 12:43:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Oct  1 12:43:53 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Oct  1 12:43:53 np0005464891 nova_compute[259907]: 2025-10-01 16:43:53.924 2 DEBUG nova.network.neutron [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Successfully updated port: aa58c525-bc7a-4509-b618-b480cb075e2d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 12:43:53 np0005464891 nova_compute[259907]: 2025-10-01 16:43:53.958 2 DEBUG oslo_concurrency.lockutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Acquiring lock "refresh_cache-d9f491a2-42e5-4c54-8880-44ac34eb626b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:43:53 np0005464891 nova_compute[259907]: 2025-10-01 16:43:53.959 2 DEBUG oslo_concurrency.lockutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Acquired lock "refresh_cache-d9f491a2-42e5-4c54-8880-44ac34eb626b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:43:53 np0005464891 nova_compute[259907]: 2025-10-01 16:43:53.959 2 DEBUG nova.network.neutron [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.014 2 DEBUG nova.compute.manager [req-16591369-4a80-483c-acda-1a085a07eef9 req-fa7f4ef4-79b1-4126-a1ab-3ee529ea605d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Received event network-changed-aa58c525-bc7a-4509-b618-b480cb075e2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.014 2 DEBUG nova.compute.manager [req-16591369-4a80-483c-acda-1a085a07eef9 req-fa7f4ef4-79b1-4126-a1ab-3ee529ea605d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Refreshing instance network info cache due to event network-changed-aa58c525-bc7a-4509-b618-b480cb075e2d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.015 2 DEBUG oslo_concurrency.lockutils [req-16591369-4a80-483c-acda-1a085a07eef9 req-fa7f4ef4-79b1-4126-a1ab-3ee529ea605d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-d9f491a2-42e5-4c54-8880-44ac34eb626b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.091 2 DEBUG nova.network.neutron [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 12:43:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Oct  1 12:43:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1042: 321 pgs: 321 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.1 MiB/s wr, 36 op/s
Oct  1 12:43:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Oct  1 12:43:54 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.865 2 DEBUG nova.network.neutron [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Updating instance_info_cache with network_info: [{"id": "aa58c525-bc7a-4509-b618-b480cb075e2d", "address": "fa:16:3e:3f:e1:03", "network": {"id": "28978ee3-dc5b-4d90-b999-9a1bd25f6fc6", "bridge": "br-int", "label": "tempest-VolumesActionsTest-50432632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da59880eadac40a5aee733e9a8862b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa58c525-bc", "ovs_interfaceid": "aa58c525-bc7a-4509-b618-b480cb075e2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:43:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.907 2 DEBUG oslo_concurrency.lockutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Releasing lock "refresh_cache-d9f491a2-42e5-4c54-8880-44ac34eb626b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.908 2 DEBUG nova.compute.manager [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Instance network_info: |[{"id": "aa58c525-bc7a-4509-b618-b480cb075e2d", "address": "fa:16:3e:3f:e1:03", "network": {"id": "28978ee3-dc5b-4d90-b999-9a1bd25f6fc6", "bridge": "br-int", "label": "tempest-VolumesActionsTest-50432632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da59880eadac40a5aee733e9a8862b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa58c525-bc", "ovs_interfaceid": "aa58c525-bc7a-4509-b618-b480cb075e2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.909 2 DEBUG oslo_concurrency.lockutils [req-16591369-4a80-483c-acda-1a085a07eef9 req-fa7f4ef4-79b1-4126-a1ab-3ee529ea605d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-d9f491a2-42e5-4c54-8880-44ac34eb626b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.910 2 DEBUG nova.network.neutron [req-16591369-4a80-483c-acda-1a085a07eef9 req-fa7f4ef4-79b1-4126-a1ab-3ee529ea605d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Refreshing network info cache for port aa58c525-bc7a-4509-b618-b480cb075e2d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.915 2 DEBUG nova.virt.libvirt.driver [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Start _get_guest_xml network_info=[{"id": "aa58c525-bc7a-4509-b618-b480cb075e2d", "address": "fa:16:3e:3f:e1:03", "network": {"id": "28978ee3-dc5b-4d90-b999-9a1bd25f6fc6", "bridge": "br-int", "label": "tempest-VolumesActionsTest-50432632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da59880eadac40a5aee733e9a8862b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa58c525-bc", "ovs_interfaceid": "aa58c525-bc7a-4509-b618-b480cb075e2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'image_id': 'f01c1e7c-fea3-4433-a44a-d71153552c78'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.925 2 WARNING nova.virt.libvirt.driver [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.932 2 DEBUG nova.virt.libvirt.host [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.933 2 DEBUG nova.virt.libvirt.host [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.936 2 DEBUG nova.virt.libvirt.host [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.937 2 DEBUG nova.virt.libvirt.host [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.938 2 DEBUG nova.virt.libvirt.driver [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.939 2 DEBUG nova.virt.hardware [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.939 2 DEBUG nova.virt.hardware [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.940 2 DEBUG nova.virt.hardware [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.940 2 DEBUG nova.virt.hardware [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.940 2 DEBUG nova.virt.hardware [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.941 2 DEBUG nova.virt.hardware [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.941 2 DEBUG nova.virt.hardware [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.942 2 DEBUG nova.virt.hardware [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.942 2 DEBUG nova.virt.hardware [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.942 2 DEBUG nova.virt.hardware [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.943 2 DEBUG nova.virt.hardware [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 12:43:54 np0005464891 nova_compute[259907]: 2025-10-01 16:43:54.948 2 DEBUG oslo_concurrency.processutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:43:55 np0005464891 nova_compute[259907]: 2025-10-01 16:43:55.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:43:55 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1026393254' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:43:55 np0005464891 nova_compute[259907]: 2025-10-01 16:43:55.511 2 DEBUG oslo_concurrency.processutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:43:55 np0005464891 nova_compute[259907]: 2025-10-01 16:43:55.555 2 DEBUG nova.storage.rbd_utils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] rbd image d9f491a2-42e5-4c54-8880-44ac34eb626b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:43:55 np0005464891 nova_compute[259907]: 2025-10-01 16:43:55.562 2 DEBUG oslo_concurrency.processutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:43:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Oct  1 12:43:55 np0005464891 podman[270632]: 2025-10-01 16:43:55.970316 +0000 UTC m=+0.075883000 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  1 12:43:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Oct  1 12:43:56 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Oct  1 12:43:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:43:56 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/776779104' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.235 2 DEBUG oslo_concurrency.processutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.672s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.237 2 DEBUG nova.virt.libvirt.vif [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:43:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1474942647',display_name='tempest-VolumesActionsTest-instance-1474942647',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1474942647',id=2,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da59880eadac40a5aee733e9a8862b35',ramdisk_id='',reservation_id='r-gylt71ig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-455475153',owner_user_name='tempest-VolumesActionsTest-455475153-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:43:50Z,user_data=None,user_id='c2758287e7044c858c94aaf781adb257',uuid=d9f491a2-42e5-4c54-8880-44ac34eb626b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa58c525-bc7a-4509-b618-b480cb075e2d", "address": "fa:16:3e:3f:e1:03", "network": {"id": "28978ee3-dc5b-4d90-b999-9a1bd25f6fc6", "bridge": "br-int", "label": "tempest-VolumesActionsTest-50432632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da59880eadac40a5aee733e9a8862b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa58c525-bc", "ovs_interfaceid": "aa58c525-bc7a-4509-b618-b480cb075e2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.237 2 DEBUG nova.network.os_vif_util [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Converting VIF {"id": "aa58c525-bc7a-4509-b618-b480cb075e2d", "address": "fa:16:3e:3f:e1:03", "network": {"id": "28978ee3-dc5b-4d90-b999-9a1bd25f6fc6", "bridge": "br-int", "label": "tempest-VolumesActionsTest-50432632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da59880eadac40a5aee733e9a8862b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa58c525-bc", "ovs_interfaceid": "aa58c525-bc7a-4509-b618-b480cb075e2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.238 2 DEBUG nova.network.os_vif_util [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:e1:03,bridge_name='br-int',has_traffic_filtering=True,id=aa58c525-bc7a-4509-b618-b480cb075e2d,network=Network(28978ee3-dc5b-4d90-b999-9a1bd25f6fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa58c525-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.239 2 DEBUG nova.objects.instance [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Lazy-loading 'pci_devices' on Instance uuid d9f491a2-42e5-4c54-8880-44ac34eb626b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.358 2 DEBUG nova.network.neutron [req-16591369-4a80-483c-acda-1a085a07eef9 req-fa7f4ef4-79b1-4126-a1ab-3ee529ea605d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Updated VIF entry in instance network info cache for port aa58c525-bc7a-4509-b618-b480cb075e2d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.359 2 DEBUG nova.network.neutron [req-16591369-4a80-483c-acda-1a085a07eef9 req-fa7f4ef4-79b1-4126-a1ab-3ee529ea605d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Updating instance_info_cache with network_info: [{"id": "aa58c525-bc7a-4509-b618-b480cb075e2d", "address": "fa:16:3e:3f:e1:03", "network": {"id": "28978ee3-dc5b-4d90-b999-9a1bd25f6fc6", "bridge": "br-int", "label": "tempest-VolumesActionsTest-50432632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da59880eadac40a5aee733e9a8862b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa58c525-bc", "ovs_interfaceid": "aa58c525-bc7a-4509-b618-b480cb075e2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.382 2 DEBUG nova.virt.libvirt.driver [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] End _get_guest_xml xml=<domain type="kvm">
Oct  1 12:43:56 np0005464891 nova_compute[259907]:  <uuid>d9f491a2-42e5-4c54-8880-44ac34eb626b</uuid>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:  <name>instance-00000002</name>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <nova:name>tempest-VolumesActionsTest-instance-1474942647</nova:name>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 16:43:54</nova:creationTime>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 12:43:56 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:        <nova:user uuid="c2758287e7044c858c94aaf781adb257">tempest-VolumesActionsTest-455475153-project-member</nova:user>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:        <nova:project uuid="da59880eadac40a5aee733e9a8862b35">tempest-VolumesActionsTest-455475153</nova:project>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <nova:root type="image" uuid="f01c1e7c-fea3-4433-a44a-d71153552c78"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:        <nova:port uuid="aa58c525-bc7a-4509-b618-b480cb075e2d">
Oct  1 12:43:56 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <system>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <entry name="serial">d9f491a2-42e5-4c54-8880-44ac34eb626b</entry>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <entry name="uuid">d9f491a2-42e5-4c54-8880-44ac34eb626b</entry>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    </system>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:  <os>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:  </clock>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/d9f491a2-42e5-4c54-8880-44ac34eb626b_disk">
Oct  1 12:43:56 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:43:56 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/d9f491a2-42e5-4c54-8880-44ac34eb626b_disk.config">
Oct  1 12:43:56 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:43:56 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:3f:e1:03"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <target dev="tapaa58c525-bc"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/d9f491a2-42e5-4c54-8880-44ac34eb626b/console.log" append="off"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    </serial>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <video>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 12:43:56 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 12:43:56 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:43:56 np0005464891 nova_compute[259907]: </domain>
Oct  1 12:43:56 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.384 2 DEBUG nova.compute.manager [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Preparing to wait for external event network-vif-plugged-aa58c525-bc7a-4509-b618-b480cb075e2d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.385 2 DEBUG oslo_concurrency.lockutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Acquiring lock "d9f491a2-42e5-4c54-8880-44ac34eb626b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.385 2 DEBUG oslo_concurrency.lockutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Lock "d9f491a2-42e5-4c54-8880-44ac34eb626b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.386 2 DEBUG oslo_concurrency.lockutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Lock "d9f491a2-42e5-4c54-8880-44ac34eb626b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.387 2 DEBUG nova.virt.libvirt.vif [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:43:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1474942647',display_name='tempest-VolumesActionsTest-instance-1474942647',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1474942647',id=2,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da59880eadac40a5aee733e9a8862b35',ramdisk_id='',reservation_id='r-gylt71ig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-455475153',owner_user_name='tempest-VolumesActionsTest-455475153-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:43:50Z,user_data=None,user_id='c2758287e7044c858c94aaf781adb257',uuid=d9f491a2-42e5-4c54-8880-44ac34eb626b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa58c525-bc7a-4509-b618-b480cb075e2d", "address": "fa:16:3e:3f:e1:03", "network": {"id": "28978ee3-dc5b-4d90-b999-9a1bd25f6fc6", "bridge": "br-int", "label": "tempest-VolumesActionsTest-50432632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da59880eadac40a5aee733e9a8862b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa58c525-bc", "ovs_interfaceid": "aa58c525-bc7a-4509-b618-b480cb075e2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.388 2 DEBUG nova.network.os_vif_util [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Converting VIF {"id": "aa58c525-bc7a-4509-b618-b480cb075e2d", "address": "fa:16:3e:3f:e1:03", "network": {"id": "28978ee3-dc5b-4d90-b999-9a1bd25f6fc6", "bridge": "br-int", "label": "tempest-VolumesActionsTest-50432632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da59880eadac40a5aee733e9a8862b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa58c525-bc", "ovs_interfaceid": "aa58c525-bc7a-4509-b618-b480cb075e2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.389 2 DEBUG nova.network.os_vif_util [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:e1:03,bridge_name='br-int',has_traffic_filtering=True,id=aa58c525-bc7a-4509-b618-b480cb075e2d,network=Network(28978ee3-dc5b-4d90-b999-9a1bd25f6fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa58c525-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.390 2 DEBUG os_vif [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:e1:03,bridge_name='br-int',has_traffic_filtering=True,id=aa58c525-bc7a-4509-b618-b480cb075e2d,network=Network(28978ee3-dc5b-4d90-b999-9a1bd25f6fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa58c525-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.392 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.393 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.399 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa58c525-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.400 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaa58c525-bc, col_values=(('external_ids', {'iface-id': 'aa58c525-bc7a-4509-b618-b480cb075e2d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:e1:03', 'vm-uuid': 'd9f491a2-42e5-4c54-8880-44ac34eb626b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:43:56 np0005464891 NetworkManager[44940]: <info>  [1759337036.4039] manager: (tapaa58c525-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.414 2 INFO os_vif [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:e1:03,bridge_name='br-int',has_traffic_filtering=True,id=aa58c525-bc7a-4509-b618-b480cb075e2d,network=Network(28978ee3-dc5b-4d90-b999-9a1bd25f6fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa58c525-bc')#033[00m
Oct  1 12:43:56 np0005464891 nova_compute[259907]: 2025-10-01 16:43:56.443 2 DEBUG oslo_concurrency.lockutils [req-16591369-4a80-483c-acda-1a085a07eef9 req-fa7f4ef4-79b1-4126-a1ab-3ee529ea605d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-d9f491a2-42e5-4c54-8880-44ac34eb626b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:43:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1045: 321 pgs: 321 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 3.5 MiB/s wr, 58 op/s
Oct  1 12:43:57 np0005464891 nova_compute[259907]: 2025-10-01 16:43:57.151 2 DEBUG nova.virt.libvirt.driver [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:43:57 np0005464891 nova_compute[259907]: 2025-10-01 16:43:57.151 2 DEBUG nova.virt.libvirt.driver [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:43:57 np0005464891 nova_compute[259907]: 2025-10-01 16:43:57.151 2 DEBUG nova.virt.libvirt.driver [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] No VIF found with MAC fa:16:3e:3f:e1:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:43:57 np0005464891 nova_compute[259907]: 2025-10-01 16:43:57.152 2 INFO nova.virt.libvirt.driver [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Using config drive#033[00m
Oct  1 12:43:57 np0005464891 nova_compute[259907]: 2025-10-01 16:43:57.331 2 DEBUG nova.storage.rbd_utils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] rbd image d9f491a2-42e5-4c54-8880-44ac34eb626b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:43:57 np0005464891 nova_compute[259907]: 2025-10-01 16:43:57.882 2 INFO nova.virt.libvirt.driver [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Creating config drive at /var/lib/nova/instances/d9f491a2-42e5-4c54-8880-44ac34eb626b/disk.config#033[00m
Oct  1 12:43:57 np0005464891 nova_compute[259907]: 2025-10-01 16:43:57.894 2 DEBUG oslo_concurrency.processutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d9f491a2-42e5-4c54-8880-44ac34eb626b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjiysk91c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:43:57 np0005464891 podman[270675]: 2025-10-01 16:43:57.984468772 +0000 UTC m=+0.093699403 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  1 12:43:58 np0005464891 nova_compute[259907]: 2025-10-01 16:43:58.042 2 DEBUG oslo_concurrency.processutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d9f491a2-42e5-4c54-8880-44ac34eb626b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjiysk91c" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:43:58 np0005464891 nova_compute[259907]: 2025-10-01 16:43:58.081 2 DEBUG nova.storage.rbd_utils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] rbd image d9f491a2-42e5-4c54-8880-44ac34eb626b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:43:58 np0005464891 nova_compute[259907]: 2025-10-01 16:43:58.087 2 DEBUG oslo_concurrency.processutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d9f491a2-42e5-4c54-8880-44ac34eb626b/disk.config d9f491a2-42e5-4c54-8880-44ac34eb626b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:43:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Oct  1 12:43:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1046: 321 pgs: 321 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 2.9 MiB/s wr, 82 op/s
Oct  1 12:43:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Oct  1 12:43:59 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Oct  1 12:43:59 np0005464891 nova_compute[259907]: 2025-10-01 16:43:59.729 2 DEBUG oslo_concurrency.processutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d9f491a2-42e5-4c54-8880-44ac34eb626b/disk.config d9f491a2-42e5-4c54-8880-44ac34eb626b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:43:59 np0005464891 nova_compute[259907]: 2025-10-01 16:43:59.730 2 INFO nova.virt.libvirt.driver [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Deleting local config drive /var/lib/nova/instances/d9f491a2-42e5-4c54-8880-44ac34eb626b/disk.config because it was imported into RBD.#033[00m
Oct  1 12:43:59 np0005464891 kernel: tapaa58c525-bc: entered promiscuous mode
Oct  1 12:43:59 np0005464891 NetworkManager[44940]: <info>  [1759337039.8225] manager: (tapaa58c525-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Oct  1 12:43:59 np0005464891 ovn_controller[152409]: 2025-10-01T16:43:59Z|00035|binding|INFO|Claiming lport aa58c525-bc7a-4509-b618-b480cb075e2d for this chassis.
Oct  1 12:43:59 np0005464891 ovn_controller[152409]: 2025-10-01T16:43:59Z|00036|binding|INFO|aa58c525-bc7a-4509-b618-b480cb075e2d: Claiming fa:16:3e:3f:e1:03 10.100.0.7
Oct  1 12:43:59 np0005464891 nova_compute[259907]: 2025-10-01 16:43:59.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:59 np0005464891 systemd-udevd[270747]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:43:59 np0005464891 systemd-machined[214891]: New machine qemu-2-instance-00000002.
Oct  1 12:43:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:43:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Oct  1 12:43:59 np0005464891 NetworkManager[44940]: <info>  [1759337039.9199] device (tapaa58c525-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:43:59 np0005464891 NetworkManager[44940]: <info>  [1759337039.9213] device (tapaa58c525-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 12:43:59 np0005464891 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Oct  1 12:43:59 np0005464891 nova_compute[259907]: 2025-10-01 16:43:59.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:43:59 np0005464891 ovn_controller[152409]: 2025-10-01T16:43:59Z|00037|binding|INFO|Setting lport aa58c525-bc7a-4509-b618-b480cb075e2d ovn-installed in OVS
Oct  1 12:43:59 np0005464891 nova_compute[259907]: 2025-10-01 16:43:59.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:00 np0005464891 ovn_controller[152409]: 2025-10-01T16:44:00Z|00038|binding|INFO|Setting lport aa58c525-bc7a-4509-b618-b480cb075e2d up in Southbound
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.014 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:e1:03 10.100.0.7'], port_security=['fa:16:3e:3f:e1:03 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd9f491a2-42e5-4c54-8880-44ac34eb626b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da59880eadac40a5aee733e9a8862b35', 'neutron:revision_number': '2', 'neutron:security_group_ids': '94595f66-532c-47cb-b0d0-e4bf8a5d83ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e3a14ca-0153-471d-a40d-4422e6ebe1c6, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=aa58c525-bc7a-4509-b618-b480cb075e2d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.016 162546 INFO neutron.agent.ovn.metadata.agent [-] Port aa58c525-bc7a-4509-b618-b480cb075e2d in datapath 28978ee3-dc5b-4d90-b999-9a1bd25f6fc6 bound to our chassis#033[00m
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.017 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28978ee3-dc5b-4d90-b999-9a1bd25f6fc6#033[00m
Oct  1 12:44:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.033 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[462ea65d-f55b-453a-b598-10d0062f13c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.034 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap28978ee3-d1 in ovnmeta-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.036 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap28978ee3-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.036 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[39c38afb-1a6b-4b34-ac84-e2d7270e5a58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.036 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[73152055-5143-41e2-94f9-6063938d0f08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.051 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[a188e04c-cd18-4642-939e-e3d201277b08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.071 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[1198227f-fedb-4526-bcae-d930f9ee88ef]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:44:00 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.109 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[52696bcb-b8c8-4d4f-97c1-f64528bb613b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:44:00 np0005464891 NetworkManager[44940]: <info>  [1759337040.1179] manager: (tap28978ee3-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.118 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[cc0b27f8-6ec6-4008-a312-bdb527cc6971]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:44:00 np0005464891 systemd-udevd[270750]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.156 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[0f5475a9-e629-46f5-b44e-c6e7f3cef7b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.160 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[c5bf3f3e-2b60-49dc-ba02-481397844dec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:44:00 np0005464891 NetworkManager[44940]: <info>  [1759337040.1894] device (tap28978ee3-d0): carrier: link connected
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.193 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[84dc4300-4a04-4c64-b288-f7db0d5781a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.214 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[6fe5cecb-30f7-471e-bce0-b695f328271d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28978ee3-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:bd:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 398876, 'reachable_time': 18420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270781, 'error': None, 'target': 'ovnmeta-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.235 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[6a0a7a68-3154-4364-8153-c5ffe0a1bae4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:bdcf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 398876, 'tstamp': 398876}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270782, 'error': None, 'target': 'ovnmeta-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.255 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[9aaafc5c-1f2c-4013-aa30-b5d19326ca68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28978ee3-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:bd:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 306, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 306, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 398876, 'reachable_time': 18420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270783, 'error': None, 'target': 'ovnmeta-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.291 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[236fdacd-16f1-4e8d-9da7-99ea87c01608]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:44:00 np0005464891 nova_compute[259907]: 2025-10-01 16:44:00.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.366 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[fe08bd46-3f1e-4c06-acae-a1b6d87e480b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.368 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28978ee3-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.368 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.369 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28978ee3-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:44:00 np0005464891 nova_compute[259907]: 2025-10-01 16:44:00.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:00 np0005464891 NetworkManager[44940]: <info>  [1759337040.3721] manager: (tap28978ee3-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Oct  1 12:44:00 np0005464891 kernel: tap28978ee3-d0: entered promiscuous mode
Oct  1 12:44:00 np0005464891 nova_compute[259907]: 2025-10-01 16:44:00.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.376 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28978ee3-d0, col_values=(('external_ids', {'iface-id': 'ea89f2ce-4435-4138-b04e-8597f4689216'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:44:00 np0005464891 nova_compute[259907]: 2025-10-01 16:44:00.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.381 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/28978ee3-dc5b-4d90-b999-9a1bd25f6fc6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/28978ee3-dc5b-4d90-b999-9a1bd25f6fc6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 12:44:00 np0005464891 ovn_controller[152409]: 2025-10-01T16:44:00Z|00039|binding|INFO|Releasing lport ea89f2ce-4435-4138-b04e-8597f4689216 from this chassis (sb_readonly=0)
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.382 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[ec74911e-323d-4af9-96bd-a81756ade018]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.383 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/28978ee3-dc5b-4d90-b999-9a1bd25f6fc6.pid.haproxy
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID 28978ee3-dc5b-4d90-b999-9a1bd25f6fc6
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 12:44:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:00.385 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6', 'env', 'PROCESS_TAG=haproxy-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/28978ee3-dc5b-4d90-b999-9a1bd25f6fc6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 12:44:00 np0005464891 nova_compute[259907]: 2025-10-01 16:44:00.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1049: 321 pgs: 321 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Oct  1 12:44:00 np0005464891 podman[270849]: 2025-10-01 16:44:00.792199358 +0000 UTC m=+0.031999057 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 12:44:01 np0005464891 podman[270849]: 2025-10-01 16:44:01.365134107 +0000 UTC m=+0.604933736 container create 92092e2062135e1299b8a49500a95f57516c06a9895c3053d60beced320f5023 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:01 np0005464891 systemd[1]: Started libpod-conmon-92092e2062135e1299b8a49500a95f57516c06a9895c3053d60beced320f5023.scope.
Oct  1 12:44:01 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:44:01 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c038b2c4e8bfb0c342968f44913f89ba55e452cde34aa1e3ffca1c5c041c706/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 12:44:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:44:01 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4261420424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:44:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:44:01 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4261420424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.655 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337041.6544402, d9f491a2-42e5-4c54-8880-44ac34eb626b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.655 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] VM Started (Lifecycle Event)#033[00m
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.705 2 DEBUG nova.compute.manager [req-fa2d87f0-157e-4b32-b1a0-947eb166841c req-1161f5fb-67e0-43fd-9f26-dd38f9ddb0af af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Received event network-vif-plugged-aa58c525-bc7a-4509-b618-b480cb075e2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.706 2 DEBUG oslo_concurrency.lockutils [req-fa2d87f0-157e-4b32-b1a0-947eb166841c req-1161f5fb-67e0-43fd-9f26-dd38f9ddb0af af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "d9f491a2-42e5-4c54-8880-44ac34eb626b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.706 2 DEBUG oslo_concurrency.lockutils [req-fa2d87f0-157e-4b32-b1a0-947eb166841c req-1161f5fb-67e0-43fd-9f26-dd38f9ddb0af af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "d9f491a2-42e5-4c54-8880-44ac34eb626b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.707 2 DEBUG oslo_concurrency.lockutils [req-fa2d87f0-157e-4b32-b1a0-947eb166841c req-1161f5fb-67e0-43fd-9f26-dd38f9ddb0af af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "d9f491a2-42e5-4c54-8880-44ac34eb626b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.707 2 DEBUG nova.compute.manager [req-fa2d87f0-157e-4b32-b1a0-947eb166841c req-1161f5fb-67e0-43fd-9f26-dd38f9ddb0af af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Processing event network-vif-plugged-aa58c525-bc7a-4509-b618-b480cb075e2d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.710 2 DEBUG nova.compute.manager [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.715 2 DEBUG nova.virt.libvirt.driver [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.720 2 INFO nova.virt.libvirt.driver [-] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Instance spawned successfully.#033[00m
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.721 2 DEBUG nova.virt.libvirt.driver [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  1 12:44:01 np0005464891 podman[270849]: 2025-10-01 16:44:01.733263357 +0000 UTC m=+0.973063026 container init 92092e2062135e1299b8a49500a95f57516c06a9895c3053d60beced320f5023 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  1 12:44:01 np0005464891 podman[270849]: 2025-10-01 16:44:01.745544165 +0000 UTC m=+0.985343794 container start 92092e2062135e1299b8a49500a95f57516c06a9895c3053d60beced320f5023 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.756 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.764 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:44:01 np0005464891 neutron-haproxy-ovnmeta-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6[270872]: [NOTICE]   (270876) : New worker (270878) forked
Oct  1 12:44:01 np0005464891 neutron-haproxy-ovnmeta-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6[270872]: [NOTICE]   (270876) : Loading success.
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.859 2 DEBUG nova.virt.libvirt.driver [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.859 2 DEBUG nova.virt.libvirt.driver [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.860 2 DEBUG nova.virt.libvirt.driver [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.860 2 DEBUG nova.virt.libvirt.driver [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.861 2 DEBUG nova.virt.libvirt.driver [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.861 2 DEBUG nova.virt.libvirt.driver [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.937 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.938 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337041.6546943, d9f491a2-42e5-4c54-8880-44ac34eb626b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.939 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] VM Paused (Lifecycle Event)#033[00m
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.982 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.986 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337041.7144868, d9f491a2-42e5-4c54-8880-44ac34eb626b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:44:01 np0005464891 nova_compute[259907]: 2025-10-01 16:44:01.987 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] VM Resumed (Lifecycle Event)#033[00m
Oct  1 12:44:02 np0005464891 nova_compute[259907]: 2025-10-01 16:44:02.093 2 INFO nova.compute.manager [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Took 11.34 seconds to spawn the instance on the hypervisor.#033[00m
Oct  1 12:44:02 np0005464891 nova_compute[259907]: 2025-10-01 16:44:02.093 2 DEBUG nova.compute.manager [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:44:02 np0005464891 nova_compute[259907]: 2025-10-01 16:44:02.134 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:44:02 np0005464891 nova_compute[259907]: 2025-10-01 16:44:02.138 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:44:02 np0005464891 nova_compute[259907]: 2025-10-01 16:44:02.250 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:44:02 np0005464891 nova_compute[259907]: 2025-10-01 16:44:02.292 2 INFO nova.compute.manager [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Took 12.60 seconds to build instance.#033[00m
Oct  1 12:44:02 np0005464891 nova_compute[259907]: 2025-10-01 16:44:02.387 2 DEBUG oslo_concurrency.lockutils [None req-9fdef54d-40e3-42f4-b13c-c7ffd6582780 c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Lock "d9f491a2-42e5-4c54-8880-44ac34eb626b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:44:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1050: 321 pgs: 321 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 622 KiB/s rd, 24 KiB/s wr, 61 op/s
Oct  1 12:44:03 np0005464891 nova_compute[259907]: 2025-10-01 16:44:03.831 2 DEBUG nova.compute.manager [req-385d641e-b9bf-485e-8205-f7b6d5460537 req-92cdefee-df83-474a-b40b-0d32aae4966c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Received event network-vif-plugged-aa58c525-bc7a-4509-b618-b480cb075e2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:44:03 np0005464891 nova_compute[259907]: 2025-10-01 16:44:03.832 2 DEBUG oslo_concurrency.lockutils [req-385d641e-b9bf-485e-8205-f7b6d5460537 req-92cdefee-df83-474a-b40b-0d32aae4966c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "d9f491a2-42e5-4c54-8880-44ac34eb626b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:44:03 np0005464891 nova_compute[259907]: 2025-10-01 16:44:03.832 2 DEBUG oslo_concurrency.lockutils [req-385d641e-b9bf-485e-8205-f7b6d5460537 req-92cdefee-df83-474a-b40b-0d32aae4966c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "d9f491a2-42e5-4c54-8880-44ac34eb626b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:44:03 np0005464891 nova_compute[259907]: 2025-10-01 16:44:03.833 2 DEBUG oslo_concurrency.lockutils [req-385d641e-b9bf-485e-8205-f7b6d5460537 req-92cdefee-df83-474a-b40b-0d32aae4966c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "d9f491a2-42e5-4c54-8880-44ac34eb626b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:44:03 np0005464891 nova_compute[259907]: 2025-10-01 16:44:03.833 2 DEBUG nova.compute.manager [req-385d641e-b9bf-485e-8205-f7b6d5460537 req-92cdefee-df83-474a-b40b-0d32aae4966c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] No waiting events found dispatching network-vif-plugged-aa58c525-bc7a-4509-b618-b480cb075e2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:44:03 np0005464891 nova_compute[259907]: 2025-10-01 16:44:03.833 2 WARNING nova.compute.manager [req-385d641e-b9bf-485e-8205-f7b6d5460537 req-92cdefee-df83-474a-b40b-0d32aae4966c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Received unexpected event network-vif-plugged-aa58c525-bc7a-4509-b618-b480cb075e2d for instance with vm_state active and task_state None.#033[00m
Oct  1 12:44:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1051: 321 pgs: 321 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 21 KiB/s wr, 110 op/s
Oct  1 12:44:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:44:05 np0005464891 nova_compute[259907]: 2025-10-01 16:44:05.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:06 np0005464891 nova_compute[259907]: 2025-10-01 16:44:06.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1052: 321 pgs: 321 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 20 KiB/s wr, 89 op/s
Oct  1 12:44:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:44:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/469918634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:44:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Oct  1 12:44:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Oct  1 12:44:08 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Oct  1 12:44:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:44:08 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1200674060' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:44:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:44:08 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1200674060' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:44:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1054: 321 pgs: 321 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 19 KiB/s wr, 150 op/s
Oct  1 12:44:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:44:10 np0005464891 nova_compute[259907]: 2025-10-01 16:44:10.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1055: 321 pgs: 321 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 130 op/s
Oct  1 12:44:10 np0005464891 podman[270890]: 2025-10-01 16:44:10.944446103 +0000 UTC m=+0.060062520 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  1 12:44:11 np0005464891 nova_compute[259907]: 2025-10-01 16:44:11.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:11 np0005464891 nova_compute[259907]: 2025-10-01 16:44:11.799 2 DEBUG oslo_concurrency.lockutils [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Acquiring lock "d9f491a2-42e5-4c54-8880-44ac34eb626b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:44:11 np0005464891 nova_compute[259907]: 2025-10-01 16:44:11.799 2 DEBUG oslo_concurrency.lockutils [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Lock "d9f491a2-42e5-4c54-8880-44ac34eb626b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:44:11 np0005464891 nova_compute[259907]: 2025-10-01 16:44:11.800 2 DEBUG oslo_concurrency.lockutils [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Acquiring lock "d9f491a2-42e5-4c54-8880-44ac34eb626b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:44:11 np0005464891 nova_compute[259907]: 2025-10-01 16:44:11.800 2 DEBUG oslo_concurrency.lockutils [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Lock "d9f491a2-42e5-4c54-8880-44ac34eb626b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:44:11 np0005464891 nova_compute[259907]: 2025-10-01 16:44:11.800 2 DEBUG oslo_concurrency.lockutils [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Lock "d9f491a2-42e5-4c54-8880-44ac34eb626b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:44:11 np0005464891 nova_compute[259907]: 2025-10-01 16:44:11.801 2 INFO nova.compute.manager [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Terminating instance#033[00m
Oct  1 12:44:11 np0005464891 nova_compute[259907]: 2025-10-01 16:44:11.802 2 DEBUG nova.compute.manager [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 12:44:11 np0005464891 kernel: tapaa58c525-bc (unregistering): left promiscuous mode
Oct  1 12:44:11 np0005464891 NetworkManager[44940]: <info>  [1759337051.8924] device (tapaa58c525-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 12:44:11 np0005464891 ovn_controller[152409]: 2025-10-01T16:44:11Z|00040|binding|INFO|Releasing lport aa58c525-bc7a-4509-b618-b480cb075e2d from this chassis (sb_readonly=0)
Oct  1 12:44:11 np0005464891 ovn_controller[152409]: 2025-10-01T16:44:11Z|00041|binding|INFO|Setting lport aa58c525-bc7a-4509-b618-b480cb075e2d down in Southbound
Oct  1 12:44:11 np0005464891 nova_compute[259907]: 2025-10-01 16:44:11.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:11 np0005464891 ovn_controller[152409]: 2025-10-01T16:44:11Z|00042|binding|INFO|Removing iface tapaa58c525-bc ovn-installed in OVS
Oct  1 12:44:11 np0005464891 nova_compute[259907]: 2025-10-01 16:44:11.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:11 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:11.911 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:e1:03 10.100.0.7'], port_security=['fa:16:3e:3f:e1:03 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd9f491a2-42e5-4c54-8880-44ac34eb626b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da59880eadac40a5aee733e9a8862b35', 'neutron:revision_number': '4', 'neutron:security_group_ids': '94595f66-532c-47cb-b0d0-e4bf8a5d83ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e3a14ca-0153-471d-a40d-4422e6ebe1c6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=aa58c525-bc7a-4509-b618-b480cb075e2d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:44:11 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:11.912 162546 INFO neutron.agent.ovn.metadata.agent [-] Port aa58c525-bc7a-4509-b618-b480cb075e2d in datapath 28978ee3-dc5b-4d90-b999-9a1bd25f6fc6 unbound from our chassis#033[00m
Oct  1 12:44:11 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:11.914 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 28978ee3-dc5b-4d90-b999-9a1bd25f6fc6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 12:44:11 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:11.916 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[3ccd75f2-cc0d-4e21-878e-43629d14823c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:44:11 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:11.916 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6 namespace which is not needed anymore#033[00m
Oct  1 12:44:11 np0005464891 nova_compute[259907]: 2025-10-01 16:44:11.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:11 np0005464891 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Oct  1 12:44:11 np0005464891 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 11.294s CPU time.
Oct  1 12:44:11 np0005464891 systemd-machined[214891]: Machine qemu-2-instance-00000002 terminated.
Oct  1 12:44:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:44:12
Oct  1 12:44:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:44:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:44:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['images', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', '.rgw.root', '.mgr', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'backups']
Oct  1 12:44:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:44:12 np0005464891 nova_compute[259907]: 2025-10-01 16:44:12.037 2 INFO nova.virt.libvirt.driver [-] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Instance destroyed successfully.#033[00m
Oct  1 12:44:12 np0005464891 nova_compute[259907]: 2025-10-01 16:44:12.038 2 DEBUG nova.objects.instance [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Lazy-loading 'resources' on Instance uuid d9f491a2-42e5-4c54-8880-44ac34eb626b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:44:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:44:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:44:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:44:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:44:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:44:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:44:12 np0005464891 neutron-haproxy-ovnmeta-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6[270872]: [NOTICE]   (270876) : haproxy version is 2.8.14-c23fe91
Oct  1 12:44:12 np0005464891 neutron-haproxy-ovnmeta-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6[270872]: [NOTICE]   (270876) : path to executable is /usr/sbin/haproxy
Oct  1 12:44:12 np0005464891 neutron-haproxy-ovnmeta-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6[270872]: [WARNING]  (270876) : Exiting Master process...
Oct  1 12:44:12 np0005464891 neutron-haproxy-ovnmeta-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6[270872]: [ALERT]    (270876) : Current worker (270878) exited with code 143 (Terminated)
Oct  1 12:44:12 np0005464891 neutron-haproxy-ovnmeta-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6[270872]: [WARNING]  (270876) : All workers exited. Exiting... (0)
Oct  1 12:44:12 np0005464891 systemd[1]: libpod-92092e2062135e1299b8a49500a95f57516c06a9895c3053d60beced320f5023.scope: Deactivated successfully.
Oct  1 12:44:12 np0005464891 podman[270935]: 2025-10-01 16:44:12.092474254 +0000 UTC m=+0.066934209 container died 92092e2062135e1299b8a49500a95f57516c06a9895c3053d60beced320f5023 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  1 12:44:12 np0005464891 nova_compute[259907]: 2025-10-01 16:44:12.098 2 DEBUG nova.virt.libvirt.vif [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T16:43:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1474942647',display_name='tempest-VolumesActionsTest-instance-1474942647',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1474942647',id=2,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-01T16:44:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da59880eadac40a5aee733e9a8862b35',ramdisk_id='',reservation_id='r-gylt71ig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-455475153',owner_user_name='tempest-VolumesActionsTest-455475153-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T16:44:02Z,user_data=None,user_id='c2758287e7044c858c94aaf781adb257',uuid=d9f491a2-42e5-4c54-8880-44ac34eb626b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aa58c525-bc7a-4509-b618-b480cb075e2d", "address": "fa:16:3e:3f:e1:03", "network": {"id": "28978ee3-dc5b-4d90-b999-9a1bd25f6fc6", "bridge": "br-int", "label": "tempest-VolumesActionsTest-50432632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da59880eadac40a5aee733e9a8862b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa58c525-bc", "ovs_interfaceid": "aa58c525-bc7a-4509-b618-b480cb075e2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 12:44:12 np0005464891 nova_compute[259907]: 2025-10-01 16:44:12.099 2 DEBUG nova.network.os_vif_util [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Converting VIF {"id": "aa58c525-bc7a-4509-b618-b480cb075e2d", "address": "fa:16:3e:3f:e1:03", "network": {"id": "28978ee3-dc5b-4d90-b999-9a1bd25f6fc6", "bridge": "br-int", "label": "tempest-VolumesActionsTest-50432632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da59880eadac40a5aee733e9a8862b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa58c525-bc", "ovs_interfaceid": "aa58c525-bc7a-4509-b618-b480cb075e2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:44:12 np0005464891 nova_compute[259907]: 2025-10-01 16:44:12.099 2 DEBUG nova.network.os_vif_util [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:e1:03,bridge_name='br-int',has_traffic_filtering=True,id=aa58c525-bc7a-4509-b618-b480cb075e2d,network=Network(28978ee3-dc5b-4d90-b999-9a1bd25f6fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa58c525-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:44:12 np0005464891 nova_compute[259907]: 2025-10-01 16:44:12.100 2 DEBUG os_vif [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:e1:03,bridge_name='br-int',has_traffic_filtering=True,id=aa58c525-bc7a-4509-b618-b480cb075e2d,network=Network(28978ee3-dc5b-4d90-b999-9a1bd25f6fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa58c525-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 12:44:12 np0005464891 nova_compute[259907]: 2025-10-01 16:44:12.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:12 np0005464891 nova_compute[259907]: 2025-10-01 16:44:12.102 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa58c525-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:44:12 np0005464891 nova_compute[259907]: 2025-10-01 16:44:12.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:12 np0005464891 nova_compute[259907]: 2025-10-01 16:44:12.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:44:12 np0005464891 nova_compute[259907]: 2025-10-01 16:44:12.149 2 INFO os_vif [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:e1:03,bridge_name='br-int',has_traffic_filtering=True,id=aa58c525-bc7a-4509-b618-b480cb075e2d,network=Network(28978ee3-dc5b-4d90-b999-9a1bd25f6fc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa58c525-bc')#033[00m
Oct  1 12:44:12 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-92092e2062135e1299b8a49500a95f57516c06a9895c3053d60beced320f5023-userdata-shm.mount: Deactivated successfully.
Oct  1 12:44:12 np0005464891 systemd[1]: var-lib-containers-storage-overlay-0c038b2c4e8bfb0c342968f44913f89ba55e452cde34aa1e3ffca1c5c041c706-merged.mount: Deactivated successfully.
Oct  1 12:44:12 np0005464891 podman[270935]: 2025-10-01 16:44:12.271951213 +0000 UTC m=+0.246411188 container cleanup 92092e2062135e1299b8a49500a95f57516c06a9895c3053d60beced320f5023 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  1 12:44:12 np0005464891 systemd[1]: libpod-conmon-92092e2062135e1299b8a49500a95f57516c06a9895c3053d60beced320f5023.scope: Deactivated successfully.
Oct  1 12:44:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:44:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:44:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:44:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:44:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:44:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:44:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:44:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:44:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:44:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:44:12 np0005464891 podman[270993]: 2025-10-01 16:44:12.412199456 +0000 UTC m=+0.111194212 container remove 92092e2062135e1299b8a49500a95f57516c06a9895c3053d60beced320f5023 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:44:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Oct  1 12:44:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:12.422 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ff8b69-f93d-4bd2-a120-eb3026940dd4]: (4, ('Wed Oct  1 04:44:12 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6 (92092e2062135e1299b8a49500a95f57516c06a9895c3053d60beced320f5023)\n92092e2062135e1299b8a49500a95f57516c06a9895c3053d60beced320f5023\nWed Oct  1 04:44:12 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6 (92092e2062135e1299b8a49500a95f57516c06a9895c3053d60beced320f5023)\n92092e2062135e1299b8a49500a95f57516c06a9895c3053d60beced320f5023\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:44:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:12.424 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[ad88c8d3-8237-457b-a5d6-3b95b6af23f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:44:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:12.425 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28978ee3-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:44:12 np0005464891 nova_compute[259907]: 2025-10-01 16:44:12.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:12 np0005464891 kernel: tap28978ee3-d0: left promiscuous mode
Oct  1 12:44:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Oct  1 12:44:12 np0005464891 nova_compute[259907]: 2025-10-01 16:44:12.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:12.435 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[29d3d72c-bff1-4903-b7b2-7499b0d2dee8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:44:12 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Oct  1 12:44:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:12.441 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:44:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:12.442 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:44:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:12.444 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:44:12 np0005464891 nova_compute[259907]: 2025-10-01 16:44:12.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:12.455 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[d7425c33-89b9-40d7-b318-e2971b1c41d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:44:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:12.457 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[1d460eb4-59e1-46c6-b5f3-afe2e82fdd61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:44:12 np0005464891 nova_compute[259907]: 2025-10-01 16:44:12.467 2 DEBUG nova.compute.manager [req-440bac38-2085-475f-9ec3-c3748f5def77 req-decbdab8-84f9-4149-9be0-62ffa85321e6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Received event network-vif-unplugged-aa58c525-bc7a-4509-b618-b480cb075e2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:44:12 np0005464891 nova_compute[259907]: 2025-10-01 16:44:12.470 2 DEBUG oslo_concurrency.lockutils [req-440bac38-2085-475f-9ec3-c3748f5def77 req-decbdab8-84f9-4149-9be0-62ffa85321e6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "d9f491a2-42e5-4c54-8880-44ac34eb626b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:44:12 np0005464891 nova_compute[259907]: 2025-10-01 16:44:12.470 2 DEBUG oslo_concurrency.lockutils [req-440bac38-2085-475f-9ec3-c3748f5def77 req-decbdab8-84f9-4149-9be0-62ffa85321e6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "d9f491a2-42e5-4c54-8880-44ac34eb626b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:44:12 np0005464891 nova_compute[259907]: 2025-10-01 16:44:12.470 2 DEBUG oslo_concurrency.lockutils [req-440bac38-2085-475f-9ec3-c3748f5def77 req-decbdab8-84f9-4149-9be0-62ffa85321e6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "d9f491a2-42e5-4c54-8880-44ac34eb626b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:44:12 np0005464891 nova_compute[259907]: 2025-10-01 16:44:12.470 2 DEBUG nova.compute.manager [req-440bac38-2085-475f-9ec3-c3748f5def77 req-decbdab8-84f9-4149-9be0-62ffa85321e6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] No waiting events found dispatching network-vif-unplugged-aa58c525-bc7a-4509-b618-b480cb075e2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:44:12 np0005464891 nova_compute[259907]: 2025-10-01 16:44:12.470 2 DEBUG nova.compute.manager [req-440bac38-2085-475f-9ec3-c3748f5def77 req-decbdab8-84f9-4149-9be0-62ffa85321e6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Received event network-vif-unplugged-aa58c525-bc7a-4509-b618-b480cb075e2d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 12:44:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:12.479 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[bc85e5c0-4cbf-4348-97c3-e53c35069be8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 398867, 'reachable_time': 26802, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271008, 'error': None, 'target': 'ovnmeta-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:44:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:12.485 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-28978ee3-dc5b-4d90-b999-9a1bd25f6fc6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 12:44:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:12.485 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[bee057ee-bca1-4649-915b-2eb674c0c3c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:44:12 np0005464891 systemd[1]: run-netns-ovnmeta\x2d28978ee3\x2ddc5b\x2d4d90\x2db999\x2d9a1bd25f6fc6.mount: Deactivated successfully.
Oct  1 12:44:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1057: 321 pgs: 321 active+clean; 73 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.4 KiB/s wr, 107 op/s
Oct  1 12:44:12 np0005464891 ceph-mgr[74592]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2433011577
Oct  1 12:44:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:44:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4121036299' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:44:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:44:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4121036299' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:44:13 np0005464891 nova_compute[259907]: 2025-10-01 16:44:13.213 2 INFO nova.virt.libvirt.driver [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Deleting instance files /var/lib/nova/instances/d9f491a2-42e5-4c54-8880-44ac34eb626b_del#033[00m
Oct  1 12:44:13 np0005464891 nova_compute[259907]: 2025-10-01 16:44:13.214 2 INFO nova.virt.libvirt.driver [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Deletion of /var/lib/nova/instances/d9f491a2-42e5-4c54-8880-44ac34eb626b_del complete#033[00m
Oct  1 12:44:13 np0005464891 nova_compute[259907]: 2025-10-01 16:44:13.276 2 INFO nova.compute.manager [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Took 1.47 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 12:44:13 np0005464891 nova_compute[259907]: 2025-10-01 16:44:13.277 2 DEBUG oslo.service.loopingcall [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 12:44:13 np0005464891 nova_compute[259907]: 2025-10-01 16:44:13.278 2 DEBUG nova.compute.manager [-] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 12:44:13 np0005464891 nova_compute[259907]: 2025-10-01 16:44:13.278 2 DEBUG nova.network.neutron [-] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 12:44:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Oct  1 12:44:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Oct  1 12:44:14 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Oct  1 12:44:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1059: 321 pgs: 321 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 2.7 KiB/s wr, 91 op/s
Oct  1 12:44:14 np0005464891 nova_compute[259907]: 2025-10-01 16:44:14.777 2 DEBUG nova.compute.manager [req-78547b46-50ae-42c4-86ce-5df6fbcb6a2b req-4e96c7a6-ee3b-4dc7-a680-2078f186090d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Received event network-vif-plugged-aa58c525-bc7a-4509-b618-b480cb075e2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:44:14 np0005464891 nova_compute[259907]: 2025-10-01 16:44:14.778 2 DEBUG oslo_concurrency.lockutils [req-78547b46-50ae-42c4-86ce-5df6fbcb6a2b req-4e96c7a6-ee3b-4dc7-a680-2078f186090d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "d9f491a2-42e5-4c54-8880-44ac34eb626b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:44:14 np0005464891 nova_compute[259907]: 2025-10-01 16:44:14.779 2 DEBUG oslo_concurrency.lockutils [req-78547b46-50ae-42c4-86ce-5df6fbcb6a2b req-4e96c7a6-ee3b-4dc7-a680-2078f186090d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "d9f491a2-42e5-4c54-8880-44ac34eb626b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:44:14 np0005464891 nova_compute[259907]: 2025-10-01 16:44:14.779 2 DEBUG oslo_concurrency.lockutils [req-78547b46-50ae-42c4-86ce-5df6fbcb6a2b req-4e96c7a6-ee3b-4dc7-a680-2078f186090d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "d9f491a2-42e5-4c54-8880-44ac34eb626b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:44:14 np0005464891 nova_compute[259907]: 2025-10-01 16:44:14.779 2 DEBUG nova.compute.manager [req-78547b46-50ae-42c4-86ce-5df6fbcb6a2b req-4e96c7a6-ee3b-4dc7-a680-2078f186090d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] No waiting events found dispatching network-vif-plugged-aa58c525-bc7a-4509-b618-b480cb075e2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:44:14 np0005464891 nova_compute[259907]: 2025-10-01 16:44:14.780 2 WARNING nova.compute.manager [req-78547b46-50ae-42c4-86ce-5df6fbcb6a2b req-4e96c7a6-ee3b-4dc7-a680-2078f186090d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Received unexpected event network-vif-plugged-aa58c525-bc7a-4509-b618-b480cb075e2d for instance with vm_state active and task_state deleting.#033[00m
Oct  1 12:44:14 np0005464891 nova_compute[259907]: 2025-10-01 16:44:14.878 2 DEBUG nova.network.neutron [-] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:44:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:44:15 np0005464891 nova_compute[259907]: 2025-10-01 16:44:15.144 2 INFO nova.compute.manager [-] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Took 1.87 seconds to deallocate network for instance.#033[00m
Oct  1 12:44:15 np0005464891 nova_compute[259907]: 2025-10-01 16:44:15.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:15 np0005464891 nova_compute[259907]: 2025-10-01 16:44:15.391 2 DEBUG oslo_concurrency.lockutils [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:44:15 np0005464891 nova_compute[259907]: 2025-10-01 16:44:15.392 2 DEBUG oslo_concurrency.lockutils [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:44:15 np0005464891 nova_compute[259907]: 2025-10-01 16:44:15.457 2 DEBUG oslo_concurrency.processutils [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:44:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:44:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4232992158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:44:16 np0005464891 nova_compute[259907]: 2025-10-01 16:44:16.035 2 DEBUG oslo_concurrency.processutils [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:44:16 np0005464891 nova_compute[259907]: 2025-10-01 16:44:16.044 2 DEBUG nova.compute.provider_tree [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:44:16 np0005464891 nova_compute[259907]: 2025-10-01 16:44:16.179 2 DEBUG nova.scheduler.client.report [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:44:16 np0005464891 nova_compute[259907]: 2025-10-01 16:44:16.421 2 DEBUG oslo_concurrency.lockutils [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:44:16 np0005464891 nova_compute[259907]: 2025-10-01 16:44:16.563 2 INFO nova.scheduler.client.report [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Deleted allocations for instance d9f491a2-42e5-4c54-8880-44ac34eb626b#033[00m
Oct  1 12:44:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1060: 321 pgs: 321 active+clean; 41 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 2.2 KiB/s wr, 75 op/s
Oct  1 12:44:17 np0005464891 nova_compute[259907]: 2025-10-01 16:44:17.109 2 DEBUG nova.compute.manager [req-7e6d6aec-12f4-4b92-b1d7-9c2da26a714e req-969a5ef2-f46d-4d9b-bde6-88c1e1dbf07f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Received event network-vif-deleted-aa58c525-bc7a-4509-b618-b480cb075e2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:44:17 np0005464891 nova_compute[259907]: 2025-10-01 16:44:17.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:17 np0005464891 nova_compute[259907]: 2025-10-01 16:44:17.387 2 DEBUG oslo_concurrency.lockutils [None req-323b4ab6-74b5-4bd6-9e07-b9b910476b9b c2758287e7044c858c94aaf781adb257 da59880eadac40a5aee733e9a8862b35 - - default default] Lock "d9f491a2-42e5-4c54-8880-44ac34eb626b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:44:17 np0005464891 nova_compute[259907]: 2025-10-01 16:44:17.806 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:44:17 np0005464891 nova_compute[259907]: 2025-10-01 16:44:17.807 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 12:44:17 np0005464891 nova_compute[259907]: 2025-10-01 16:44:17.807 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 12:44:18 np0005464891 nova_compute[259907]: 2025-10-01 16:44:18.024 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 12:44:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1061: 321 pgs: 321 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 4.2 KiB/s wr, 120 op/s
Oct  1 12:44:19 np0005464891 nova_compute[259907]: 2025-10-01 16:44:19.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:44:19 np0005464891 nova_compute[259907]: 2025-10-01 16:44:19.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:44:19 np0005464891 nova_compute[259907]: 2025-10-01 16:44:19.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:44:19 np0005464891 nova_compute[259907]: 2025-10-01 16:44:19.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:44:19 np0005464891 nova_compute[259907]: 2025-10-01 16:44:19.806 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 12:44:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:44:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Oct  1 12:44:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Oct  1 12:44:20 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Oct  1 12:44:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:44:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/551348398' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:44:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:44:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/551348398' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:44:20 np0005464891 nova_compute[259907]: 2025-10-01 16:44:20.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1063: 321 pgs: 321 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 3.0 KiB/s wr, 86 op/s
Oct  1 12:44:20 np0005464891 nova_compute[259907]: 2025-10-01 16:44:20.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:44:20 np0005464891 nova_compute[259907]: 2025-10-01 16:44:20.846 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:44:20 np0005464891 nova_compute[259907]: 2025-10-01 16:44:20.847 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:44:20 np0005464891 nova_compute[259907]: 2025-10-01 16:44:20.847 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:44:20 np0005464891 nova_compute[259907]: 2025-10-01 16:44:20.847 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 12:44:20 np0005464891 nova_compute[259907]: 2025-10-01 16:44:20.848 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:44:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:44:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3078950680' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:44:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:44:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3078950680' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:44:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:44:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1724768954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:44:21 np0005464891 nova_compute[259907]: 2025-10-01 16:44:21.268 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:44:21 np0005464891 nova_compute[259907]: 2025-10-01 16:44:21.440 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:44:21 np0005464891 nova_compute[259907]: 2025-10-01 16:44:21.441 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4731MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 12:44:21 np0005464891 nova_compute[259907]: 2025-10-01 16:44:21.442 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:44:21 np0005464891 nova_compute[259907]: 2025-10-01 16:44:21.442 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:44:21 np0005464891 nova_compute[259907]: 2025-10-01 16:44:21.673 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 12:44:21 np0005464891 nova_compute[259907]: 2025-10-01 16:44:21.673 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 12:44:21 np0005464891 nova_compute[259907]: 2025-10-01 16:44:21.690 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:44:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:44:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:44:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:44:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:44:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 12:44:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:44:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00017169491111545225 quantized to 32 (current 32)
Oct  1 12:44:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:44:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:44:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:44:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  1 12:44:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:44:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:44:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:44:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:44:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:44:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:44:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:44:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:44:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:44:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:44:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:44:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:44:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:44:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/513725035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:44:22 np0005464891 nova_compute[259907]: 2025-10-01 16:44:22.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:22 np0005464891 nova_compute[259907]: 2025-10-01 16:44:22.234 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:44:22 np0005464891 nova_compute[259907]: 2025-10-01 16:44:22.239 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:44:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Oct  1 12:44:22 np0005464891 nova_compute[259907]: 2025-10-01 16:44:22.413 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
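The inventory dict in the line above is what Placement uses to compute schedulable capacity: `(total - reserved) * allocation_ratio` per resource class. Applied to the logged values:

```python
# Placement's capacity formula applied to the inventory logged above.
def capacity(total: int, reserved: int, allocation_ratio: float) -> int:
    return int((total - reserved) * allocation_ratio)

vcpu   = capacity(8,    0,   4.0)  # 8 physical cores oversubscribed 4x
memory = capacity(7679, 512, 1.0)  # MiB; 512 MiB held back for the host
disk   = capacity(59,   1,   0.9)  # GiB; 0.9 ratio undersubscribes disk
```

So this host advertises 32 vCPUs, 7167 MiB of RAM, and 52 GiB of disk to the scheduler despite having 8 cores and 59 GiB of usable disk.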
Oct  1 12:44:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Oct  1 12:44:22 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Oct  1 12:44:22 np0005464891 nova_compute[259907]: 2025-10-01 16:44:22.669 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 12:44:22 np0005464891 nova_compute[259907]: 2025-10-01 16:44:22.670 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:44:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1065: 321 pgs: 321 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.2 KiB/s wr, 52 op/s
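The recurring ceph-mgr pgmap lines (v1065, v1066, ... above and below) are easy to mine for monitoring. A hypothetical parser whose field layout is taken from these log lines only:

```python
import re

# Pull the headline numbers out of a ceph-mgr pgmap line like the one above.
PGMAP_RE = re.compile(
    r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: .*?; "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
)

line = ("pgmap v1065: 321 pgs: 321 active+clean; 41 MiB data, "
        "255 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.2 KiB/s wr, 52 op/s")
stats = PGMAP_RE.search(line).groupdict()
```

A steady `321 active+clean` with no other PG states, as in every pgmap line in this window, indicates a healthy cluster.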
Oct  1 12:44:23 np0005464891 podman[271078]: 2025-10-01 16:44:23.009539224 +0000 UTC m=+0.116637853 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  1 12:44:23 np0005464891 nova_compute[259907]: 2025-10-01 16:44:23.671 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:44:23 np0005464891 nova_compute[259907]: 2025-10-01 16:44:23.671 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:44:24 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:24.257 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:44:24 np0005464891 nova_compute[259907]: 2025-10-01 16:44:24.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:24 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:24.259 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 12:44:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 4.1 KiB/s wr, 97 op/s
Oct  1 12:44:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:44:25 np0005464891 nova_compute[259907]: 2025-10-01 16:44:25.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:25 np0005464891 nova_compute[259907]: 2025-10-01 16:44:25.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:44:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1067: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.1 KiB/s wr, 52 op/s
Oct  1 12:44:26 np0005464891 podman[271105]: 2025-10-01 16:44:26.988241235 +0000 UTC m=+0.088998660 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  1 12:44:27 np0005464891 nova_compute[259907]: 2025-10-01 16:44:27.036 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759337052.034665, d9f491a2-42e5-4c54-8880-44ac34eb626b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:44:27 np0005464891 nova_compute[259907]: 2025-10-01 16:44:27.037 2 INFO nova.compute.manager [-] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] VM Stopped (Lifecycle Event)#033[00m
Oct  1 12:44:27 np0005464891 nova_compute[259907]: 2025-10-01 16:44:27.140 2 DEBUG nova.compute.manager [None req-b41db186-9f56-483c-a06e-da6276c9916c - - - - - -] [instance: d9f491a2-42e5-4c54-8880-44ac34eb626b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:44:27 np0005464891 nova_compute[259907]: 2025-10-01 16:44:27.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:44:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2625021586' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:44:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:44:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2625021586' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:44:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1068: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 2.9 KiB/s wr, 77 op/s
Oct  1 12:44:28 np0005464891 podman[271125]: 2025-10-01 16:44:28.982566572 +0000 UTC m=+0.082175302 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid)
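The podman health-check events above (ovn_controller, multipathd, iscsid) embed their metadata as key=value pairs inside parentheses. A minimal extraction of the two fields monitoring usually wants; the layout is assumed from these log lines only:

```python
import re

# Extract container name and health verdict from a podman health_status event.
HEALTH_RE = re.compile(
    r"container health_status \S+ "
    r"\(.*?name=(?P<name>[^,]+), health_status=(?P<status>[^,]+),"
)

line = ("2025-10-01 16:44:28.982566572 +0000 UTC m=+0.082175302 container "
        "health_status 00cff9fd4e13 (image=quay.io/openstack-iscsid:current-podified, "
        "name=iscsid, health_status=healthy, health_failing_streak=0, ...)")
m = HEALTH_RE.search(line)
```

`health_failing_streak=0` on all three containers means no consecutive probe failures; a non-zero streak would precede podman marking the container unhealthy.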
Oct  1 12:44:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:44:29 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1628530611' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:44:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:44:29 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1628530611' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:44:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:44:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Oct  1 12:44:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Oct  1 12:44:30 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Oct  1 12:44:30 np0005464891 nova_compute[259907]: 2025-10-01 16:44:30.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:44:30 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2032547800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:44:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1070: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 3.1 KiB/s wr, 83 op/s
Oct  1 12:44:32 np0005464891 nova_compute[259907]: 2025-10-01 16:44:32.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:32 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:44:32.262 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:44:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Oct  1 12:44:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Oct  1 12:44:32 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Oct  1 12:44:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1072: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.6 KiB/s wr, 43 op/s
Oct  1 12:44:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Oct  1 12:44:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Oct  1 12:44:34 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Oct  1 12:44:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1074: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.3 KiB/s wr, 25 op/s
Oct  1 12:44:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:44:35 np0005464891 nova_compute[259907]: 2025-10-01 16:44:35.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:44:35 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/620987477' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:44:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:44:35 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/620987477' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:44:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:44:35 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2963236302' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:44:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:44:35 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2963236302' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:44:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:44:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/207001038' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:44:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:44:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/207001038' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:44:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1075: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 2.1 KiB/s wr, 22 op/s
Oct  1 12:44:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:44:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1408551417' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:44:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:44:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1408551417' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:44:37 np0005464891 nova_compute[259907]: 2025-10-01 16:44:37.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Oct  1 12:44:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Oct  1 12:44:37 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Oct  1 12:44:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:44:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:44:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:44:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1077: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 6.0 KiB/s wr, 109 op/s
Oct  1 12:44:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:44:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:44:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:44:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:44:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:44:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:44:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:44:38 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev c133f7fd-2332-41c1-a8c9-805d567d608a does not exist
Oct  1 12:44:38 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 9950d36d-5b88-4186-a80e-34c466ca2f49 does not exist
Oct  1 12:44:38 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 8e09ed9a-8da7-4391-ab22-b2e67e0ecdc6 does not exist
Oct  1 12:44:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:44:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:44:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:44:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:44:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:44:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:44:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:44:39 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4237003552' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:44:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:44:39 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4237003552' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:44:39 np0005464891 podman[271537]: 2025-10-01 16:44:39.6821591 +0000 UTC m=+0.112217771 container create 5c8ac0096a647992d3e8e0bbde76f2c6d6ce63a604a913497f588de459cab56c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gould, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 12:44:39 np0005464891 podman[271537]: 2025-10-01 16:44:39.601773339 +0000 UTC m=+0.031832050 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:44:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:44:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:44:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:44:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:44:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:44:39 np0005464891 systemd[1]: Started libpod-conmon-5c8ac0096a647992d3e8e0bbde76f2c6d6ce63a604a913497f588de459cab56c.scope.
Oct  1 12:44:39 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:44:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Oct  1 12:44:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Oct  1 12:44:39 np0005464891 podman[271537]: 2025-10-01 16:44:39.893375763 +0000 UTC m=+0.323434544 container init 5c8ac0096a647992d3e8e0bbde76f2c6d6ce63a604a913497f588de459cab56c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 12:44:39 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Oct  1 12:44:39 np0005464891 podman[271537]: 2025-10-01 16:44:39.905994932 +0000 UTC m=+0.336053643 container start 5c8ac0096a647992d3e8e0bbde76f2c6d6ce63a604a913497f588de459cab56c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gould, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:44:39 np0005464891 focused_gould[271554]: 167 167
Oct  1 12:44:39 np0005464891 systemd[1]: libpod-5c8ac0096a647992d3e8e0bbde76f2c6d6ce63a604a913497f588de459cab56c.scope: Deactivated successfully.
Oct  1 12:44:39 np0005464891 podman[271537]: 2025-10-01 16:44:39.937536043 +0000 UTC m=+0.367594754 container attach 5c8ac0096a647992d3e8e0bbde76f2c6d6ce63a604a913497f588de459cab56c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:44:39 np0005464891 podman[271537]: 2025-10-01 16:44:39.938936313 +0000 UTC m=+0.368994994 container died 5c8ac0096a647992d3e8e0bbde76f2c6d6ce63a604a913497f588de459cab56c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:44:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:44:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Oct  1 12:44:40 np0005464891 systemd[1]: var-lib-containers-storage-overlay-9c4aabdecfbe4c14b750b55909d6c20c2dd4f4843ababfa94110ef9838cc1ff3-merged.mount: Deactivated successfully.
Oct  1 12:44:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Oct  1 12:44:40 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Oct  1 12:44:40 np0005464891 nova_compute[259907]: 2025-10-01 16:44:40.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:40 np0005464891 podman[271537]: 2025-10-01 16:44:40.503110476 +0000 UTC m=+0.933169157 container remove 5c8ac0096a647992d3e8e0bbde76f2c6d6ce63a604a913497f588de459cab56c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gould, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  1 12:44:40 np0005464891 systemd[1]: libpod-conmon-5c8ac0096a647992d3e8e0bbde76f2c6d6ce63a604a913497f588de459cab56c.scope: Deactivated successfully.
Oct  1 12:44:40 np0005464891 podman[271579]: 2025-10-01 16:44:40.770152072 +0000 UTC m=+0.067543277 container create 264960ab8ebc34e56a5840528789d6cf45c3f6028decafdc05e5daa8209e0aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 12:44:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1080: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 4.5 KiB/s wr, 98 op/s
Oct  1 12:44:40 np0005464891 podman[271579]: 2025-10-01 16:44:40.738605331 +0000 UTC m=+0.035996596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:44:40 np0005464891 systemd[1]: Started libpod-conmon-264960ab8ebc34e56a5840528789d6cf45c3f6028decafdc05e5daa8209e0aa9.scope.
Oct  1 12:44:40 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:44:40 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd23850eb4d09afab6d93527ed6383dd0d38a621bf34a60b391b3aff4a8b5788/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:44:40 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd23850eb4d09afab6d93527ed6383dd0d38a621bf34a60b391b3aff4a8b5788/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:44:40 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd23850eb4d09afab6d93527ed6383dd0d38a621bf34a60b391b3aff4a8b5788/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:44:40 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd23850eb4d09afab6d93527ed6383dd0d38a621bf34a60b391b3aff4a8b5788/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:44:40 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd23850eb4d09afab6d93527ed6383dd0d38a621bf34a60b391b3aff4a8b5788/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:44:41 np0005464891 podman[271579]: 2025-10-01 16:44:41.013054082 +0000 UTC m=+0.310445317 container init 264960ab8ebc34e56a5840528789d6cf45c3f6028decafdc05e5daa8209e0aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 12:44:41 np0005464891 podman[271579]: 2025-10-01 16:44:41.021946877 +0000 UTC m=+0.319338122 container start 264960ab8ebc34e56a5840528789d6cf45c3f6028decafdc05e5daa8209e0aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:44:41 np0005464891 podman[271579]: 2025-10-01 16:44:41.037734454 +0000 UTC m=+0.335125709 container attach 264960ab8ebc34e56a5840528789d6cf45c3f6028decafdc05e5daa8209e0aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:44:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:44:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/440483900' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:44:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:44:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/440483900' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:44:41 np0005464891 podman[271609]: 2025-10-01 16:44:41.966501679 +0000 UTC m=+0.073610285 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:44:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:44:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:44:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:44:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:44:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:44:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:44:42 np0005464891 xenodochial_lamport[271596]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:44:42 np0005464891 xenodochial_lamport[271596]: --> relative data size: 1.0
Oct  1 12:44:42 np0005464891 xenodochial_lamport[271596]: --> All data devices are unavailable
Oct  1 12:44:42 np0005464891 nova_compute[259907]: 2025-10-01 16:44:42.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:42 np0005464891 systemd[1]: libpod-264960ab8ebc34e56a5840528789d6cf45c3f6028decafdc05e5daa8209e0aa9.scope: Deactivated successfully.
Oct  1 12:44:42 np0005464891 podman[271579]: 2025-10-01 16:44:42.243788458 +0000 UTC m=+1.541179663 container died 264960ab8ebc34e56a5840528789d6cf45c3f6028decafdc05e5daa8209e0aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:44:42 np0005464891 systemd[1]: libpod-264960ab8ebc34e56a5840528789d6cf45c3f6028decafdc05e5daa8209e0aa9.scope: Consumed 1.149s CPU time.
Oct  1 12:44:42 np0005464891 systemd[1]: var-lib-containers-storage-overlay-fd23850eb4d09afab6d93527ed6383dd0d38a621bf34a60b391b3aff4a8b5788-merged.mount: Deactivated successfully.
Oct  1 12:44:42 np0005464891 podman[271579]: 2025-10-01 16:44:42.585104575 +0000 UTC m=+1.882495780 container remove 264960ab8ebc34e56a5840528789d6cf45c3f6028decafdc05e5daa8209e0aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:44:42 np0005464891 systemd[1]: libpod-conmon-264960ab8ebc34e56a5840528789d6cf45c3f6028decafdc05e5daa8209e0aa9.scope: Deactivated successfully.
Oct  1 12:44:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1081: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 5.3 KiB/s wr, 141 op/s
Oct  1 12:44:43 np0005464891 podman[271795]: 2025-10-01 16:44:43.349626203 +0000 UTC m=+0.072615226 container create c1c911324c94444ceb5336493a44adc6e6d4571d20f0e9ec1801a1c03e4426dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bell, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:44:43 np0005464891 podman[271795]: 2025-10-01 16:44:43.320225252 +0000 UTC m=+0.043214295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:44:43 np0005464891 systemd[1]: Started libpod-conmon-c1c911324c94444ceb5336493a44adc6e6d4571d20f0e9ec1801a1c03e4426dc.scope.
Oct  1 12:44:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:44:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/308021146' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:44:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:44:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/308021146' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:44:43 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:44:43 np0005464891 podman[271795]: 2025-10-01 16:44:43.575162114 +0000 UTC m=+0.298151227 container init c1c911324c94444ceb5336493a44adc6e6d4571d20f0e9ec1801a1c03e4426dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bell, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  1 12:44:43 np0005464891 podman[271795]: 2025-10-01 16:44:43.58554947 +0000 UTC m=+0.308538533 container start c1c911324c94444ceb5336493a44adc6e6d4571d20f0e9ec1801a1c03e4426dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:44:43 np0005464891 hopeful_bell[271812]: 167 167
Oct  1 12:44:43 np0005464891 systemd[1]: libpod-c1c911324c94444ceb5336493a44adc6e6d4571d20f0e9ec1801a1c03e4426dc.scope: Deactivated successfully.
Oct  1 12:44:43 np0005464891 podman[271795]: 2025-10-01 16:44:43.59493646 +0000 UTC m=+0.317925563 container attach c1c911324c94444ceb5336493a44adc6e6d4571d20f0e9ec1801a1c03e4426dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:44:43 np0005464891 podman[271795]: 2025-10-01 16:44:43.595581268 +0000 UTC m=+0.318570351 container died c1c911324c94444ceb5336493a44adc6e6d4571d20f0e9ec1801a1c03e4426dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bell, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:44:43 np0005464891 systemd[1]: var-lib-containers-storage-overlay-6c5df4ecda7e23ef965ef929eb45ffcdae0ed1f2f208fd09fd34c25444b51fc5-merged.mount: Deactivated successfully.
Oct  1 12:44:43 np0005464891 podman[271795]: 2025-10-01 16:44:43.712907199 +0000 UTC m=+0.435896242 container remove c1c911324c94444ceb5336493a44adc6e6d4571d20f0e9ec1801a1c03e4426dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:44:43 np0005464891 systemd[1]: libpod-conmon-c1c911324c94444ceb5336493a44adc6e6d4571d20f0e9ec1801a1c03e4426dc.scope: Deactivated successfully.
Oct  1 12:44:43 np0005464891 podman[271836]: 2025-10-01 16:44:43.908009968 +0000 UTC m=+0.059617388 container create d419416a944299c12167c4d55fa9cdf5270ecd9fc37b94402215fbe6d6fb58cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bell, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:44:43 np0005464891 systemd[1]: Started libpod-conmon-d419416a944299c12167c4d55fa9cdf5270ecd9fc37b94402215fbe6d6fb58cb.scope.
Oct  1 12:44:43 np0005464891 podman[271836]: 2025-10-01 16:44:43.885483006 +0000 UTC m=+0.037090446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:44:44 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:44:44 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98bb91342500855e575323d7d3b0260c90da390a293cd7323fed5c0f15fceed3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:44:44 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98bb91342500855e575323d7d3b0260c90da390a293cd7323fed5c0f15fceed3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:44:44 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98bb91342500855e575323d7d3b0260c90da390a293cd7323fed5c0f15fceed3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:44:44 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98bb91342500855e575323d7d3b0260c90da390a293cd7323fed5c0f15fceed3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:44:44 np0005464891 podman[271836]: 2025-10-01 16:44:44.04057453 +0000 UTC m=+0.192181970 container init d419416a944299c12167c4d55fa9cdf5270ecd9fc37b94402215fbe6d6fb58cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bell, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  1 12:44:44 np0005464891 podman[271836]: 2025-10-01 16:44:44.053078155 +0000 UTC m=+0.204685615 container start d419416a944299c12167c4d55fa9cdf5270ecd9fc37b94402215fbe6d6fb58cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bell, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:44:44 np0005464891 podman[271836]: 2025-10-01 16:44:44.06594057 +0000 UTC m=+0.217548040 container attach d419416a944299c12167c4d55fa9cdf5270ecd9fc37b94402215fbe6d6fb58cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bell, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 12:44:44 np0005464891 ovn_controller[152409]: 2025-10-01T16:44:44Z|00043|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct  1 12:44:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1082: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 5.9 KiB/s wr, 156 op/s
Oct  1 12:44:44 np0005464891 crazy_bell[271853]: {
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:    "0": [
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:        {
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "devices": [
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "/dev/loop3"
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            ],
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "lv_name": "ceph_lv0",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "lv_size": "21470642176",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "name": "ceph_lv0",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "tags": {
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.cluster_name": "ceph",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.crush_device_class": "",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.encrypted": "0",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.osd_id": "0",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.type": "block",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.vdo": "0"
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            },
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "type": "block",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "vg_name": "ceph_vg0"
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:        }
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:    ],
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:    "1": [
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:        {
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "devices": [
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "/dev/loop4"
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            ],
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "lv_name": "ceph_lv1",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "lv_size": "21470642176",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "name": "ceph_lv1",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "tags": {
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.cluster_name": "ceph",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.crush_device_class": "",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.encrypted": "0",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.osd_id": "1",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.type": "block",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.vdo": "0"
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            },
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "type": "block",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "vg_name": "ceph_vg1"
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:        }
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:    ],
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:    "2": [
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:        {
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "devices": [
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "/dev/loop5"
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            ],
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "lv_name": "ceph_lv2",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "lv_size": "21470642176",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "name": "ceph_lv2",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "tags": {
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.cluster_name": "ceph",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.crush_device_class": "",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.encrypted": "0",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.osd_id": "2",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.type": "block",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:                "ceph.vdo": "0"
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            },
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "type": "block",
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:            "vg_name": "ceph_vg2"
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:        }
Oct  1 12:44:44 np0005464891 crazy_bell[271853]:    ]
Oct  1 12:44:44 np0005464891 crazy_bell[271853]: }
Oct  1 12:44:44 np0005464891 systemd[1]: libpod-d419416a944299c12167c4d55fa9cdf5270ecd9fc37b94402215fbe6d6fb58cb.scope: Deactivated successfully.
Oct  1 12:44:44 np0005464891 podman[271836]: 2025-10-01 16:44:44.858196015 +0000 UTC m=+1.009803445 container died d419416a944299c12167c4d55fa9cdf5270ecd9fc37b94402215fbe6d6fb58cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:44:44 np0005464891 systemd[1]: var-lib-containers-storage-overlay-98bb91342500855e575323d7d3b0260c90da390a293cd7323fed5c0f15fceed3-merged.mount: Deactivated successfully.
Oct  1 12:44:45 np0005464891 podman[271836]: 2025-10-01 16:44:45.076438933 +0000 UTC m=+1.228046363 container remove d419416a944299c12167c4d55fa9cdf5270ecd9fc37b94402215fbe6d6fb58cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bell, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 12:44:45 np0005464891 systemd[1]: libpod-conmon-d419416a944299c12167c4d55fa9cdf5270ecd9fc37b94402215fbe6d6fb58cb.scope: Deactivated successfully.
Oct  1 12:44:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:44:45 np0005464891 nova_compute[259907]: 2025-10-01 16:44:45.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:46 np0005464891 podman[272016]: 2025-10-01 16:44:46.043804644 +0000 UTC m=+0.089952176 container create 0ef0c1d666bd9e0305c832e5c1c2f735fcc8bc681a99310f1958640d5fbe9b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 12:44:46 np0005464891 podman[272016]: 2025-10-01 16:44:45.980976239 +0000 UTC m=+0.027123761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:44:46 np0005464891 systemd[1]: Started libpod-conmon-0ef0c1d666bd9e0305c832e5c1c2f735fcc8bc681a99310f1958640d5fbe9b72.scope.
Oct  1 12:44:46 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:44:46 np0005464891 podman[272016]: 2025-10-01 16:44:46.188572223 +0000 UTC m=+0.234719755 container init 0ef0c1d666bd9e0305c832e5c1c2f735fcc8bc681a99310f1958640d5fbe9b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cartwright, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  1 12:44:46 np0005464891 podman[272016]: 2025-10-01 16:44:46.201855089 +0000 UTC m=+0.248002591 container start 0ef0c1d666bd9e0305c832e5c1c2f735fcc8bc681a99310f1958640d5fbe9b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cartwright, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:44:46 np0005464891 sad_cartwright[272032]: 167 167
Oct  1 12:44:46 np0005464891 systemd[1]: libpod-0ef0c1d666bd9e0305c832e5c1c2f735fcc8bc681a99310f1958640d5fbe9b72.scope: Deactivated successfully.
Oct  1 12:44:46 np0005464891 podman[272016]: 2025-10-01 16:44:46.216355921 +0000 UTC m=+0.262503453 container attach 0ef0c1d666bd9e0305c832e5c1c2f735fcc8bc681a99310f1958640d5fbe9b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cartwright, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:44:46 np0005464891 podman[272016]: 2025-10-01 16:44:46.217169413 +0000 UTC m=+0.263316935 container died 0ef0c1d666bd9e0305c832e5c1c2f735fcc8bc681a99310f1958640d5fbe9b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cartwright, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 12:44:46 np0005464891 systemd[1]: var-lib-containers-storage-overlay-12e6535d9622c6106189c940bc2d33b66094baf81c46ce035a1d9b49a5fa5f23-merged.mount: Deactivated successfully.
Oct  1 12:44:46 np0005464891 podman[272016]: 2025-10-01 16:44:46.446663402 +0000 UTC m=+0.492810924 container remove 0ef0c1d666bd9e0305c832e5c1c2f735fcc8bc681a99310f1958640d5fbe9b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cartwright, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:44:46 np0005464891 systemd[1]: libpod-conmon-0ef0c1d666bd9e0305c832e5c1c2f735fcc8bc681a99310f1958640d5fbe9b72.scope: Deactivated successfully.
Oct  1 12:44:46 np0005464891 nova_compute[259907]: 2025-10-01 16:44:46.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:46 np0005464891 podman[272058]: 2025-10-01 16:44:46.716768773 +0000 UTC m=+0.102187284 container create 01629c2ede115365810946150953190fc913c84b0f98cdbb7ba6e57afa576a03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_meninsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:44:46 np0005464891 podman[272058]: 2025-10-01 16:44:46.664015395 +0000 UTC m=+0.049433906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:44:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1083: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.0 KiB/s wr, 67 op/s
Oct  1 12:44:46 np0005464891 systemd[1]: Started libpod-conmon-01629c2ede115365810946150953190fc913c84b0f98cdbb7ba6e57afa576a03.scope.
Oct  1 12:44:46 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:44:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9be5109c7112c699f324c1480c6df30b8f6a5bd3e188a03ccffa7090b0e087b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:44:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9be5109c7112c699f324c1480c6df30b8f6a5bd3e188a03ccffa7090b0e087b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:44:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9be5109c7112c699f324c1480c6df30b8f6a5bd3e188a03ccffa7090b0e087b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:44:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9be5109c7112c699f324c1480c6df30b8f6a5bd3e188a03ccffa7090b0e087b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:44:46 np0005464891 podman[272058]: 2025-10-01 16:44:46.871507197 +0000 UTC m=+0.256925788 container init 01629c2ede115365810946150953190fc913c84b0f98cdbb7ba6e57afa576a03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_meninsky, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 12:44:46 np0005464891 podman[272058]: 2025-10-01 16:44:46.880564748 +0000 UTC m=+0.265983249 container start 01629c2ede115365810946150953190fc913c84b0f98cdbb7ba6e57afa576a03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:44:46 np0005464891 podman[272058]: 2025-10-01 16:44:46.905741163 +0000 UTC m=+0.291159754 container attach 01629c2ede115365810946150953190fc913c84b0f98cdbb7ba6e57afa576a03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:44:47 np0005464891 nova_compute[259907]: 2025-10-01 16:44:47.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:48 np0005464891 dreamy_meninsky[272075]: {
Oct  1 12:44:48 np0005464891 dreamy_meninsky[272075]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:44:48 np0005464891 dreamy_meninsky[272075]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:44:48 np0005464891 dreamy_meninsky[272075]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:44:48 np0005464891 dreamy_meninsky[272075]:        "osd_id": 2,
Oct  1 12:44:48 np0005464891 dreamy_meninsky[272075]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:44:48 np0005464891 dreamy_meninsky[272075]:        "type": "bluestore"
Oct  1 12:44:48 np0005464891 dreamy_meninsky[272075]:    },
Oct  1 12:44:48 np0005464891 dreamy_meninsky[272075]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:44:48 np0005464891 dreamy_meninsky[272075]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:44:48 np0005464891 dreamy_meninsky[272075]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:44:48 np0005464891 dreamy_meninsky[272075]:        "osd_id": 0,
Oct  1 12:44:48 np0005464891 dreamy_meninsky[272075]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:44:48 np0005464891 dreamy_meninsky[272075]:        "type": "bluestore"
Oct  1 12:44:48 np0005464891 dreamy_meninsky[272075]:    },
Oct  1 12:44:48 np0005464891 dreamy_meninsky[272075]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:44:48 np0005464891 dreamy_meninsky[272075]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:44:48 np0005464891 dreamy_meninsky[272075]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:44:48 np0005464891 dreamy_meninsky[272075]:        "osd_id": 1,
Oct  1 12:44:48 np0005464891 dreamy_meninsky[272075]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:44:48 np0005464891 dreamy_meninsky[272075]:        "type": "bluestore"
Oct  1 12:44:48 np0005464891 dreamy_meninsky[272075]:    }
Oct  1 12:44:48 np0005464891 dreamy_meninsky[272075]: }
Oct  1 12:44:48 np0005464891 systemd[1]: libpod-01629c2ede115365810946150953190fc913c84b0f98cdbb7ba6e57afa576a03.scope: Deactivated successfully.
Oct  1 12:44:48 np0005464891 podman[272058]: 2025-10-01 16:44:48.096716891 +0000 UTC m=+1.482135392 container died 01629c2ede115365810946150953190fc913c84b0f98cdbb7ba6e57afa576a03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:44:48 np0005464891 systemd[1]: libpod-01629c2ede115365810946150953190fc913c84b0f98cdbb7ba6e57afa576a03.scope: Consumed 1.222s CPU time.
Oct  1 12:44:48 np0005464891 systemd[1]: var-lib-containers-storage-overlay-d9be5109c7112c699f324c1480c6df30b8f6a5bd3e188a03ccffa7090b0e087b-merged.mount: Deactivated successfully.
Oct  1 12:44:48 np0005464891 podman[272058]: 2025-10-01 16:44:48.279360796 +0000 UTC m=+1.664779297 container remove 01629c2ede115365810946150953190fc913c84b0f98cdbb7ba6e57afa576a03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  1 12:44:48 np0005464891 systemd[1]: libpod-conmon-01629c2ede115365810946150953190fc913c84b0f98cdbb7ba6e57afa576a03.scope: Deactivated successfully.
Oct  1 12:44:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:44:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:44:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:44:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:44:48 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev ae61e0fa-7aba-4d5b-82b0-d279c8703b5e does not exist
Oct  1 12:44:48 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev b4d3ca10-ebe5-4f1f-ac1d-f1d90e2ca0da does not exist
Oct  1 12:44:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1084: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 2.1 KiB/s wr, 70 op/s
Oct  1 12:44:49 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:44:49 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:44:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:44:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Oct  1 12:44:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Oct  1 12:44:50 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Oct  1 12:44:50 np0005464891 nova_compute[259907]: 2025-10-01 16:44:50.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 1.9 KiB/s wr, 63 op/s
Oct  1 12:44:52 np0005464891 nova_compute[259907]: 2025-10-01 16:44:52.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1087: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 KiB/s wr, 39 op/s
Oct  1 12:44:54 np0005464891 podman[272170]: 2025-10-01 16:44:54.044274208 +0000 UTC m=+0.137308964 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  1 12:44:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1088: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 1.1 KiB/s wr, 11 op/s
Oct  1 12:44:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:44:55 np0005464891 nova_compute[259907]: 2025-10-01 16:44:55.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1089: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 1.1 KiB/s wr, 11 op/s
Oct  1 12:44:57 np0005464891 nova_compute[259907]: 2025-10-01 16:44:57.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:44:57 np0005464891 podman[272196]: 2025-10-01 16:44:57.994999839 +0000 UTC m=+0.095444518 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:44:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1.6 KiB/s wr, 4 op/s
Oct  1 12:44:59 np0005464891 podman[272217]: 2025-10-01 16:44:59.987184708 +0000 UTC m=+0.090844030 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:45:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:45:00 np0005464891 nova_compute[259907]: 2025-10-01 16:45:00.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:45:00 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3255295729' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:45:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:45:00 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3255295729' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:45:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1091: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Oct  1 12:45:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:45:01 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3506002082' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:45:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:45:01 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3506002082' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:45:02 np0005464891 nova_compute[259907]: 2025-10-01 16:45:02.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.3 KiB/s wr, 16 op/s
Oct  1 12:45:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:45:03 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/870840326' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:45:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:45:03 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/870840326' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:45:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1093: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.7 KiB/s wr, 45 op/s
Oct  1 12:45:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:45:04 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/149593610' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:45:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:45:04 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/149593610' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:45:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:45:05 np0005464891 nova_compute[259907]: 2025-10-01 16:45:05.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1094: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.3 KiB/s wr, 44 op/s
Oct  1 12:45:07 np0005464891 nova_compute[259907]: 2025-10-01 16:45:07.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1095: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.4 KiB/s wr, 69 op/s
Oct  1 12:45:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:45:09 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1174779354' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:45:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:45:09 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1174779354' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:45:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:45:10 np0005464891 nova_compute[259907]: 2025-10-01 16:45:10.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.7 KiB/s wr, 67 op/s
Oct  1 12:45:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:45:12
Oct  1 12:45:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:45:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:45:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['images', '.rgw.root', 'backups', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data']
Oct  1 12:45:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:45:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:45:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:45:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:45:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:45:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:45:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:45:12 np0005464891 nova_compute[259907]: 2025-10-01 16:45:12.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:45:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:45:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:45:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:45:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:45:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:45:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:45:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:45:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:45:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:45:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:45:12.447 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:45:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:45:12.447 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:45:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:45:12.448 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:45:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 64 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 557 KiB/s wr, 83 op/s
Oct  1 12:45:12 np0005464891 podman[272236]: 2025-10-01 16:45:12.959658447 +0000 UTC m=+0.067647239 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  1 12:45:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1098: 321 pgs: 321 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct  1 12:45:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:45:15 np0005464891 nova_compute[259907]: 2025-10-01 16:45:15.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:45:15 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/492408081' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:45:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:45:15 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/492408081' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:45:16 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 12:45:16 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 8941 writes, 31K keys, 8941 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 8941 writes, 2290 syncs, 3.90 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3284 writes, 8242 keys, 3284 commit groups, 1.0 writes per commit group, ingest: 4.94 MB, 0.01 MB/s#012Interval WAL: 3284 writes, 1411 syncs, 2.33 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 12:45:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1099: 321 pgs: 321 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 428 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Oct  1 12:45:16 np0005464891 nova_compute[259907]: 2025-10-01 16:45:16.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:45:17 np0005464891 nova_compute[259907]: 2025-10-01 16:45:17.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1100: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 435 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Oct  1 12:45:19 np0005464891 nova_compute[259907]: 2025-10-01 16:45:19.904 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:45:19 np0005464891 nova_compute[259907]: 2025-10-01 16:45:19.905 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 12:45:19 np0005464891 nova_compute[259907]: 2025-10-01 16:45:19.905 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 12:45:20 np0005464891 nova_compute[259907]: 2025-10-01 16:45:20.009 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 12:45:20 np0005464891 nova_compute[259907]: 2025-10-01 16:45:20.009 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:45:20 np0005464891 nova_compute[259907]: 2025-10-01 16:45:20.010 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:45:20 np0005464891 nova_compute[259907]: 2025-10-01 16:45:20.010 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 12:45:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:45:20 np0005464891 nova_compute[259907]: 2025-10-01 16:45:20.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:20 np0005464891 nova_compute[259907]: 2025-10-01 16:45:20.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:45:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1101: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Oct  1 12:45:21 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 12:45:21 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 8915 writes, 33K keys, 8915 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 8915 writes, 2126 syncs, 4.19 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2210 writes, 6212 keys, 2210 commit groups, 1.0 writes per commit group, ingest: 3.85 MB, 0.01 MB/s#012Interval WAL: 2210 writes, 900 syncs, 2.46 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 12:45:21 np0005464891 nova_compute[259907]: 2025-10-01 16:45:21.800 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:45:21 np0005464891 nova_compute[259907]: 2025-10-01 16:45:21.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:45:21 np0005464891 nova_compute[259907]: 2025-10-01 16:45:21.944 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:45:21 np0005464891 nova_compute[259907]: 2025-10-01 16:45:21.945 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:45:21 np0005464891 nova_compute[259907]: 2025-10-01 16:45:21.945 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:45:21 np0005464891 nova_compute[259907]: 2025-10-01 16:45:21.945 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 12:45:21 np0005464891 nova_compute[259907]: 2025-10-01 16:45:21.945 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:45:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:45:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:45:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:45:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:45:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 12:45:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:45:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:45:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:45:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 12:45:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:45:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  1 12:45:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:45:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:45:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:45:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:45:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:45:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:45:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:45:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:45:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:45:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:45:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:45:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:45:22 np0005464891 nova_compute[259907]: 2025-10-01 16:45:22.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:22 np0005464891 ovn_controller[152409]: 2025-10-01T16:45:22Z|00044|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Oct  1 12:45:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:45:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1424391446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:45:22 np0005464891 nova_compute[259907]: 2025-10-01 16:45:22.619 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.673s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:45:22 np0005464891 nova_compute[259907]: 2025-10-01 16:45:22.778 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:45:22 np0005464891 nova_compute[259907]: 2025-10-01 16:45:22.779 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4765MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 12:45:22 np0005464891 nova_compute[259907]: 2025-10-01 16:45:22.780 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:45:22 np0005464891 nova_compute[259907]: 2025-10-01 16:45:22.780 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:45:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Oct  1 12:45:23 np0005464891 nova_compute[259907]: 2025-10-01 16:45:23.445 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 12:45:23 np0005464891 nova_compute[259907]: 2025-10-01 16:45:23.446 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 12:45:23 np0005464891 nova_compute[259907]: 2025-10-01 16:45:23.524 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:45:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:45:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2170453377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:45:24 np0005464891 nova_compute[259907]: 2025-10-01 16:45:24.179 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:45:24 np0005464891 nova_compute[259907]: 2025-10-01 16:45:24.184 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:45:24 np0005464891 nova_compute[259907]: 2025-10-01 16:45:24.344 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:45:24 np0005464891 nova_compute[259907]: 2025-10-01 16:45:24.345 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 12:45:24 np0005464891 nova_compute[259907]: 2025-10-01 16:45:24.346 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:45:24 np0005464891 nova_compute[259907]: 2025-10-01 16:45:24.347 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:45:24 np0005464891 nova_compute[259907]: 2025-10-01 16:45:24.347 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  1 12:45:24 np0005464891 nova_compute[259907]: 2025-10-01 16:45:24.715 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  1 12:45:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1103: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.2 MiB/s wr, 44 op/s
Oct  1 12:45:25 np0005464891 podman[272300]: 2025-10-01 16:45:25.010532183 +0000 UTC m=+0.118894285 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct  1 12:45:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:45:25 np0005464891 nova_compute[259907]: 2025-10-01 16:45:25.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:25 np0005464891 nova_compute[259907]: 2025-10-01 16:45:25.716 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:45:25 np0005464891 nova_compute[259907]: 2025-10-01 16:45:25.795 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:45:25 np0005464891 nova_compute[259907]: 2025-10-01 16:45:25.796 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:45:25 np0005464891 nova_compute[259907]: 2025-10-01 16:45:25.803 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:45:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Oct  1 12:45:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Oct  1 12:45:26 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Oct  1 12:45:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.8 KiB/s wr, 16 op/s
Oct  1 12:45:27 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 12:45:27 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.3 total, 600.0 interval#012Cumulative writes: 7488 writes, 29K keys, 7488 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7487 writes, 1607 syncs, 4.66 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1828 writes, 5337 keys, 1828 commit groups, 1.0 writes per commit group, ingest: 3.52 MB, 0.01 MB/s#012Interval WAL: 1827 writes, 738 syncs, 2.48 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 12:45:27 np0005464891 nova_compute[259907]: 2025-10-01 16:45:27.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:27 np0005464891 nova_compute[259907]: 2025-10-01 16:45:27.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:45:27 np0005464891 nova_compute[259907]: 2025-10-01 16:45:27.805 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  1 12:45:28 np0005464891 ceph-mgr[74592]: [devicehealth INFO root] Check health
Oct  1 12:45:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:45:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3745553596' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:45:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:45:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3745553596' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:45:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1106: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.9 KiB/s wr, 36 op/s
Oct  1 12:45:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:45:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1651746147' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:45:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:45:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1651746147' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:45:28 np0005464891 podman[272326]: 2025-10-01 16:45:28.957670623 +0000 UTC m=+0.066557940 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Oct  1 12:45:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:45:30 np0005464891 nova_compute[259907]: 2025-10-01 16:45:30.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1107: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.9 KiB/s wr, 36 op/s
Oct  1 12:45:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Oct  1 12:45:30 np0005464891 podman[272346]: 2025-10-01 16:45:30.994371402 +0000 UTC m=+0.093861244 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct  1 12:45:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Oct  1 12:45:31 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Oct  1 12:45:32 np0005464891 nova_compute[259907]: 2025-10-01 16:45:32.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1109: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 2.9 KiB/s wr, 81 op/s
Oct  1 12:45:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1110: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 3.1 KiB/s wr, 84 op/s
Oct  1 12:45:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:45:35 np0005464891 nova_compute[259907]: 2025-10-01 16:45:35.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:45:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3440618705' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:45:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:45:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3440618705' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:45:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1111: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 2.5 KiB/s wr, 68 op/s
Oct  1 12:45:37 np0005464891 nova_compute[259907]: 2025-10-01 16:45:37.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:45:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2883165417' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:45:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:45:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2883165417' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:45:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:45:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2701632801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:45:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:45:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2701632801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:45:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:45:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1281955689' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:45:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:45:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1281955689' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:45:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1112: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.6 KiB/s wr, 68 op/s
Oct  1 12:45:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:45:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Oct  1 12:45:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Oct  1 12:45:40 np0005464891 nova_compute[259907]: 2025-10-01 16:45:40.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:40 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Oct  1 12:45:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 2.7 KiB/s wr, 70 op/s
Oct  1 12:45:42 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:45:42.011 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:45:42 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:45:42.011 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 12:45:42 np0005464891 nova_compute[259907]: 2025-10-01 16:45:42.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:45:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:45:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:45:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:45:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:45:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:45:42 np0005464891 nova_compute[259907]: 2025-10-01 16:45:42.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1115: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.1 KiB/s wr, 36 op/s
Oct  1 12:45:43 np0005464891 podman[272367]: 2025-10-01 16:45:43.957193321 +0000 UTC m=+0.065021117 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  1 12:45:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1116: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 921 B/s wr, 33 op/s
Oct  1 12:45:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:45:45 np0005464891 nova_compute[259907]: 2025-10-01 16:45:45.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 921 B/s wr, 33 op/s
Oct  1 12:45:47 np0005464891 nova_compute[259907]: 2025-10-01 16:45:47.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1118: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 409 B/s wr, 1 op/s
Oct  1 12:45:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:45:49.014 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:45:49 np0005464891 nova_compute[259907]: 2025-10-01 16:45:49.118 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:45:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:45:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:45:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:45:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:45:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:45:50 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:45:50 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:45:50 np0005464891 nova_compute[259907]: 2025-10-01 16:45:50.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 497 B/s rd, 398 B/s wr, 1 op/s
Oct  1 12:45:51 np0005464891 podman[272777]: 2025-10-01 16:45:51.419232681 +0000 UTC m=+0.034494985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:45:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Oct  1 12:45:51 np0005464891 podman[272777]: 2025-10-01 16:45:51.668694122 +0000 UTC m=+0.283956346 container create 982b3d18c7dd3494842c9046ce5c42d84f0a793f8e867314be918fb3ca6c1cc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  1 12:45:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Oct  1 12:45:51 np0005464891 systemd[1]: Started libpod-conmon-982b3d18c7dd3494842c9046ce5c42d84f0a793f8e867314be918fb3ca6c1cc5.scope.
Oct  1 12:45:52 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:45:52 np0005464891 podman[272777]: 2025-10-01 16:45:52.118586168 +0000 UTC m=+0.733848382 container init 982b3d18c7dd3494842c9046ce5c42d84f0a793f8e867314be918fb3ca6c1cc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 12:45:52 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Oct  1 12:45:52 np0005464891 podman[272777]: 2025-10-01 16:45:52.13531323 +0000 UTC m=+0.750575434 container start 982b3d18c7dd3494842c9046ce5c42d84f0a793f8e867314be918fb3ca6c1cc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ardinghelli, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:45:52 np0005464891 gallant_ardinghelli[272794]: 167 167
Oct  1 12:45:52 np0005464891 systemd[1]: libpod-982b3d18c7dd3494842c9046ce5c42d84f0a793f8e867314be918fb3ca6c1cc5.scope: Deactivated successfully.
Oct  1 12:45:52 np0005464891 podman[272777]: 2025-10-01 16:45:52.25654347 +0000 UTC m=+0.871805654 container attach 982b3d18c7dd3494842c9046ce5c42d84f0a793f8e867314be918fb3ca6c1cc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ardinghelli, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:45:52 np0005464891 podman[272777]: 2025-10-01 16:45:52.257944338 +0000 UTC m=+0.873206532 container died 982b3d18c7dd3494842c9046ce5c42d84f0a793f8e867314be918fb3ca6c1cc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ardinghelli, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  1 12:45:52 np0005464891 nova_compute[259907]: 2025-10-01 16:45:52.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:52 np0005464891 systemd[1]: var-lib-containers-storage-overlay-1a047989b3529cbee17e230c4aa6bfa9d54865780fadc3cb7c054673f098fd67-merged.mount: Deactivated successfully.
Oct  1 12:45:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 818 B/s wr, 8 op/s
Oct  1 12:45:53 np0005464891 podman[272777]: 2025-10-01 16:45:53.712887797 +0000 UTC m=+2.328150021 container remove 982b3d18c7dd3494842c9046ce5c42d84f0a793f8e867314be918fb3ca6c1cc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ardinghelli, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 12:45:53 np0005464891 systemd[1]: libpod-conmon-982b3d18c7dd3494842c9046ce5c42d84f0a793f8e867314be918fb3ca6c1cc5.scope: Deactivated successfully.
Oct  1 12:45:54 np0005464891 podman[272818]: 2025-10-01 16:45:53.946171501 +0000 UTC m=+0.042366421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:45:54 np0005464891 podman[272818]: 2025-10-01 16:45:54.215610993 +0000 UTC m=+0.311805923 container create eee95c59cc911f1061f6abb5090659970ac347fa2b2bedf5e649c27d2b97621e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 12:45:54 np0005464891 systemd[1]: Started libpod-conmon-eee95c59cc911f1061f6abb5090659970ac347fa2b2bedf5e649c27d2b97621e.scope.
Oct  1 12:45:54 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:45:54 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1fec529b97562c12f2d88186b6931f591a36fc8931cfde4aecafbc076f53986/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:45:54 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1fec529b97562c12f2d88186b6931f591a36fc8931cfde4aecafbc076f53986/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:45:54 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1fec529b97562c12f2d88186b6931f591a36fc8931cfde4aecafbc076f53986/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:45:54 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1fec529b97562c12f2d88186b6931f591a36fc8931cfde4aecafbc076f53986/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:45:54 np0005464891 podman[272818]: 2025-10-01 16:45:54.481781066 +0000 UTC m=+0.577976056 container init eee95c59cc911f1061f6abb5090659970ac347fa2b2bedf5e649c27d2b97621e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_einstein, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:45:54 np0005464891 podman[272818]: 2025-10-01 16:45:54.489475049 +0000 UTC m=+0.585669969 container start eee95c59cc911f1061f6abb5090659970ac347fa2b2bedf5e649c27d2b97621e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_einstein, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct  1 12:45:54 np0005464891 podman[272818]: 2025-10-01 16:45:54.620983451 +0000 UTC m=+0.717178401 container attach eee95c59cc911f1061f6abb5090659970ac347fa2b2bedf5e649c27d2b97621e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_einstein, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:45:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1122: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 818 B/s wr, 8 op/s
Oct  1 12:45:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:45:55 np0005464891 nova_compute[259907]: 2025-10-01 16:45:55.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:56 np0005464891 podman[273371]: 2025-10-01 16:45:56.000129867 +0000 UTC m=+0.104429156 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]: [
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:    {
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:        "available": false,
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:        "ceph_device": false,
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:        "lsm_data": {},
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:        "lvs": [],
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:        "path": "/dev/sr0",
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:        "rejected_reasons": [
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "Insufficient space (<5GB)",
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "Has a FileSystem"
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:        ],
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:        "sys_api": {
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "actuators": null,
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "device_nodes": "sr0",
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "devname": "sr0",
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "human_readable_size": "482.00 KB",
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "id_bus": "ata",
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "model": "QEMU DVD-ROM",
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "nr_requests": "2",
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "parent": "/dev/sr0",
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "partitions": {},
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "path": "/dev/sr0",
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "removable": "1",
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "rev": "2.5+",
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "ro": "0",
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "rotational": "0",
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "sas_address": "",
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "sas_device_handle": "",
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "scheduler_mode": "mq-deadline",
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "sectors": 0,
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "sectorsize": "2048",
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "size": 493568.0,
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "support_discard": "2048",
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "type": "disk",
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:            "vendor": "QEMU"
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:        }
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]:    }
Oct  1 12:45:56 np0005464891 dazzling_einstein[272835]: ]
Oct  1 12:45:56 np0005464891 systemd[1]: libpod-eee95c59cc911f1061f6abb5090659970ac347fa2b2bedf5e649c27d2b97621e.scope: Deactivated successfully.
Oct  1 12:45:56 np0005464891 systemd[1]: libpod-eee95c59cc911f1061f6abb5090659970ac347fa2b2bedf5e649c27d2b97621e.scope: Consumed 1.565s CPU time.
Oct  1 12:45:56 np0005464891 podman[272818]: 2025-10-01 16:45:56.215979719 +0000 UTC m=+2.312174659 container died eee95c59cc911f1061f6abb5090659970ac347fa2b2bedf5e649c27d2b97621e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_einstein, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:45:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Oct  1 12:45:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Oct  1 12:45:56 np0005464891 systemd[1]: var-lib-containers-storage-overlay-a1fec529b97562c12f2d88186b6931f591a36fc8931cfde4aecafbc076f53986-merged.mount: Deactivated successfully.
Oct  1 12:45:56 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Oct  1 12:45:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s rd, 511 B/s wr, 9 op/s
Oct  1 12:45:57 np0005464891 podman[272818]: 2025-10-01 16:45:57.243387469 +0000 UTC m=+3.339582409 container remove eee95c59cc911f1061f6abb5090659970ac347fa2b2bedf5e649c27d2b97621e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_einstein, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 12:45:57 np0005464891 systemd[1]: libpod-conmon-eee95c59cc911f1061f6abb5090659970ac347fa2b2bedf5e649c27d2b97621e.scope: Deactivated successfully.
Oct  1 12:45:57 np0005464891 nova_compute[259907]: 2025-10-01 16:45:57.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:45:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:45:57 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:45:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:45:57 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:45:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:45:57 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:45:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:45:57 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:45:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:45:57 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:45:57 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 65d09351-d2a7-416f-ae60-a0c34644f64f does not exist
Oct  1 12:45:57 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 2f43e258-3122-43fa-87fd-2ade35f13f05 does not exist
Oct  1 12:45:57 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 0c3a8d76-f4d5-4f15-b2c3-bd170e049970 does not exist
Oct  1 12:45:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:45:57 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:45:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:45:57 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:45:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:45:57 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:45:57 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:45:57 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:45:57 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:45:58 np0005464891 podman[275081]: 2025-10-01 16:45:58.451013927 +0000 UTC m=+0.027815610 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:45:58 np0005464891 podman[275081]: 2025-10-01 16:45:58.746273552 +0000 UTC m=+0.323075155 container create 28c40c30d62ed27b8f615bfed426cdb7b459e5d3c2e088f1698c88d22ca5f11b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 12:45:58 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:45:58 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:45:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1125: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 KiB/s wr, 33 op/s
Oct  1 12:45:58 np0005464891 systemd[1]: Started libpod-conmon-28c40c30d62ed27b8f615bfed426cdb7b459e5d3c2e088f1698c88d22ca5f11b.scope.
Oct  1 12:45:58 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:45:58 np0005464891 podman[275081]: 2025-10-01 16:45:58.941739881 +0000 UTC m=+0.518541584 container init 28c40c30d62ed27b8f615bfed426cdb7b459e5d3c2e088f1698c88d22ca5f11b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 12:45:58 np0005464891 podman[275081]: 2025-10-01 16:45:58.951012497 +0000 UTC m=+0.527814140 container start 28c40c30d62ed27b8f615bfed426cdb7b459e5d3c2e088f1698c88d22ca5f11b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_albattani, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 12:45:58 np0005464891 youthful_albattani[275098]: 167 167
Oct  1 12:45:58 np0005464891 systemd[1]: libpod-28c40c30d62ed27b8f615bfed426cdb7b459e5d3c2e088f1698c88d22ca5f11b.scope: Deactivated successfully.
Oct  1 12:45:58 np0005464891 podman[275081]: 2025-10-01 16:45:58.982487397 +0000 UTC m=+0.559289030 container attach 28c40c30d62ed27b8f615bfed426cdb7b459e5d3c2e088f1698c88d22ca5f11b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_albattani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 12:45:58 np0005464891 podman[275081]: 2025-10-01 16:45:58.983791253 +0000 UTC m=+0.560592896 container died 28c40c30d62ed27b8f615bfed426cdb7b459e5d3c2e088f1698c88d22ca5f11b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_albattani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:45:59 np0005464891 systemd[1]: var-lib-containers-storage-overlay-f86d581090b7f911a98d519a7d879c37dd185e7094a59eab7349054864d01e96-merged.mount: Deactivated successfully.
Oct  1 12:45:59 np0005464891 podman[275081]: 2025-10-01 16:45:59.394798127 +0000 UTC m=+0.971599770 container remove 28c40c30d62ed27b8f615bfed426cdb7b459e5d3c2e088f1698c88d22ca5f11b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  1 12:45:59 np0005464891 systemd[1]: libpod-conmon-28c40c30d62ed27b8f615bfed426cdb7b459e5d3c2e088f1698c88d22ca5f11b.scope: Deactivated successfully.
Oct  1 12:45:59 np0005464891 podman[275116]: 2025-10-01 16:45:59.508211909 +0000 UTC m=+0.320475623 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  1 12:45:59 np0005464891 podman[275144]: 2025-10-01 16:45:59.695751289 +0000 UTC m=+0.083352623 container create fce760b3a9436dae4e5c3967122dc63c7e87a6203a1b3c94b4f256c4752c6dfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bose, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  1 12:45:59 np0005464891 podman[275144]: 2025-10-01 16:45:59.654932192 +0000 UTC m=+0.042533486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:45:59 np0005464891 systemd[1]: Started libpod-conmon-fce760b3a9436dae4e5c3967122dc63c7e87a6203a1b3c94b4f256c4752c6dfe.scope.
Oct  1 12:45:59 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:45:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Oct  1 12:45:59 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac05a01723a1c7e9d74f88f95a0e2ba83681e81407f1ac56bb2b533e6c4293d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:45:59 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac05a01723a1c7e9d74f88f95a0e2ba83681e81407f1ac56bb2b533e6c4293d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:45:59 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac05a01723a1c7e9d74f88f95a0e2ba83681e81407f1ac56bb2b533e6c4293d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:45:59 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac05a01723a1c7e9d74f88f95a0e2ba83681e81407f1ac56bb2b533e6c4293d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:45:59 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac05a01723a1c7e9d74f88f95a0e2ba83681e81407f1ac56bb2b533e6c4293d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:45:59 np0005464891 podman[275144]: 2025-10-01 16:45:59.860704866 +0000 UTC m=+0.248306230 container init fce760b3a9436dae4e5c3967122dc63c7e87a6203a1b3c94b4f256c4752c6dfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:45:59 np0005464891 podman[275144]: 2025-10-01 16:45:59.869524079 +0000 UTC m=+0.257125403 container start fce760b3a9436dae4e5c3967122dc63c7e87a6203a1b3c94b4f256c4752c6dfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:45:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Oct  1 12:45:59 np0005464891 podman[275144]: 2025-10-01 16:45:59.903736524 +0000 UTC m=+0.291337888 container attach fce760b3a9436dae4e5c3967122dc63c7e87a6203a1b3c94b4f256c4752c6dfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bose, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:45:59 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Oct  1 12:46:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:46:00 np0005464891 nova_compute[259907]: 2025-10-01 16:46:00.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:46:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:46:00 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3722516867' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:46:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:46:00 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3722516867' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:46:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 KiB/s wr, 24 op/s
Oct  1 12:46:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Oct  1 12:46:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Oct  1 12:46:00 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Oct  1 12:46:01 np0005464891 awesome_bose[275160]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:46:01 np0005464891 awesome_bose[275160]: --> relative data size: 1.0
Oct  1 12:46:01 np0005464891 awesome_bose[275160]: --> All data devices are unavailable
Oct  1 12:46:01 np0005464891 systemd[1]: libpod-fce760b3a9436dae4e5c3967122dc63c7e87a6203a1b3c94b4f256c4752c6dfe.scope: Deactivated successfully.
Oct  1 12:46:01 np0005464891 podman[275144]: 2025-10-01 16:46:01.060015434 +0000 UTC m=+1.447616738 container died fce760b3a9436dae4e5c3967122dc63c7e87a6203a1b3c94b4f256c4752c6dfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  1 12:46:01 np0005464891 systemd[1]: libpod-fce760b3a9436dae4e5c3967122dc63c7e87a6203a1b3c94b4f256c4752c6dfe.scope: Consumed 1.100s CPU time.
Oct  1 12:46:01 np0005464891 systemd[1]: var-lib-containers-storage-overlay-4ac05a01723a1c7e9d74f88f95a0e2ba83681e81407f1ac56bb2b533e6c4293d-merged.mount: Deactivated successfully.
Oct  1 12:46:02 np0005464891 podman[275144]: 2025-10-01 16:46:02.02066383 +0000 UTC m=+2.408265154 container remove fce760b3a9436dae4e5c3967122dc63c7e87a6203a1b3c94b4f256c4752c6dfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 12:46:02 np0005464891 systemd[1]: libpod-conmon-fce760b3a9436dae4e5c3967122dc63c7e87a6203a1b3c94b4f256c4752c6dfe.scope: Deactivated successfully.
Oct  1 12:46:02 np0005464891 podman[275189]: 2025-10-01 16:46:02.122590546 +0000 UTC m=+1.025870059 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  1 12:46:02 np0005464891 nova_compute[259907]: 2025-10-01 16:46:02.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:46:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.8 KiB/s wr, 106 op/s
Oct  1 12:46:02 np0005464891 podman[275363]: 2025-10-01 16:46:02.800245583 +0000 UTC m=+0.031855540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:46:03 np0005464891 podman[275363]: 2025-10-01 16:46:03.042954198 +0000 UTC m=+0.274564115 container create 924ca8dec7201ab5f6a176467a3a2c1c308001d1fe1a13dc8f69bd18da01a190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:46:03 np0005464891 systemd[1]: Started libpod-conmon-924ca8dec7201ab5f6a176467a3a2c1c308001d1fe1a13dc8f69bd18da01a190.scope.
Oct  1 12:46:03 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:46:03 np0005464891 podman[275363]: 2025-10-01 16:46:03.411729435 +0000 UTC m=+0.643339432 container init 924ca8dec7201ab5f6a176467a3a2c1c308001d1fe1a13dc8f69bd18da01a190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_darwin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 12:46:03 np0005464891 podman[275363]: 2025-10-01 16:46:03.424501837 +0000 UTC m=+0.656111794 container start 924ca8dec7201ab5f6a176467a3a2c1c308001d1fe1a13dc8f69bd18da01a190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_darwin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 12:46:03 np0005464891 systemd[1]: libpod-924ca8dec7201ab5f6a176467a3a2c1c308001d1fe1a13dc8f69bd18da01a190.scope: Deactivated successfully.
Oct  1 12:46:03 np0005464891 inspiring_darwin[275380]: 167 167
Oct  1 12:46:03 np0005464891 conmon[275380]: conmon 924ca8dec7201ab5f6a1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-924ca8dec7201ab5f6a176467a3a2c1c308001d1fe1a13dc8f69bd18da01a190.scope/container/memory.events
Oct  1 12:46:03 np0005464891 podman[275363]: 2025-10-01 16:46:03.509945238 +0000 UTC m=+0.741555245 container attach 924ca8dec7201ab5f6a176467a3a2c1c308001d1fe1a13dc8f69bd18da01a190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_darwin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:46:03 np0005464891 podman[275363]: 2025-10-01 16:46:03.511159631 +0000 UTC m=+0.742769588 container died 924ca8dec7201ab5f6a176467a3a2c1c308001d1fe1a13dc8f69bd18da01a190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_darwin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 12:46:03 np0005464891 systemd[1]: var-lib-containers-storage-overlay-5a39c544d418f6d984ff2146efc02e7557ec1f475e304c3f3e600e8874c2ec36-merged.mount: Deactivated successfully.
Oct  1 12:46:04 np0005464891 podman[275363]: 2025-10-01 16:46:04.327872092 +0000 UTC m=+1.559482019 container remove 924ca8dec7201ab5f6a176467a3a2c1c308001d1fe1a13dc8f69bd18da01a190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_darwin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:46:04 np0005464891 systemd[1]: libpod-conmon-924ca8dec7201ab5f6a176467a3a2c1c308001d1fe1a13dc8f69bd18da01a190.scope: Deactivated successfully.
Oct  1 12:46:04 np0005464891 podman[275404]: 2025-10-01 16:46:04.532098973 +0000 UTC m=+0.031396669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:46:04 np0005464891 podman[275404]: 2025-10-01 16:46:04.786354086 +0000 UTC m=+0.285651732 container create 719b1302f9b5a9433374ed11fd6c5bc82c9be24426c47585a3a63be871131920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_robinson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  1 12:46:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1130: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.9 KiB/s wr, 83 op/s
Oct  1 12:46:04 np0005464891 systemd[1]: Started libpod-conmon-719b1302f9b5a9433374ed11fd6c5bc82c9be24426c47585a3a63be871131920.scope.
Oct  1 12:46:04 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:46:04 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7c9ff8f1b0e544fe269ede3f61fa6bdc9d3246b8ca6f5807379c64c9ebdb0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:46:04 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7c9ff8f1b0e544fe269ede3f61fa6bdc9d3246b8ca6f5807379c64c9ebdb0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:46:04 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7c9ff8f1b0e544fe269ede3f61fa6bdc9d3246b8ca6f5807379c64c9ebdb0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:46:04 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7c9ff8f1b0e544fe269ede3f61fa6bdc9d3246b8ca6f5807379c64c9ebdb0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:46:05 np0005464891 podman[275404]: 2025-10-01 16:46:05.164224663 +0000 UTC m=+0.663522329 container init 719b1302f9b5a9433374ed11fd6c5bc82c9be24426c47585a3a63be871131920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  1 12:46:05 np0005464891 podman[275404]: 2025-10-01 16:46:05.175937627 +0000 UTC m=+0.675235283 container start 719b1302f9b5a9433374ed11fd6c5bc82c9be24426c47585a3a63be871131920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_robinson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:46:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:46:05 np0005464891 nova_compute[259907]: 2025-10-01 16:46:05.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:46:05 np0005464891 podman[275404]: 2025-10-01 16:46:05.55431611 +0000 UTC m=+1.053613846 container attach 719b1302f9b5a9433374ed11fd6c5bc82c9be24426c47585a3a63be871131920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]: {
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:    "0": [
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:        {
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "devices": [
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "/dev/loop3"
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            ],
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "lv_name": "ceph_lv0",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "lv_size": "21470642176",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "name": "ceph_lv0",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "tags": {
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.cluster_name": "ceph",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.crush_device_class": "",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.encrypted": "0",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.osd_id": "0",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.type": "block",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.vdo": "0"
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            },
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "type": "block",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "vg_name": "ceph_vg0"
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:        }
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:    ],
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:    "1": [
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:        {
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "devices": [
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "/dev/loop4"
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            ],
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "lv_name": "ceph_lv1",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "lv_size": "21470642176",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "name": "ceph_lv1",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "tags": {
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.cluster_name": "ceph",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.crush_device_class": "",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.encrypted": "0",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.osd_id": "1",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.type": "block",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.vdo": "0"
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            },
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "type": "block",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "vg_name": "ceph_vg1"
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:        }
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:    ],
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:    "2": [
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:        {
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "devices": [
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "/dev/loop5"
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            ],
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "lv_name": "ceph_lv2",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "lv_size": "21470642176",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "name": "ceph_lv2",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "tags": {
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.cluster_name": "ceph",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.crush_device_class": "",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.encrypted": "0",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.osd_id": "2",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.type": "block",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:                "ceph.vdo": "0"
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            },
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "type": "block",
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:            "vg_name": "ceph_vg2"
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:        }
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]:    ]
Oct  1 12:46:06 np0005464891 gallant_robinson[275420]: }
Oct  1 12:46:06 np0005464891 systemd[1]: libpod-719b1302f9b5a9433374ed11fd6c5bc82c9be24426c47585a3a63be871131920.scope: Deactivated successfully.
Oct  1 12:46:06 np0005464891 podman[275404]: 2025-10-01 16:46:06.668427964 +0000 UTC m=+2.167725690 container died 719b1302f9b5a9433374ed11fd6c5bc82c9be24426c47585a3a63be871131920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:46:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 611 KiB/s rd, 2.7 KiB/s wr, 59 op/s
Oct  1 12:46:07 np0005464891 systemd[1]: var-lib-containers-storage-overlay-6c7c9ff8f1b0e544fe269ede3f61fa6bdc9d3246b8ca6f5807379c64c9ebdb0e-merged.mount: Deactivated successfully.
Oct  1 12:46:07 np0005464891 nova_compute[259907]: 2025-10-01 16:46:07.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:46:07 np0005464891 podman[275404]: 2025-10-01 16:46:07.850988829 +0000 UTC m=+3.350286475 container remove 719b1302f9b5a9433374ed11fd6c5bc82c9be24426c47585a3a63be871131920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_robinson, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:46:07 np0005464891 systemd[1]: libpod-conmon-719b1302f9b5a9433374ed11fd6c5bc82c9be24426c47585a3a63be871131920.scope: Deactivated successfully.
Oct  1 12:46:08 np0005464891 podman[275588]: 2025-10-01 16:46:08.728498449 +0000 UTC m=+0.105463605 container create 57ecf5f079e035fecdc3cfab671816053e120a2a3e18d43cf8ca6a13107b56c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lewin, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 12:46:08 np0005464891 podman[275588]: 2025-10-01 16:46:08.666135646 +0000 UTC m=+0.043100842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:46:08 np0005464891 systemd[1]: Started libpod-conmon-57ecf5f079e035fecdc3cfab671816053e120a2a3e18d43cf8ca6a13107b56c4.scope.
Oct  1 12:46:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 573 KiB/s rd, 3.3 KiB/s wr, 86 op/s
Oct  1 12:46:08 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:46:08 np0005464891 podman[275588]: 2025-10-01 16:46:08.86794783 +0000 UTC m=+0.244912986 container init 57ecf5f079e035fecdc3cfab671816053e120a2a3e18d43cf8ca6a13107b56c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lewin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 12:46:08 np0005464891 podman[275588]: 2025-10-01 16:46:08.877988538 +0000 UTC m=+0.254953694 container start 57ecf5f079e035fecdc3cfab671816053e120a2a3e18d43cf8ca6a13107b56c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lewin, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 12:46:08 np0005464891 frosty_lewin[275604]: 167 167
Oct  1 12:46:08 np0005464891 systemd[1]: libpod-57ecf5f079e035fecdc3cfab671816053e120a2a3e18d43cf8ca6a13107b56c4.scope: Deactivated successfully.
Oct  1 12:46:08 np0005464891 podman[275588]: 2025-10-01 16:46:08.897403694 +0000 UTC m=+0.274368860 container attach 57ecf5f079e035fecdc3cfab671816053e120a2a3e18d43cf8ca6a13107b56c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lewin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:46:08 np0005464891 podman[275588]: 2025-10-01 16:46:08.898070472 +0000 UTC m=+0.275035668 container died 57ecf5f079e035fecdc3cfab671816053e120a2a3e18d43cf8ca6a13107b56c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  1 12:46:09 np0005464891 systemd[1]: var-lib-containers-storage-overlay-16336267fe677d935cee2e4022a8d5142730d1d7e5ef9187f1729493c2e71910-merged.mount: Deactivated successfully.
Oct  1 12:46:09 np0005464891 podman[275588]: 2025-10-01 16:46:09.107929039 +0000 UTC m=+0.484894215 container remove 57ecf5f079e035fecdc3cfab671816053e120a2a3e18d43cf8ca6a13107b56c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lewin, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:46:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:46:09 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/385953894' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:46:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:46:09 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/385953894' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:46:09 np0005464891 systemd[1]: libpod-conmon-57ecf5f079e035fecdc3cfab671816053e120a2a3e18d43cf8ca6a13107b56c4.scope: Deactivated successfully.
Oct  1 12:46:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:46:09 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3537549881' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:46:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:46:09 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3537549881' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:46:09 np0005464891 podman[275628]: 2025-10-01 16:46:09.347648641 +0000 UTC m=+0.047367309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:46:09 np0005464891 podman[275628]: 2025-10-01 16:46:09.519592091 +0000 UTC m=+0.219310759 container create 28c71fdf69447476bdfa15e2436c0ad89442a91e348072c33f536063837f8878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_keldysh, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 12:46:09 np0005464891 systemd[1]: Started libpod-conmon-28c71fdf69447476bdfa15e2436c0ad89442a91e348072c33f536063837f8878.scope.
Oct  1 12:46:09 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:46:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2b1d0354c1744e64c36b4e1d6cc0f4de12a4ab320bad27d50172b9ef715778/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:46:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2b1d0354c1744e64c36b4e1d6cc0f4de12a4ab320bad27d50172b9ef715778/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:46:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2b1d0354c1744e64c36b4e1d6cc0f4de12a4ab320bad27d50172b9ef715778/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:46:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2b1d0354c1744e64c36b4e1d6cc0f4de12a4ab320bad27d50172b9ef715778/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:46:09 np0005464891 podman[275628]: 2025-10-01 16:46:09.820249605 +0000 UTC m=+0.519968323 container init 28c71fdf69447476bdfa15e2436c0ad89442a91e348072c33f536063837f8878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:46:09 np0005464891 podman[275628]: 2025-10-01 16:46:09.833021028 +0000 UTC m=+0.532739706 container start 28c71fdf69447476bdfa15e2436c0ad89442a91e348072c33f536063837f8878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 12:46:09 np0005464891 podman[275628]: 2025-10-01 16:46:09.96992439 +0000 UTC m=+0.669643118 container attach 28c71fdf69447476bdfa15e2436c0ad89442a91e348072c33f536063837f8878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_keldysh, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  1 12:46:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:46:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Oct  1 12:46:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Oct  1 12:46:10 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Oct  1 12:46:10 np0005464891 nova_compute[259907]: 2025-10-01 16:46:10.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:46:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1134: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 518 KiB/s rd, 2.9 KiB/s wr, 77 op/s
Oct  1 12:46:10 np0005464891 recursing_keldysh[275644]: {
Oct  1 12:46:10 np0005464891 recursing_keldysh[275644]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:46:10 np0005464891 recursing_keldysh[275644]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:46:10 np0005464891 recursing_keldysh[275644]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:46:10 np0005464891 recursing_keldysh[275644]:        "osd_id": 2,
Oct  1 12:46:10 np0005464891 recursing_keldysh[275644]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:46:10 np0005464891 recursing_keldysh[275644]:        "type": "bluestore"
Oct  1 12:46:10 np0005464891 recursing_keldysh[275644]:    },
Oct  1 12:46:10 np0005464891 recursing_keldysh[275644]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:46:10 np0005464891 recursing_keldysh[275644]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:46:10 np0005464891 recursing_keldysh[275644]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:46:10 np0005464891 recursing_keldysh[275644]:        "osd_id": 0,
Oct  1 12:46:10 np0005464891 recursing_keldysh[275644]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:46:10 np0005464891 recursing_keldysh[275644]:        "type": "bluestore"
Oct  1 12:46:10 np0005464891 recursing_keldysh[275644]:    },
Oct  1 12:46:10 np0005464891 recursing_keldysh[275644]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:46:10 np0005464891 recursing_keldysh[275644]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:46:10 np0005464891 recursing_keldysh[275644]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:46:10 np0005464891 recursing_keldysh[275644]:        "osd_id": 1,
Oct  1 12:46:10 np0005464891 recursing_keldysh[275644]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:46:10 np0005464891 recursing_keldysh[275644]:        "type": "bluestore"
Oct  1 12:46:10 np0005464891 recursing_keldysh[275644]:    }
Oct  1 12:46:10 np0005464891 recursing_keldysh[275644]: }
Oct  1 12:46:10 np0005464891 systemd[1]: libpod-28c71fdf69447476bdfa15e2436c0ad89442a91e348072c33f536063837f8878.scope: Deactivated successfully.
Oct  1 12:46:10 np0005464891 systemd[1]: libpod-28c71fdf69447476bdfa15e2436c0ad89442a91e348072c33f536063837f8878.scope: Consumed 1.112s CPU time.
Oct  1 12:46:10 np0005464891 podman[275628]: 2025-10-01 16:46:10.955763092 +0000 UTC m=+1.655481760 container died 28c71fdf69447476bdfa15e2436c0ad89442a91e348072c33f536063837f8878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  1 12:46:11 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ae2b1d0354c1744e64c36b4e1d6cc0f4de12a4ab320bad27d50172b9ef715778-merged.mount: Deactivated successfully.
Oct  1 12:46:11 np0005464891 podman[275628]: 2025-10-01 16:46:11.8908501 +0000 UTC m=+2.590568738 container remove 28c71fdf69447476bdfa15e2436c0ad89442a91e348072c33f536063837f8878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_keldysh, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 12:46:11 np0005464891 systemd[1]: libpod-conmon-28c71fdf69447476bdfa15e2436c0ad89442a91e348072c33f536063837f8878.scope: Deactivated successfully.
Oct  1 12:46:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:46:12 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:46:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:46:12 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:46:12 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 2d177663-f334-4809-b8ec-7bfaa27af6c6 does not exist
Oct  1 12:46:12 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev b62a8e9b-e061-4b72-b52d-effd1ffc4e0c does not exist
Oct  1 12:46:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:46:12
Oct  1 12:46:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:46:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:46:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.control', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'default.rgw.log', 'volumes']
Oct  1 12:46:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:46:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:46:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:46:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:46:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:46:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:46:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:46:12 np0005464891 nova_compute[259907]: 2025-10-01 16:46:12.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:46:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:46:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:46:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:46:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:46:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:46:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:46:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:46:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:46:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:46:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:46:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:46:12.449 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:46:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:46:12.449 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:46:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:46:12.450 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:46:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1135: 321 pgs: 321 active+clean; 69 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.1 MiB/s wr, 54 op/s
Oct  1 12:46:12 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:46:12 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:46:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Oct  1 12:46:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Oct  1 12:46:14 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Oct  1 12:46:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1137: 321 pgs: 321 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 2.7 MiB/s wr, 86 op/s
Oct  1 12:46:15 np0005464891 podman[275744]: 2025-10-01 16:46:15.010353588 +0000 UTC m=+0.111508081 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct  1 12:46:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Oct  1 12:46:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Oct  1 12:46:15 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Oct  1 12:46:15 np0005464891 nova_compute[259907]: 2025-10-01 16:46:15.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:46:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1139: 321 pgs: 321 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 MiB/s wr, 62 op/s
Oct  1 12:46:17 np0005464891 nova_compute[259907]: 2025-10-01 16:46:17.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:46:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1140: 321 pgs: 321 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 2.7 MiB/s wr, 130 op/s
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:46:19.079597) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337179079640, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2284, "num_deletes": 259, "total_data_size": 3538385, "memory_usage": 3606496, "flush_reason": "Manual Compaction"}
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337179259302, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3443728, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21160, "largest_seqno": 23443, "table_properties": {"data_size": 3433187, "index_size": 6779, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22349, "raw_average_key_size": 20, "raw_value_size": 3411837, "raw_average_value_size": 3197, "num_data_blocks": 300, "num_entries": 1067, "num_filter_entries": 1067, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759336991, "oldest_key_time": 1759336991, "file_creation_time": 1759337179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 179905 microseconds, and 13161 cpu microseconds.
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:46:19.259498) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3443728 bytes OK
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:46:19.259587) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:46:19.449844) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:46:19.449920) EVENT_LOG_v1 {"time_micros": 1759337179449907, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:46:19.449945) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3528522, prev total WAL file size 3528522, number of live WAL files 2.
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:46:19.451486) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3363KB)], [50(7372KB)]
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337179451782, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10992756, "oldest_snapshot_seqno": -1}
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4961 keys, 9284779 bytes, temperature: kUnknown
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337179807293, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9284779, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9248378, "index_size": 22915, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12421, "raw_key_size": 121972, "raw_average_key_size": 24, "raw_value_size": 9155571, "raw_average_value_size": 1845, "num_data_blocks": 955, "num_entries": 4961, "num_filter_entries": 4961, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759337179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:46:19.807716) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9284779 bytes
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:46:19.836389) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 30.9 rd, 26.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.2 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5488, records dropped: 527 output_compression: NoCompression
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:46:19.836526) EVENT_LOG_v1 {"time_micros": 1759337179836437, "job": 26, "event": "compaction_finished", "compaction_time_micros": 355610, "compaction_time_cpu_micros": 39549, "output_level": 6, "num_output_files": 1, "total_output_size": 9284779, "num_input_records": 5488, "num_output_records": 4961, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337179837921, "job": 26, "event": "table_file_deletion", "file_number": 52}
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337179840727, "job": 26, "event": "table_file_deletion", "file_number": 50}
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:46:19.451330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:46:19.840781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:46:19.840787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:46:19.840791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:46:19.840794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:46:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:46:19.840797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:46:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:46:20 np0005464891 nova_compute[259907]: 2025-10-01 16:46:20.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:46:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1141: 321 pgs: 321 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 1.3 MiB/s wr, 102 op/s
Oct  1 12:46:20 np0005464891 nova_compute[259907]: 2025-10-01 16:46:20.904 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:46:20 np0005464891 nova_compute[259907]: 2025-10-01 16:46:20.905 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 12:46:20 np0005464891 nova_compute[259907]: 2025-10-01 16:46:20.906 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 12:46:20 np0005464891 nova_compute[259907]: 2025-10-01 16:46:20.964 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 12:46:21 np0005464891 nova_compute[259907]: 2025-10-01 16:46:21.803 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:46:21 np0005464891 nova_compute[259907]: 2025-10-01 16:46:21.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:46:21 np0005464891 nova_compute[259907]: 2025-10-01 16:46:21.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:46:21 np0005464891 nova_compute[259907]: 2025-10-01 16:46:21.805 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 12:46:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:46:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3586081430' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:46:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:46:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3586081430' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:46:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:46:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:46:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:46:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:46:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 12:46:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:46:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003471416739923162 of space, bias 1.0, pg target 0.10414250219769486 quantized to 32 (current 32)
Oct  1 12:46:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:46:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 12:46:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:46:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  1 12:46:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:46:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:46:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:46:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:46:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:46:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:46:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:46:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:46:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:46:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:46:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:46:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:46:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:46:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/500786903' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:46:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:46:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/500786903' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:46:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Oct  1 12:46:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Oct  1 12:46:22 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Oct  1 12:46:22 np0005464891 nova_compute[259907]: 2025-10-01 16:46:22.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:46:22 np0005464891 nova_compute[259907]: 2025-10-01 16:46:22.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:46:22 np0005464891 nova_compute[259907]: 2025-10-01 16:46:22.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:46:22 np0005464891 nova_compute[259907]: 2025-10-01 16:46:22.835 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:46:22 np0005464891 nova_compute[259907]: 2025-10-01 16:46:22.837 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:46:22 np0005464891 nova_compute[259907]: 2025-10-01 16:46:22.837 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:46:22 np0005464891 nova_compute[259907]: 2025-10-01 16:46:22.838 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 12:46:22 np0005464891 nova_compute[259907]: 2025-10-01 16:46:22.838 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:46:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1143: 321 pgs: 321 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 8.5 KiB/s wr, 168 op/s
Oct  1 12:46:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:46:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3885049411' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:46:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:46:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3885049411' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:46:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:46:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2269380264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:46:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:46:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2269380264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:46:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:46:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3059486954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:46:23 np0005464891 nova_compute[259907]: 2025-10-01 16:46:23.315 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:46:23 np0005464891 nova_compute[259907]: 2025-10-01 16:46:23.492 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:46:23 np0005464891 nova_compute[259907]: 2025-10-01 16:46:23.494 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4719MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 12:46:23 np0005464891 nova_compute[259907]: 2025-10-01 16:46:23.494 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:46:23 np0005464891 nova_compute[259907]: 2025-10-01 16:46:23.494 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:46:23 np0005464891 nova_compute[259907]: 2025-10-01 16:46:23.586 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 12:46:23 np0005464891 nova_compute[259907]: 2025-10-01 16:46:23.588 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 12:46:23 np0005464891 nova_compute[259907]: 2025-10-01 16:46:23.603 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:46:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:46:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/722629311' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:46:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:46:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/722629311' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:46:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:46:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1983545640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:46:24 np0005464891 nova_compute[259907]: 2025-10-01 16:46:24.081 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:46:24 np0005464891 nova_compute[259907]: 2025-10-01 16:46:24.087 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:46:24 np0005464891 nova_compute[259907]: 2025-10-01 16:46:24.103 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:46:24 np0005464891 nova_compute[259907]: 2025-10-01 16:46:24.105 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 12:46:24 np0005464891 nova_compute[259907]: 2025-10-01 16:46:24.106 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:46:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:46:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1691694896' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:46:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:46:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1691694896' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:46:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:46:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1911864109' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:46:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:46:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1911864109' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:46:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1144: 321 pgs: 321 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 7.3 KiB/s wr, 161 op/s
Oct  1 12:46:25 np0005464891 nova_compute[259907]: 2025-10-01 16:46:25.106 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:46:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:46:25 np0005464891 nova_compute[259907]: 2025-10-01 16:46:25.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:46:25 np0005464891 nova_compute[259907]: 2025-10-01 16:46:25.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:46:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Oct  1 12:46:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Oct  1 12:46:26 np0005464891 nova_compute[259907]: 2025-10-01 16:46:26.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:46:26 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Oct  1 12:46:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1146: 321 pgs: 321 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 2.4 KiB/s wr, 110 op/s
Oct  1 12:46:27 np0005464891 podman[275807]: 2025-10-01 16:46:27.014382408 +0000 UTC m=+0.113999900 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 12:46:27 np0005464891 nova_compute[259907]: 2025-10-01 16:46:27.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:46:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1147: 321 pgs: 321 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 153 KiB/s rd, 5.7 KiB/s wr, 201 op/s
Oct  1 12:46:29 np0005464891 podman[275833]: 2025-10-01 16:46:29.96462 +0000 UTC m=+0.074471868 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct  1 12:46:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:46:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Oct  1 12:46:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Oct  1 12:46:30 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Oct  1 12:46:30 np0005464891 nova_compute[259907]: 2025-10-01 16:46:30.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:46:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1149: 321 pgs: 321 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 3.5 KiB/s wr, 114 op/s
Oct  1 12:46:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:46:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3291773397' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:46:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:46:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3291773397' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:46:32 np0005464891 nova_compute[259907]: 2025-10-01 16:46:32.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:46:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1150: 321 pgs: 321 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 654 KiB/s rd, 4.9 KiB/s wr, 114 op/s
Oct  1 12:46:32 np0005464891 podman[275853]: 2025-10-01 16:46:32.99328687 +0000 UTC m=+0.090021927 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct  1 12:46:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:46:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2809037686' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:46:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:46:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2809037686' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:46:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1151: 321 pgs: 321 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.8 KiB/s wr, 146 op/s
Oct  1 12:46:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:46:35 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/556885527' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:46:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:46:35 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/556885527' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:46:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:46:35 np0005464891 nova_compute[259907]: 2025-10-01 16:46:35.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:46:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:46:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2959041060' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:46:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:46:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2959041060' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:46:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1152: 321 pgs: 321 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.7 KiB/s wr, 117 op/s
Oct  1 12:46:37 np0005464891 nova_compute[259907]: 2025-10-01 16:46:37.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:46:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1153: 321 pgs: 321 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct  1 12:46:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:46:40 np0005464891 nova_compute[259907]: 2025-10-01 16:46:40.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:46:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1154: 321 pgs: 321 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 105 op/s
Oct  1 12:46:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:46:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:46:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:46:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:46:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:46:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:46:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:46:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3466662623' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:46:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:46:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3466662623' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:46:42 np0005464891 nova_compute[259907]: 2025-10-01 16:46:42.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:46:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1155: 321 pgs: 321 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Oct  1 12:46:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:46:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/251910313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:46:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:46:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/251910313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:46:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1156: 321 pgs: 321 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Oct  1 12:46:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:46:45 np0005464891 nova_compute[259907]: 2025-10-01 16:46:45.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:46:46 np0005464891 podman[275876]: 2025-10-01 16:46:46.025235247 +0000 UTC m=+0.110902434 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:46:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1157: 321 pgs: 321 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Oct  1 12:46:47 np0005464891 nova_compute[259907]: 2025-10-01 16:46:47.178 2 DEBUG oslo_concurrency.lockutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Acquiring lock "0bd6a299-e725-4c0f-81ed-726c1167dde0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:46:47 np0005464891 nova_compute[259907]: 2025-10-01 16:46:47.178 2 DEBUG oslo_concurrency.lockutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Lock "0bd6a299-e725-4c0f-81ed-726c1167dde0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:46:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:46:47.216 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:46:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:46:47.218 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 12:46:47 np0005464891 nova_compute[259907]: 2025-10-01 16:46:47.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:46:47 np0005464891 nova_compute[259907]: 2025-10-01 16:46:47.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:46:47 np0005464891 nova_compute[259907]: 2025-10-01 16:46:47.380 2 DEBUG nova.compute.manager [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 12:46:47 np0005464891 nova_compute[259907]: 2025-10-01 16:46:47.819 2 DEBUG oslo_concurrency.lockutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:46:47 np0005464891 nova_compute[259907]: 2025-10-01 16:46:47.820 2 DEBUG oslo_concurrency.lockutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:46:47 np0005464891 nova_compute[259907]: 2025-10-01 16:46:47.831 2 DEBUG nova.virt.hardware [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 12:46:47 np0005464891 nova_compute[259907]: 2025-10-01 16:46:47.831 2 INFO nova.compute.claims [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 12:46:48 np0005464891 nova_compute[259907]: 2025-10-01 16:46:48.028 2 DEBUG oslo_concurrency.processutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:46:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:46:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/855358888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:46:48 np0005464891 nova_compute[259907]: 2025-10-01 16:46:48.484 2 DEBUG oslo_concurrency.processutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:46:48 np0005464891 nova_compute[259907]: 2025-10-01 16:46:48.494 2 DEBUG nova.compute.provider_tree [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:46:48 np0005464891 nova_compute[259907]: 2025-10-01 16:46:48.528 2 DEBUG nova.scheduler.client.report [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:46:48 np0005464891 nova_compute[259907]: 2025-10-01 16:46:48.578 2 DEBUG oslo_concurrency.lockutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:46:48 np0005464891 nova_compute[259907]: 2025-10-01 16:46:48.580 2 DEBUG nova.compute.manager [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 12:46:48 np0005464891 nova_compute[259907]: 2025-10-01 16:46:48.688 2 DEBUG nova.compute.manager [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 12:46:48 np0005464891 nova_compute[259907]: 2025-10-01 16:46:48.689 2 DEBUG nova.network.neutron [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 12:46:48 np0005464891 nova_compute[259907]: 2025-10-01 16:46:48.727 2 INFO nova.virt.libvirt.driver [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 12:46:48 np0005464891 nova_compute[259907]: 2025-10-01 16:46:48.808 2 DEBUG nova.compute.manager [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 12:46:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 104 op/s
Oct  1 12:46:48 np0005464891 nova_compute[259907]: 2025-10-01 16:46:48.924 2 DEBUG nova.compute.manager [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 12:46:48 np0005464891 nova_compute[259907]: 2025-10-01 16:46:48.926 2 DEBUG nova.virt.libvirt.driver [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 12:46:48 np0005464891 nova_compute[259907]: 2025-10-01 16:46:48.926 2 INFO nova.virt.libvirt.driver [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Creating image(s)#033[00m
Oct  1 12:46:48 np0005464891 nova_compute[259907]: 2025-10-01 16:46:48.952 2 DEBUG nova.storage.rbd_utils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] rbd image 0bd6a299-e725-4c0f-81ed-726c1167dde0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:46:48 np0005464891 nova_compute[259907]: 2025-10-01 16:46:48.990 2 DEBUG nova.storage.rbd_utils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] rbd image 0bd6a299-e725-4c0f-81ed-726c1167dde0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:46:49 np0005464891 nova_compute[259907]: 2025-10-01 16:46:49.034 2 DEBUG nova.storage.rbd_utils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] rbd image 0bd6a299-e725-4c0f-81ed-726c1167dde0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:46:49 np0005464891 nova_compute[259907]: 2025-10-01 16:46:49.039 2 DEBUG oslo_concurrency.processutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:46:49 np0005464891 nova_compute[259907]: 2025-10-01 16:46:49.106 2 DEBUG oslo_concurrency.processutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:46:49 np0005464891 nova_compute[259907]: 2025-10-01 16:46:49.107 2 DEBUG oslo_concurrency.lockutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Acquiring lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:46:49 np0005464891 nova_compute[259907]: 2025-10-01 16:46:49.109 2 DEBUG oslo_concurrency.lockutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:46:49 np0005464891 nova_compute[259907]: 2025-10-01 16:46:49.109 2 DEBUG oslo_concurrency.lockutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:46:49 np0005464891 nova_compute[259907]: 2025-10-01 16:46:49.148 2 DEBUG nova.storage.rbd_utils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] rbd image 0bd6a299-e725-4c0f-81ed-726c1167dde0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:46:49 np0005464891 nova_compute[259907]: 2025-10-01 16:46:49.155 2 DEBUG oslo_concurrency.processutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa 0bd6a299-e725-4c0f-81ed-726c1167dde0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:46:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:46:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/771791603' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:46:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:46:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/771791603' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:46:49 np0005464891 nova_compute[259907]: 2025-10-01 16:46:49.446 2 DEBUG nova.network.neutron [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  1 12:46:49 np0005464891 nova_compute[259907]: 2025-10-01 16:46:49.447 2 DEBUG nova.compute.manager [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 12:46:49 np0005464891 nova_compute[259907]: 2025-10-01 16:46:49.948 2 DEBUG oslo_concurrency.processutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa 0bd6a299-e725-4c0f-81ed-726c1167dde0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.793s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.048 2 DEBUG nova.storage.rbd_utils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] resizing rbd image 0bd6a299-e725-4c0f-81ed-726c1167dde0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  1 12:46:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.381 2 DEBUG nova.objects.instance [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Lazy-loading 'migration_context' on Instance uuid 0bd6a299-e725-4c0f-81ed-726c1167dde0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.410 2 DEBUG nova.virt.libvirt.driver [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.411 2 DEBUG nova.virt.libvirt.driver [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Ensure instance console log exists: /var/lib/nova/instances/0bd6a299-e725-4c0f-81ed-726c1167dde0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.412 2 DEBUG oslo_concurrency.lockutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.412 2 DEBUG oslo_concurrency.lockutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.413 2 DEBUG oslo_concurrency.lockutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.416 2 DEBUG nova.virt.libvirt.driver [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'image_id': 'f01c1e7c-fea3-4433-a44a-d71153552c78'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.422 2 WARNING nova.virt.libvirt.driver [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.426 2 DEBUG nova.virt.libvirt.host [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.427 2 DEBUG nova.virt.libvirt.host [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.431 2 DEBUG nova.virt.libvirt.host [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.431 2 DEBUG nova.virt.libvirt.host [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.432 2 DEBUG nova.virt.libvirt.driver [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.432 2 DEBUG nova.virt.hardware [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.433 2 DEBUG nova.virt.hardware [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.433 2 DEBUG nova.virt.hardware [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.433 2 DEBUG nova.virt.hardware [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.434 2 DEBUG nova.virt.hardware [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.434 2 DEBUG nova.virt.hardware [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.434 2 DEBUG nova.virt.hardware [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.435 2 DEBUG nova.virt.hardware [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.435 2 DEBUG nova.virt.hardware [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.435 2 DEBUG nova.virt.hardware [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.436 2 DEBUG nova.virt.hardware [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.439 2 DEBUG oslo_concurrency.processutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:46:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1159: 321 pgs: 321 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.8 KiB/s wr, 49 op/s
Oct  1 12:46:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:46:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1513495818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.940 2 DEBUG oslo_concurrency.processutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.961 2 DEBUG nova.storage.rbd_utils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] rbd image 0bd6a299-e725-4c0f-81ed-726c1167dde0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:46:50 np0005464891 nova_compute[259907]: 2025-10-01 16:46:50.965 2 DEBUG oslo_concurrency.processutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:46:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:46:51 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3580414699' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:46:51 np0005464891 nova_compute[259907]: 2025-10-01 16:46:51.630 2 DEBUG oslo_concurrency.processutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.665s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:46:51 np0005464891 nova_compute[259907]: 2025-10-01 16:46:51.634 2 DEBUG nova.objects.instance [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Lazy-loading 'pci_devices' on Instance uuid 0bd6a299-e725-4c0f-81ed-726c1167dde0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:46:51 np0005464891 nova_compute[259907]: 2025-10-01 16:46:51.662 2 DEBUG nova.virt.libvirt.driver [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] End _get_guest_xml xml=<domain type="kvm">
Oct  1 12:46:51 np0005464891 nova_compute[259907]:  <uuid>0bd6a299-e725-4c0f-81ed-726c1167dde0</uuid>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:  <name>instance-00000003</name>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <nova:name>tempest-VolumesNegativeTest-instance-767144413</nova:name>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 16:46:50</nova:creationTime>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 12:46:51 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:        <nova:user uuid="29818e8feb43477ba8f23a2e69acd789">tempest-VolumesNegativeTest-731247870-project-member</nova:user>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:        <nova:project uuid="a31dec7dae8b4d86be99a6ad8e00d6bc">tempest-VolumesNegativeTest-731247870</nova:project>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <nova:root type="image" uuid="f01c1e7c-fea3-4433-a44a-d71153552c78"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <nova:ports/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <system>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <entry name="serial">0bd6a299-e725-4c0f-81ed-726c1167dde0</entry>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <entry name="uuid">0bd6a299-e725-4c0f-81ed-726c1167dde0</entry>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    </system>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:  <os>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:  </clock>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/0bd6a299-e725-4c0f-81ed-726c1167dde0_disk">
Oct  1 12:46:51 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:46:51 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/0bd6a299-e725-4c0f-81ed-726c1167dde0_disk.config">
Oct  1 12:46:51 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:46:51 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/0bd6a299-e725-4c0f-81ed-726c1167dde0/console.log" append="off"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    </serial>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <video>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 12:46:51 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 12:46:51 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:46:51 np0005464891 nova_compute[259907]: </domain>
Oct  1 12:46:51 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  1 12:46:51 np0005464891 nova_compute[259907]: 2025-10-01 16:46:51.717 2 DEBUG nova.virt.libvirt.driver [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  1 12:46:51 np0005464891 nova_compute[259907]: 2025-10-01 16:46:51.719 2 DEBUG nova.virt.libvirt.driver [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  1 12:46:51 np0005464891 nova_compute[259907]: 2025-10-01 16:46:51.721 2 INFO nova.virt.libvirt.driver [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Using config drive
Oct  1 12:46:51 np0005464891 nova_compute[259907]: 2025-10-01 16:46:51.757 2 DEBUG nova.storage.rbd_utils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] rbd image 0bd6a299-e725-4c0f-81ed-726c1167dde0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  1 12:46:51 np0005464891 nova_compute[259907]: 2025-10-01 16:46:51.912 2 INFO nova.virt.libvirt.driver [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Creating config drive at /var/lib/nova/instances/0bd6a299-e725-4c0f-81ed-726c1167dde0/disk.config
Oct  1 12:46:51 np0005464891 nova_compute[259907]: 2025-10-01 16:46:51.916 2 DEBUG oslo_concurrency.processutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0bd6a299-e725-4c0f-81ed-726c1167dde0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl_qw421r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 12:46:52 np0005464891 nova_compute[259907]: 2025-10-01 16:46:52.049 2 DEBUG oslo_concurrency.processutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0bd6a299-e725-4c0f-81ed-726c1167dde0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl_qw421r" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 12:46:52 np0005464891 nova_compute[259907]: 2025-10-01 16:46:52.089 2 DEBUG nova.storage.rbd_utils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] rbd image 0bd6a299-e725-4c0f-81ed-726c1167dde0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  1 12:46:52 np0005464891 nova_compute[259907]: 2025-10-01 16:46:52.094 2 DEBUG oslo_concurrency.processutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0bd6a299-e725-4c0f-81ed-726c1167dde0/disk.config 0bd6a299-e725-4c0f-81ed-726c1167dde0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 12:46:52 np0005464891 nova_compute[259907]: 2025-10-01 16:46:52.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:46:52 np0005464891 nova_compute[259907]: 2025-10-01 16:46:52.719 2 DEBUG oslo_concurrency.processutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0bd6a299-e725-4c0f-81ed-726c1167dde0/disk.config 0bd6a299-e725-4c0f-81ed-726c1167dde0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 12:46:52 np0005464891 nova_compute[259907]: 2025-10-01 16:46:52.720 2 INFO nova.virt.libvirt.driver [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Deleting local config drive /var/lib/nova/instances/0bd6a299-e725-4c0f-81ed-726c1167dde0/disk.config because it was imported into RBD.
Oct  1 12:46:52 np0005464891 systemd-machined[214891]: New machine qemu-3-instance-00000003.
Oct  1 12:46:52 np0005464891 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Oct  1 12:46:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1160: 321 pgs: 321 active+clean; 113 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 930 KiB/s wr, 119 op/s
Oct  1 12:46:53 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:46:53.221 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  1 12:46:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:46:53 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/705069294' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:46:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:46:53 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/705069294' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:46:54 np0005464891 nova_compute[259907]: 2025-10-01 16:46:54.483 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337214.4825573, 0bd6a299-e725-4c0f-81ed-726c1167dde0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  1 12:46:54 np0005464891 nova_compute[259907]: 2025-10-01 16:46:54.483 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] VM Resumed (Lifecycle Event)
Oct  1 12:46:54 np0005464891 nova_compute[259907]: 2025-10-01 16:46:54.486 2 DEBUG nova.compute.manager [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  1 12:46:54 np0005464891 nova_compute[259907]: 2025-10-01 16:46:54.486 2 DEBUG nova.virt.libvirt.driver [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  1 12:46:54 np0005464891 nova_compute[259907]: 2025-10-01 16:46:54.493 2 INFO nova.virt.libvirt.driver [-] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Instance spawned successfully.
Oct  1 12:46:54 np0005464891 nova_compute[259907]: 2025-10-01 16:46:54.495 2 DEBUG nova.virt.libvirt.driver [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  1 12:46:54 np0005464891 nova_compute[259907]: 2025-10-01 16:46:54.521 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 12:46:54 np0005464891 nova_compute[259907]: 2025-10-01 16:46:54.527 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  1 12:46:54 np0005464891 nova_compute[259907]: 2025-10-01 16:46:54.532 2 DEBUG nova.virt.libvirt.driver [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 12:46:54 np0005464891 nova_compute[259907]: 2025-10-01 16:46:54.532 2 DEBUG nova.virt.libvirt.driver [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 12:46:54 np0005464891 nova_compute[259907]: 2025-10-01 16:46:54.533 2 DEBUG nova.virt.libvirt.driver [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 12:46:54 np0005464891 nova_compute[259907]: 2025-10-01 16:46:54.533 2 DEBUG nova.virt.libvirt.driver [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 12:46:54 np0005464891 nova_compute[259907]: 2025-10-01 16:46:54.534 2 DEBUG nova.virt.libvirt.driver [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 12:46:54 np0005464891 nova_compute[259907]: 2025-10-01 16:46:54.535 2 DEBUG nova.virt.libvirt.driver [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 12:46:54 np0005464891 nova_compute[259907]: 2025-10-01 16:46:54.570 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  1 12:46:54 np0005464891 nova_compute[259907]: 2025-10-01 16:46:54.571 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337214.485418, 0bd6a299-e725-4c0f-81ed-726c1167dde0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  1 12:46:54 np0005464891 nova_compute[259907]: 2025-10-01 16:46:54.571 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] VM Started (Lifecycle Event)
Oct  1 12:46:54 np0005464891 nova_compute[259907]: 2025-10-01 16:46:54.630 2 INFO nova.compute.manager [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Took 5.71 seconds to spawn the instance on the hypervisor.
Oct  1 12:46:54 np0005464891 nova_compute[259907]: 2025-10-01 16:46:54.631 2 DEBUG nova.compute.manager [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 12:46:54 np0005464891 nova_compute[259907]: 2025-10-01 16:46:54.634 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 12:46:54 np0005464891 nova_compute[259907]: 2025-10-01 16:46:54.643 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  1 12:46:54 np0005464891 nova_compute[259907]: 2025-10-01 16:46:54.717 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  1 12:46:54 np0005464891 nova_compute[259907]: 2025-10-01 16:46:54.755 2 INFO nova.compute.manager [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Took 7.08 seconds to build instance.
Oct  1 12:46:54 np0005464891 nova_compute[259907]: 2025-10-01 16:46:54.788 2 DEBUG oslo_concurrency.lockutils [None req-7bfbcb11-a9d7-4a3f-bdd6-3d6525652e73 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Lock "0bd6a299-e725-4c0f-81ed-726c1167dde0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:46:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1161: 321 pgs: 321 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 1.8 MiB/s wr, 123 op/s
Oct  1 12:46:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:46:55 np0005464891 nova_compute[259907]: 2025-10-01 16:46:55.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:46:56 np0005464891 nova_compute[259907]: 2025-10-01 16:46:56.698 2 DEBUG oslo_concurrency.lockutils [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Acquiring lock "0bd6a299-e725-4c0f-81ed-726c1167dde0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:46:56 np0005464891 nova_compute[259907]: 2025-10-01 16:46:56.699 2 DEBUG oslo_concurrency.lockutils [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Lock "0bd6a299-e725-4c0f-81ed-726c1167dde0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:46:56 np0005464891 nova_compute[259907]: 2025-10-01 16:46:56.699 2 DEBUG oslo_concurrency.lockutils [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Acquiring lock "0bd6a299-e725-4c0f-81ed-726c1167dde0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:46:56 np0005464891 nova_compute[259907]: 2025-10-01 16:46:56.700 2 DEBUG oslo_concurrency.lockutils [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Lock "0bd6a299-e725-4c0f-81ed-726c1167dde0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:46:56 np0005464891 nova_compute[259907]: 2025-10-01 16:46:56.700 2 DEBUG oslo_concurrency.lockutils [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Lock "0bd6a299-e725-4c0f-81ed-726c1167dde0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:46:56 np0005464891 nova_compute[259907]: 2025-10-01 16:46:56.702 2 INFO nova.compute.manager [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Terminating instance
Oct  1 12:46:56 np0005464891 nova_compute[259907]: 2025-10-01 16:46:56.704 2 DEBUG oslo_concurrency.lockutils [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Acquiring lock "refresh_cache-0bd6a299-e725-4c0f-81ed-726c1167dde0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  1 12:46:56 np0005464891 nova_compute[259907]: 2025-10-01 16:46:56.704 2 DEBUG oslo_concurrency.lockutils [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Acquired lock "refresh_cache-0bd6a299-e725-4c0f-81ed-726c1167dde0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  1 12:46:56 np0005464891 nova_compute[259907]: 2025-10-01 16:46:56.705 2 DEBUG nova.network.neutron [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  1 12:46:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1162: 321 pgs: 321 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 1.8 MiB/s wr, 114 op/s
Oct  1 12:46:56 np0005464891 nova_compute[259907]: 2025-10-01 16:46:56.874 2 DEBUG nova.network.neutron [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  1 12:46:57 np0005464891 nova_compute[259907]: 2025-10-01 16:46:57.139 2 DEBUG nova.network.neutron [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  1 12:46:57 np0005464891 nova_compute[259907]: 2025-10-01 16:46:57.168 2 DEBUG oslo_concurrency.lockutils [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Releasing lock "refresh_cache-0bd6a299-e725-4c0f-81ed-726c1167dde0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  1 12:46:57 np0005464891 nova_compute[259907]: 2025-10-01 16:46:57.169 2 DEBUG nova.compute.manager [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  1 12:46:57 np0005464891 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Oct  1 12:46:57 np0005464891 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 4.182s CPU time.
Oct  1 12:46:57 np0005464891 systemd-machined[214891]: Machine qemu-3-instance-00000003 terminated.
Oct  1 12:46:57 np0005464891 nova_compute[259907]: 2025-10-01 16:46:57.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:46:57 np0005464891 podman[276260]: 2025-10-01 16:46:57.372044564 +0000 UTC m=+0.115622706 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct  1 12:46:57 np0005464891 nova_compute[259907]: 2025-10-01 16:46:57.397 2 INFO nova.virt.libvirt.driver [-] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Instance destroyed successfully.
Oct  1 12:46:57 np0005464891 nova_compute[259907]: 2025-10-01 16:46:57.398 2 DEBUG nova.objects.instance [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Lazy-loading 'resources' on Instance uuid 0bd6a299-e725-4c0f-81ed-726c1167dde0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  1 12:46:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1163: 321 pgs: 321 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 209 op/s
Oct  1 12:46:59 np0005464891 nova_compute[259907]: 2025-10-01 16:46:59.475 2 INFO nova.virt.libvirt.driver [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Deleting instance files /var/lib/nova/instances/0bd6a299-e725-4c0f-81ed-726c1167dde0_del
Oct  1 12:46:59 np0005464891 nova_compute[259907]: 2025-10-01 16:46:59.476 2 INFO nova.virt.libvirt.driver [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Deletion of /var/lib/nova/instances/0bd6a299-e725-4c0f-81ed-726c1167dde0_del complete
Oct  1 12:46:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:46:59 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3056877327' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:46:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:46:59 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3056877327' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:46:59 np0005464891 nova_compute[259907]: 2025-10-01 16:46:59.574 2 INFO nova.compute.manager [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Took 2.40 seconds to destroy the instance on the hypervisor.
Oct  1 12:46:59 np0005464891 nova_compute[259907]: 2025-10-01 16:46:59.575 2 DEBUG oslo.service.loopingcall [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  1 12:46:59 np0005464891 nova_compute[259907]: 2025-10-01 16:46:59.575 2 DEBUG nova.compute.manager [-] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  1 12:46:59 np0005464891 nova_compute[259907]: 2025-10-01 16:46:59.575 2 DEBUG nova.network.neutron [-] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  1 12:46:59 np0005464891 nova_compute[259907]: 2025-10-01 16:46:59.857 2 DEBUG nova.network.neutron [-] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  1 12:46:59 np0005464891 nova_compute[259907]: 2025-10-01 16:46:59.889 2 DEBUG nova.network.neutron [-] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  1 12:46:59 np0005464891 nova_compute[259907]: 2025-10-01 16:46:59.930 2 INFO nova.compute.manager [-] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Took 0.35 seconds to deallocate network for instance.
Oct  1 12:46:59 np0005464891 nova_compute[259907]: 2025-10-01 16:46:59.995 2 DEBUG oslo_concurrency.lockutils [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:46:59 np0005464891 nova_compute[259907]: 2025-10-01 16:46:59.996 2 DEBUG oslo_concurrency.lockutils [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:47:00 np0005464891 nova_compute[259907]: 2025-10-01 16:47:00.073 2 DEBUG oslo_concurrency.processutils [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 12:47:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:47:00 np0005464891 nova_compute[259907]: 2025-10-01 16:47:00.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:47:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:47:00 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3727389896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:47:00 np0005464891 nova_compute[259907]: 2025-10-01 16:47:00.558 2 DEBUG oslo_concurrency.processutils [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 12:47:00 np0005464891 nova_compute[259907]: 2025-10-01 16:47:00.567 2 DEBUG nova.compute.provider_tree [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  1 12:47:00 np0005464891 nova_compute[259907]: 2025-10-01 16:47:00.588 2 DEBUG nova.scheduler.client.report [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  1 12:47:00 np0005464891 nova_compute[259907]: 2025-10-01 16:47:00.685 2 DEBUG oslo_concurrency.lockutils [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:47:00 np0005464891 nova_compute[259907]: 2025-10-01 16:47:00.718 2 INFO nova.scheduler.client.report [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Deleted allocations for instance 0bd6a299-e725-4c0f-81ed-726c1167dde0
Oct  1 12:47:00 np0005464891 nova_compute[259907]: 2025-10-01 16:47:00.801 2 DEBUG oslo_concurrency.lockutils [None req-321559ed-dc6f-44bf-90d3-422f1fba9c90 29818e8feb43477ba8f23a2e69acd789 a31dec7dae8b4d86be99a6ad8e00d6bc - - default default] Lock "0bd6a299-e725-4c0f-81ed-726c1167dde0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:47:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1164: 321 pgs: 321 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 181 op/s
Oct  1 12:47:00 np0005464891 podman[276331]: 2025-10-01 16:47:00.954529742 +0000 UTC m=+0.065179913 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  1 12:47:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Oct  1 12:47:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Oct  1 12:47:02 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Oct  1 12:47:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:47:02 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1390759357' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:47:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:47:02 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1390759357' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:47:02 np0005464891 nova_compute[259907]: 2025-10-01 16:47:02.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:47:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1166: 321 pgs: 321 active+clean; 109 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 200 op/s
Oct  1 12:47:03 np0005464891 podman[276352]: 2025-10-01 16:47:03.958635642 +0000 UTC m=+0.074215892 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001)
Oct  1 12:47:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1167: 321 pgs: 321 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 KiB/s wr, 183 op/s
Oct  1 12:47:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Oct  1 12:47:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Oct  1 12:47:05 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Oct  1 12:47:05 np0005464891 nova_compute[259907]: 2025-10-01 16:47:05.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:47:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:47:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1124282384' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:47:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:47:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1124282384' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:47:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 4.2 KiB/s wr, 87 op/s
Oct  1 12:47:07 np0005464891 nova_compute[259907]: 2025-10-01 16:47:07.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:47:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1170: 321 pgs: 321 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 5.9 KiB/s wr, 134 op/s
Oct  1 12:47:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Oct  1 12:47:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Oct  1 12:47:10 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Oct  1 12:47:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:47:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Oct  1 12:47:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Oct  1 12:47:10 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Oct  1 12:47:10 np0005464891 nova_compute[259907]: 2025-10-01 16:47:10.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:47:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1173: 321 pgs: 321 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 2.2 KiB/s wr, 62 op/s
Oct  1 12:47:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:47:12
Oct  1 12:47:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:47:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:47:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'volumes', '.mgr', 'default.rgw.meta', 'vms', 'images']
Oct  1 12:47:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:47:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:47:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:47:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:47:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:47:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:47:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:47:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:47:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:47:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:47:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:47:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:47:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:47:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:47:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:47:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:47:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:47:12 np0005464891 nova_compute[259907]: 2025-10-01 16:47:12.395 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759337217.393069, 0bd6a299-e725-4c0f-81ed-726c1167dde0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  1 12:47:12 np0005464891 nova_compute[259907]: 2025-10-01 16:47:12.396 2 INFO nova.compute.manager [-] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] VM Stopped (Lifecycle Event)
Oct  1 12:47:12 np0005464891 nova_compute[259907]: 2025-10-01 16:47:12.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:47:12 np0005464891 nova_compute[259907]: 2025-10-01 16:47:12.446 2 DEBUG nova.compute.manager [None req-0f8a42d9-4e1d-462e-8cf7-847495133d97 - - - - - -] [instance: 0bd6a299-e725-4c0f-81ed-726c1167dde0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 12:47:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:12.450 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:47:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:12.450 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:47:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:12.450 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:47:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 3.8 KiB/s wr, 62 op/s
Oct  1 12:47:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  1 12:47:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 12:47:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:47:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:47:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:47:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:47:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:47:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:47:13 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev ff2bc1b2-fb02-4e91-8890-5fb93f01717d does not exist
Oct  1 12:47:13 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 26c8f67b-cb14-477b-99f7-dcf03ebfcfd3 does not exist
Oct  1 12:47:13 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 25f8e257-adad-4e02-84ff-ad8c07fd538a does not exist
Oct  1 12:47:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:47:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:47:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:47:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:47:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:47:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:47:13 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 12:47:13 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:47:13 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:47:13 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:47:14 np0005464891 podman[276644]: 2025-10-01 16:47:13.909708414 +0000 UTC m=+0.044217273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:47:14 np0005464891 podman[276644]: 2025-10-01 16:47:14.126294146 +0000 UTC m=+0.260802925 container create cf8aedc734549238f5f2b631b8e1c7cf59a96b0815044b3d41351f164859aea0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_goodall, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 12:47:14 np0005464891 systemd[1]: Started libpod-conmon-cf8aedc734549238f5f2b631b8e1c7cf59a96b0815044b3d41351f164859aea0.scope.
Oct  1 12:47:14 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:47:14 np0005464891 podman[276644]: 2025-10-01 16:47:14.56665933 +0000 UTC m=+0.701168129 container init cf8aedc734549238f5f2b631b8e1c7cf59a96b0815044b3d41351f164859aea0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_goodall, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:47:14 np0005464891 podman[276644]: 2025-10-01 16:47:14.575836354 +0000 UTC m=+0.710345143 container start cf8aedc734549238f5f2b631b8e1c7cf59a96b0815044b3d41351f164859aea0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_goodall, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:47:14 np0005464891 trusting_goodall[276661]: 167 167
Oct  1 12:47:14 np0005464891 systemd[1]: libpod-cf8aedc734549238f5f2b631b8e1c7cf59a96b0815044b3d41351f164859aea0.scope: Deactivated successfully.
Oct  1 12:47:14 np0005464891 podman[276644]: 2025-10-01 16:47:14.600504086 +0000 UTC m=+0.735012915 container attach cf8aedc734549238f5f2b631b8e1c7cf59a96b0815044b3d41351f164859aea0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_goodall, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 12:47:14 np0005464891 podman[276644]: 2025-10-01 16:47:14.602781298 +0000 UTC m=+0.737290087 container died cf8aedc734549238f5f2b631b8e1c7cf59a96b0815044b3d41351f164859aea0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_goodall, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:47:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1175: 321 pgs: 321 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 4.6 KiB/s wr, 75 op/s
Oct  1 12:47:14 np0005464891 systemd[1]: var-lib-containers-storage-overlay-1dd7b728303f621e148a32e7e8a95cdb78431026f8f96d3a8f2a19c062877713-merged.mount: Deactivated successfully.
Oct  1 12:47:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Oct  1 12:47:15 np0005464891 nova_compute[259907]: 2025-10-01 16:47:15.030 2 DEBUG oslo_concurrency.lockutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Acquiring lock "5f5bee34-d022-4b27-8233-8c05297df26c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:47:15 np0005464891 nova_compute[259907]: 2025-10-01 16:47:15.033 2 DEBUG oslo_concurrency.lockutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "5f5bee34-d022-4b27-8233-8c05297df26c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:47:15 np0005464891 nova_compute[259907]: 2025-10-01 16:47:15.147 2 DEBUG nova.compute.manager [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  1 12:47:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Oct  1 12:47:15 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Oct  1 12:47:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:47:15 np0005464891 nova_compute[259907]: 2025-10-01 16:47:15.412 2 DEBUG oslo_concurrency.lockutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:47:15 np0005464891 nova_compute[259907]: 2025-10-01 16:47:15.413 2 DEBUG oslo_concurrency.lockutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:47:15 np0005464891 nova_compute[259907]: 2025-10-01 16:47:15.422 2 DEBUG nova.virt.hardware [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  1 12:47:15 np0005464891 nova_compute[259907]: 2025-10-01 16:47:15.422 2 INFO nova.compute.claims [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Claim successful on node compute-0.ctlplane.example.com
Oct  1 12:47:15 np0005464891 podman[276644]: 2025-10-01 16:47:15.490713885 +0000 UTC m=+1.625222714 container remove cf8aedc734549238f5f2b631b8e1c7cf59a96b0815044b3d41351f164859aea0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_goodall, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:47:15 np0005464891 nova_compute[259907]: 2025-10-01 16:47:15.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:47:15 np0005464891 systemd[1]: libpod-conmon-cf8aedc734549238f5f2b631b8e1c7cf59a96b0815044b3d41351f164859aea0.scope: Deactivated successfully.
Oct  1 12:47:15 np0005464891 nova_compute[259907]: 2025-10-01 16:47:15.722 2 DEBUG oslo_concurrency.processutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 12:47:15 np0005464891 podman[276686]: 2025-10-01 16:47:15.767603793 +0000 UTC m=+0.126272199 container create 963e5db7a874e641a64577f9f8a8b9dbba18416ba4b9af475730da6702151f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:47:15 np0005464891 podman[276686]: 2025-10-01 16:47:15.68530576 +0000 UTC m=+0.043974246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:47:15 np0005464891 systemd[1]: Started libpod-conmon-963e5db7a874e641a64577f9f8a8b9dbba18416ba4b9af475730da6702151f4b.scope.
Oct  1 12:47:15 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:47:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e23a1a75a427f63093d3106f8bf751733a56139879f349cf6760257d4e7ed8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:47:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e23a1a75a427f63093d3106f8bf751733a56139879f349cf6760257d4e7ed8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:47:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e23a1a75a427f63093d3106f8bf751733a56139879f349cf6760257d4e7ed8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:47:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e23a1a75a427f63093d3106f8bf751733a56139879f349cf6760257d4e7ed8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:47:15 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e23a1a75a427f63093d3106f8bf751733a56139879f349cf6760257d4e7ed8f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:47:16 np0005464891 podman[276686]: 2025-10-01 16:47:16.135327072 +0000 UTC m=+0.493995498 container init 963e5db7a874e641a64577f9f8a8b9dbba18416ba4b9af475730da6702151f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:47:16 np0005464891 podman[276686]: 2025-10-01 16:47:16.144490924 +0000 UTC m=+0.503159340 container start 963e5db7a874e641a64577f9f8a8b9dbba18416ba4b9af475730da6702151f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sammet, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:47:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:47:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2854141109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:47:16 np0005464891 podman[276686]: 2025-10-01 16:47:16.234276785 +0000 UTC m=+0.592945231 container attach 963e5db7a874e641a64577f9f8a8b9dbba18416ba4b9af475730da6702151f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 12:47:16 np0005464891 nova_compute[259907]: 2025-10-01 16:47:16.241 2 DEBUG oslo_concurrency.processutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:47:16 np0005464891 nova_compute[259907]: 2025-10-01 16:47:16.253 2 DEBUG nova.compute.provider_tree [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:47:16 np0005464891 nova_compute[259907]: 2025-10-01 16:47:16.282 2 DEBUG nova.scheduler.client.report [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:47:16 np0005464891 nova_compute[259907]: 2025-10-01 16:47:16.344 2 DEBUG oslo_concurrency.lockutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:16 np0005464891 nova_compute[259907]: 2025-10-01 16:47:16.345 2 DEBUG nova.compute.manager [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 12:47:16 np0005464891 nova_compute[259907]: 2025-10-01 16:47:16.448 2 DEBUG nova.compute.manager [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 12:47:16 np0005464891 nova_compute[259907]: 2025-10-01 16:47:16.449 2 DEBUG nova.network.neutron [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 12:47:16 np0005464891 nova_compute[259907]: 2025-10-01 16:47:16.577 2 INFO nova.virt.libvirt.driver [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 12:47:16 np0005464891 nova_compute[259907]: 2025-10-01 16:47:16.688 2 DEBUG nova.compute.manager [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 12:47:16 np0005464891 nova_compute[259907]: 2025-10-01 16:47:16.824 2 DEBUG nova.policy [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '85daab3d4ec44eb885d793a27894aab3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b9a68f4cae7c4848af4537abf8f3a937', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 12:47:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1177: 321 pgs: 321 active+clean; 88 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 3.6 KiB/s wr, 34 op/s
Oct  1 12:47:16 np0005464891 podman[276731]: 2025-10-01 16:47:16.948430041 +0000 UTC m=+0.059614597 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent)
Oct  1 12:47:17 np0005464891 nova_compute[259907]: 2025-10-01 16:47:17.074 2 DEBUG nova.compute.manager [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 12:47:17 np0005464891 nova_compute[259907]: 2025-10-01 16:47:17.078 2 DEBUG nova.virt.libvirt.driver [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 12:47:17 np0005464891 nova_compute[259907]: 2025-10-01 16:47:17.079 2 INFO nova.virt.libvirt.driver [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Creating image(s)#033[00m
Oct  1 12:47:17 np0005464891 nova_compute[259907]: 2025-10-01 16:47:17.114 2 DEBUG nova.storage.rbd_utils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] rbd image 5f5bee34-d022-4b27-8233-8c05297df26c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:47:17 np0005464891 nova_compute[259907]: 2025-10-01 16:47:17.147 2 DEBUG nova.storage.rbd_utils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] rbd image 5f5bee34-d022-4b27-8233-8c05297df26c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:47:17 np0005464891 nova_compute[259907]: 2025-10-01 16:47:17.179 2 DEBUG nova.storage.rbd_utils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] rbd image 5f5bee34-d022-4b27-8233-8c05297df26c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:47:17 np0005464891 nova_compute[259907]: 2025-10-01 16:47:17.183 2 DEBUG oslo_concurrency.processutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:47:17 np0005464891 nova_compute[259907]: 2025-10-01 16:47:17.253 2 DEBUG oslo_concurrency.processutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:47:17 np0005464891 nova_compute[259907]: 2025-10-01 16:47:17.256 2 DEBUG oslo_concurrency.lockutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Acquiring lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:17 np0005464891 nova_compute[259907]: 2025-10-01 16:47:17.257 2 DEBUG oslo_concurrency.lockutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:17 np0005464891 nova_compute[259907]: 2025-10-01 16:47:17.257 2 DEBUG oslo_concurrency.lockutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:17 np0005464891 nova_compute[259907]: 2025-10-01 16:47:17.285 2 DEBUG nova.storage.rbd_utils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] rbd image 5f5bee34-d022-4b27-8233-8c05297df26c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:47:17 np0005464891 nova_compute[259907]: 2025-10-01 16:47:17.290 2 DEBUG oslo_concurrency.processutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa 5f5bee34-d022-4b27-8233-8c05297df26c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:47:17 np0005464891 thirsty_sammet[276703]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:47:17 np0005464891 thirsty_sammet[276703]: --> relative data size: 1.0
Oct  1 12:47:17 np0005464891 thirsty_sammet[276703]: --> All data devices are unavailable
Oct  1 12:47:17 np0005464891 systemd[1]: libpod-963e5db7a874e641a64577f9f8a8b9dbba18416ba4b9af475730da6702151f4b.scope: Deactivated successfully.
Oct  1 12:47:17 np0005464891 nova_compute[259907]: 2025-10-01 16:47:17.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:17 np0005464891 systemd[1]: libpod-963e5db7a874e641a64577f9f8a8b9dbba18416ba4b9af475730da6702151f4b.scope: Consumed 1.089s CPU time.
Oct  1 12:47:17 np0005464891 podman[276863]: 2025-10-01 16:47:17.530138459 +0000 UTC m=+0.030984867 container died 963e5db7a874e641a64577f9f8a8b9dbba18416ba4b9af475730da6702151f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sammet, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:47:17 np0005464891 systemd[1]: var-lib-containers-storage-overlay-1e23a1a75a427f63093d3106f8bf751733a56139879f349cf6760257d4e7ed8f-merged.mount: Deactivated successfully.
Oct  1 12:47:17 np0005464891 podman[276863]: 2025-10-01 16:47:17.709069641 +0000 UTC m=+0.209916059 container remove 963e5db7a874e641a64577f9f8a8b9dbba18416ba4b9af475730da6702151f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sammet, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 12:47:17 np0005464891 systemd[1]: libpod-conmon-963e5db7a874e641a64577f9f8a8b9dbba18416ba4b9af475730da6702151f4b.scope: Deactivated successfully.
Oct  1 12:47:18 np0005464891 nova_compute[259907]: 2025-10-01 16:47:18.052 2 DEBUG oslo_concurrency.processutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa 5f5bee34-d022-4b27-8233-8c05297df26c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.762s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:47:18 np0005464891 nova_compute[259907]: 2025-10-01 16:47:18.105 2 DEBUG nova.network.neutron [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Successfully created port: d93019e5-5f92-4987-9169-bc28ee796c9b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 12:47:18 np0005464891 nova_compute[259907]: 2025-10-01 16:47:18.162 2 DEBUG nova.storage.rbd_utils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] resizing rbd image 5f5bee34-d022-4b27-8233-8c05297df26c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  1 12:47:18 np0005464891 nova_compute[259907]: 2025-10-01 16:47:18.437 2 DEBUG nova.objects.instance [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lazy-loading 'migration_context' on Instance uuid 5f5bee34-d022-4b27-8233-8c05297df26c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:47:18 np0005464891 nova_compute[259907]: 2025-10-01 16:47:18.462 2 DEBUG nova.virt.libvirt.driver [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  1 12:47:18 np0005464891 nova_compute[259907]: 2025-10-01 16:47:18.463 2 DEBUG nova.virt.libvirt.driver [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Ensure instance console log exists: /var/lib/nova/instances/5f5bee34-d022-4b27-8233-8c05297df26c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 12:47:18 np0005464891 nova_compute[259907]: 2025-10-01 16:47:18.464 2 DEBUG oslo_concurrency.lockutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:18 np0005464891 nova_compute[259907]: 2025-10-01 16:47:18.464 2 DEBUG oslo_concurrency.lockutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:18 np0005464891 nova_compute[259907]: 2025-10-01 16:47:18.464 2 DEBUG oslo_concurrency.lockutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:18 np0005464891 podman[277093]: 2025-10-01 16:47:18.557748904 +0000 UTC m=+0.083302841 container create 3bfb736186b08c2533177901e22903e0b0a6f1abfdd167336150da633b1de574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 12:47:18 np0005464891 podman[277093]: 2025-10-01 16:47:18.499018522 +0000 UTC m=+0.024572479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:47:18 np0005464891 systemd[1]: Started libpod-conmon-3bfb736186b08c2533177901e22903e0b0a6f1abfdd167336150da633b1de574.scope.
Oct  1 12:47:18 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:47:18 np0005464891 podman[277093]: 2025-10-01 16:47:18.794780192 +0000 UTC m=+0.320334129 container init 3bfb736186b08c2533177901e22903e0b0a6f1abfdd167336150da633b1de574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  1 12:47:18 np0005464891 podman[277093]: 2025-10-01 16:47:18.807271327 +0000 UTC m=+0.332825274 container start 3bfb736186b08c2533177901e22903e0b0a6f1abfdd167336150da633b1de574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_williamson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 12:47:18 np0005464891 affectionate_williamson[277108]: 167 167
Oct  1 12:47:18 np0005464891 systemd[1]: libpod-3bfb736186b08c2533177901e22903e0b0a6f1abfdd167336150da633b1de574.scope: Deactivated successfully.
Oct  1 12:47:18 np0005464891 conmon[277108]: conmon 3bfb736186b08c253317 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3bfb736186b08c2533177901e22903e0b0a6f1abfdd167336150da633b1de574.scope/container/memory.events
Oct  1 12:47:18 np0005464891 podman[277093]: 2025-10-01 16:47:18.837650276 +0000 UTC m=+0.363204253 container attach 3bfb736186b08c2533177901e22903e0b0a6f1abfdd167336150da633b1de574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_williamson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:47:18 np0005464891 podman[277093]: 2025-10-01 16:47:18.83853599 +0000 UTC m=+0.364089927 container died 3bfb736186b08c2533177901e22903e0b0a6f1abfdd167336150da633b1de574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_williamson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:47:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1178: 321 pgs: 321 active+clean; 105 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 1.3 MiB/s wr, 67 op/s
Oct  1 12:47:18 np0005464891 systemd[1]: var-lib-containers-storage-overlay-95d1fd698a595f7fc5735494b814f93def4a051b125677430edddde396e92fab-merged.mount: Deactivated successfully.
Oct  1 12:47:19 np0005464891 podman[277093]: 2025-10-01 16:47:19.232077221 +0000 UTC m=+0.757631178 container remove 3bfb736186b08c2533177901e22903e0b0a6f1abfdd167336150da633b1de574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  1 12:47:19 np0005464891 systemd[1]: libpod-conmon-3bfb736186b08c2533177901e22903e0b0a6f1abfdd167336150da633b1de574.scope: Deactivated successfully.
Oct  1 12:47:19 np0005464891 podman[277136]: 2025-10-01 16:47:19.414954952 +0000 UTC m=+0.028228010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:47:19 np0005464891 podman[277136]: 2025-10-01 16:47:19.519921322 +0000 UTC m=+0.133194360 container create 01422fbc019a2ac88617e70cc067b1263911c6f1c09237fec1fc6a2aa218f2ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:47:19 np0005464891 systemd[1]: Started libpod-conmon-01422fbc019a2ac88617e70cc067b1263911c6f1c09237fec1fc6a2aa218f2ed.scope.
Oct  1 12:47:19 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:47:19 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b9a29b66343c6c11e3563510a03c2203cc10107f331580a2d171ed081238e74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:47:19 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b9a29b66343c6c11e3563510a03c2203cc10107f331580a2d171ed081238e74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:47:19 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b9a29b66343c6c11e3563510a03c2203cc10107f331580a2d171ed081238e74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:47:19 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b9a29b66343c6c11e3563510a03c2203cc10107f331580a2d171ed081238e74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:47:19 np0005464891 podman[277136]: 2025-10-01 16:47:19.75481484 +0000 UTC m=+0.368087928 container init 01422fbc019a2ac88617e70cc067b1263911c6f1c09237fec1fc6a2aa218f2ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_morse, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:47:19 np0005464891 podman[277136]: 2025-10-01 16:47:19.766416571 +0000 UTC m=+0.379689619 container start 01422fbc019a2ac88617e70cc067b1263911c6f1c09237fec1fc6a2aa218f2ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_morse, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:47:19 np0005464891 podman[277136]: 2025-10-01 16:47:19.896551795 +0000 UTC m=+0.509824803 container attach 01422fbc019a2ac88617e70cc067b1263911c6f1c09237fec1fc6a2aa218f2ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 12:47:20 np0005464891 nova_compute[259907]: 2025-10-01 16:47:20.066 2 DEBUG nova.network.neutron [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Successfully updated port: d93019e5-5f92-4987-9169-bc28ee796c9b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 12:47:20 np0005464891 nova_compute[259907]: 2025-10-01 16:47:20.125 2 DEBUG oslo_concurrency.lockutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Acquiring lock "refresh_cache-5f5bee34-d022-4b27-8233-8c05297df26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:47:20 np0005464891 nova_compute[259907]: 2025-10-01 16:47:20.125 2 DEBUG oslo_concurrency.lockutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Acquired lock "refresh_cache-5f5bee34-d022-4b27-8233-8c05297df26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:47:20 np0005464891 nova_compute[259907]: 2025-10-01 16:47:20.125 2 DEBUG nova.network.neutron [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 12:47:20 np0005464891 nova_compute[259907]: 2025-10-01 16:47:20.284 2 DEBUG nova.compute.manager [req-76401fa8-89a2-42bc-8bbd-131d9ec8ec26 req-0a4314e0-7dd2-4104-a7ea-8e68926bd20d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Received event network-changed-d93019e5-5f92-4987-9169-bc28ee796c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:47:20 np0005464891 nova_compute[259907]: 2025-10-01 16:47:20.285 2 DEBUG nova.compute.manager [req-76401fa8-89a2-42bc-8bbd-131d9ec8ec26 req-0a4314e0-7dd2-4104-a7ea-8e68926bd20d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Refreshing instance network info cache due to event network-changed-d93019e5-5f92-4987-9169-bc28ee796c9b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:47:20 np0005464891 nova_compute[259907]: 2025-10-01 16:47:20.286 2 DEBUG oslo_concurrency.lockutils [req-76401fa8-89a2-42bc-8bbd-131d9ec8ec26 req-0a4314e0-7dd2-4104-a7ea-8e68926bd20d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-5f5bee34-d022-4b27-8233-8c05297df26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:47:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:47:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Oct  1 12:47:20 np0005464891 nova_compute[259907]: 2025-10-01 16:47:20.448 2 DEBUG nova.network.neutron [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 12:47:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Oct  1 12:47:20 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Oct  1 12:47:20 np0005464891 nova_compute[259907]: 2025-10-01 16:47:20.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:20 np0005464891 recursing_morse[277152]: {
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:    "0": [
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:        {
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "devices": [
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "/dev/loop3"
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            ],
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "lv_name": "ceph_lv0",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "lv_size": "21470642176",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "name": "ceph_lv0",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "tags": {
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.cluster_name": "ceph",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.crush_device_class": "",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.encrypted": "0",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.osd_id": "0",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.type": "block",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.vdo": "0"
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            },
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "type": "block",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "vg_name": "ceph_vg0"
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:        }
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:    ],
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:    "1": [
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:        {
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "devices": [
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "/dev/loop4"
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            ],
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "lv_name": "ceph_lv1",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "lv_size": "21470642176",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "name": "ceph_lv1",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "tags": {
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.cluster_name": "ceph",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.crush_device_class": "",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.encrypted": "0",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.osd_id": "1",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.type": "block",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.vdo": "0"
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            },
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "type": "block",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "vg_name": "ceph_vg1"
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:        }
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:    ],
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:    "2": [
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:        {
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "devices": [
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "/dev/loop5"
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            ],
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "lv_name": "ceph_lv2",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "lv_size": "21470642176",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "name": "ceph_lv2",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "tags": {
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.cluster_name": "ceph",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.crush_device_class": "",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.encrypted": "0",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.osd_id": "2",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.type": "block",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:                "ceph.vdo": "0"
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            },
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "type": "block",
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:            "vg_name": "ceph_vg2"
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:        }
Oct  1 12:47:20 np0005464891 recursing_morse[277152]:    ]
Oct  1 12:47:20 np0005464891 recursing_morse[277152]: }
Oct  1 12:47:20 np0005464891 systemd[1]: libpod-01422fbc019a2ac88617e70cc067b1263911c6f1c09237fec1fc6a2aa218f2ed.scope: Deactivated successfully.
Oct  1 12:47:20 np0005464891 podman[277136]: 2025-10-01 16:47:20.650721737 +0000 UTC m=+1.263994745 container died 01422fbc019a2ac88617e70cc067b1263911c6f1c09237fec1fc6a2aa218f2ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_morse, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:47:20 np0005464891 systemd[1]: var-lib-containers-storage-overlay-3b9a29b66343c6c11e3563510a03c2203cc10107f331580a2d171ed081238e74-merged.mount: Deactivated successfully.
Oct  1 12:47:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1180: 321 pgs: 321 active+clean; 105 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 1.3 MiB/s wr, 60 op/s
Oct  1 12:47:20 np0005464891 podman[277136]: 2025-10-01 16:47:20.872055781 +0000 UTC m=+1.485328799 container remove 01422fbc019a2ac88617e70cc067b1263911c6f1c09237fec1fc6a2aa218f2ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_morse, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 12:47:20 np0005464891 systemd[1]: libpod-conmon-01422fbc019a2ac88617e70cc067b1263911c6f1c09237fec1fc6a2aa218f2ed.scope: Deactivated successfully.
Oct  1 12:47:21 np0005464891 podman[277314]: 2025-10-01 16:47:21.627713054 +0000 UTC m=+0.042478124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:47:21 np0005464891 nova_compute[259907]: 2025-10-01 16:47:21.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:47:21 np0005464891 nova_compute[259907]: 2025-10-01 16:47:21.806 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 12:47:21 np0005464891 nova_compute[259907]: 2025-10-01 16:47:21.806 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 12:47:21 np0005464891 podman[277314]: 2025-10-01 16:47:21.861003198 +0000 UTC m=+0.275768248 container create c886b26a3e32efa4db0466181060a819aa632ab15d45c7a0e46a6ab3dfbc6c7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_robinson, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:47:21 np0005464891 nova_compute[259907]: 2025-10-01 16:47:21.890 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  1 12:47:21 np0005464891 nova_compute[259907]: 2025-10-01 16:47:21.890 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 12:47:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:47:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:47:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:47:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:47:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00018040683808686592 of space, bias 1.0, pg target 0.054122051426059775 quantized to 32 (current 32)
Oct  1 12:47:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:47:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00034688731116103405 of space, bias 1.0, pg target 0.10406619334831022 quantized to 32 (current 32)
Oct  1 12:47:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:47:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 12:47:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:47:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659218922966725 of space, bias 1.0, pg target 0.19977656768900176 quantized to 32 (current 32)
Oct  1 12:47:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:47:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:47:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:47:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:47:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:47:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:47:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:47:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:47:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:47:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:47:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:47:21 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:47:22 np0005464891 systemd[1]: Started libpod-conmon-c886b26a3e32efa4db0466181060a819aa632ab15d45c7a0e46a6ab3dfbc6c7d.scope.
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.031 2 DEBUG nova.network.neutron [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Updating instance_info_cache with network_info: [{"id": "d93019e5-5f92-4987-9169-bc28ee796c9b", "address": "fa:16:3e:28:74:91", "network": {"id": "c3928aed-f713-4c4c-8990-af3a790b20cf", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1351204213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9a68f4cae7c4848af4537abf8f3a937", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd93019e5-5f", "ovs_interfaceid": "d93019e5-5f92-4987-9169-bc28ee796c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:47:22 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.072 2 DEBUG oslo_concurrency.lockutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Releasing lock "refresh_cache-5f5bee34-d022-4b27-8233-8c05297df26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.074 2 DEBUG nova.compute.manager [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Instance network_info: |[{"id": "d93019e5-5f92-4987-9169-bc28ee796c9b", "address": "fa:16:3e:28:74:91", "network": {"id": "c3928aed-f713-4c4c-8990-af3a790b20cf", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1351204213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9a68f4cae7c4848af4537abf8f3a937", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd93019e5-5f", "ovs_interfaceid": "d93019e5-5f92-4987-9169-bc28ee796c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.074 2 DEBUG oslo_concurrency.lockutils [req-76401fa8-89a2-42bc-8bbd-131d9ec8ec26 req-0a4314e0-7dd2-4104-a7ea-8e68926bd20d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-5f5bee34-d022-4b27-8233-8c05297df26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.075 2 DEBUG nova.network.neutron [req-76401fa8-89a2-42bc-8bbd-131d9ec8ec26 req-0a4314e0-7dd2-4104-a7ea-8e68926bd20d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Refreshing network info cache for port d93019e5-5f92-4987-9169-bc28ee796c9b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.077 2 DEBUG nova.virt.libvirt.driver [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Start _get_guest_xml network_info=[{"id": "d93019e5-5f92-4987-9169-bc28ee796c9b", "address": "fa:16:3e:28:74:91", "network": {"id": "c3928aed-f713-4c4c-8990-af3a790b20cf", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1351204213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9a68f4cae7c4848af4537abf8f3a937", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd93019e5-5f", "ovs_interfaceid": "d93019e5-5f92-4987-9169-bc28ee796c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'image_id': 'f01c1e7c-fea3-4433-a44a-d71153552c78'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.082 2 WARNING nova.virt.libvirt.driver [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:47:22 np0005464891 podman[277314]: 2025-10-01 16:47:22.087875895 +0000 UTC m=+0.502640945 container init c886b26a3e32efa4db0466181060a819aa632ab15d45c7a0e46a6ab3dfbc6c7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_robinson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.089 2 DEBUG nova.virt.libvirt.host [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.090 2 DEBUG nova.virt.libvirt.host [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 12:47:22 np0005464891 podman[277314]: 2025-10-01 16:47:22.097975904 +0000 UTC m=+0.512740904 container start c886b26a3e32efa4db0466181060a819aa632ab15d45c7a0e46a6ab3dfbc6c7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_robinson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.099 2 DEBUG nova.virt.libvirt.host [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.099 2 DEBUG nova.virt.libvirt.host [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.100 2 DEBUG nova.virt.libvirt.driver [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.100 2 DEBUG nova.virt.hardware [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.101 2 DEBUG nova.virt.hardware [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.101 2 DEBUG nova.virt.hardware [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.101 2 DEBUG nova.virt.hardware [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.101 2 DEBUG nova.virt.hardware [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.102 2 DEBUG nova.virt.hardware [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 12:47:22 np0005464891 strange_robinson[277330]: 167 167
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.102 2 DEBUG nova.virt.hardware [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.102 2 DEBUG nova.virt.hardware [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.102 2 DEBUG nova.virt.hardware [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.104 2 DEBUG nova.virt.hardware [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 12:47:22 np0005464891 systemd[1]: libpod-c886b26a3e32efa4db0466181060a819aa632ab15d45c7a0e46a6ab3dfbc6c7d.scope: Deactivated successfully.
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.104 2 DEBUG nova.virt.hardware [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.107 2 DEBUG oslo_concurrency.processutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:47:22 np0005464891 podman[277314]: 2025-10-01 16:47:22.119007445 +0000 UTC m=+0.533772485 container attach c886b26a3e32efa4db0466181060a819aa632ab15d45c7a0e46a6ab3dfbc6c7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_robinson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:47:22 np0005464891 podman[277314]: 2025-10-01 16:47:22.11954555 +0000 UTC m=+0.534310580 container died c886b26a3e32efa4db0466181060a819aa632ab15d45c7a0e46a6ab3dfbc6c7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:47:22 np0005464891 systemd[1]: var-lib-containers-storage-overlay-e179ab0058d6b8968b19d55cc5ae6f8da96d076aa35256972a30a9e65520bdec-merged.mount: Deactivated successfully.
Oct  1 12:47:22 np0005464891 podman[277314]: 2025-10-01 16:47:22.263849556 +0000 UTC m=+0.678614566 container remove c886b26a3e32efa4db0466181060a819aa632ab15d45c7a0e46a6ab3dfbc6c7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_robinson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 12:47:22 np0005464891 systemd[1]: libpod-conmon-c886b26a3e32efa4db0466181060a819aa632ab15d45c7a0e46a6ab3dfbc6c7d.scope: Deactivated successfully.
Oct  1 12:47:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:47:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3484777033' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:47:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:47:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3484777033' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:22 np0005464891 podman[277374]: 2025-10-01 16:47:22.499377531 +0000 UTC m=+0.086966623 container create 7164f25fb8106a4d15061df24743f935644fb952f93f2eb6a8d105843afedf2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 12:47:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:47:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3436277172' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:47:22 np0005464891 podman[277374]: 2025-10-01 16:47:22.457736851 +0000 UTC m=+0.045325983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.553 2 DEBUG oslo_concurrency.processutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:47:22 np0005464891 systemd[1]: Started libpod-conmon-7164f25fb8106a4d15061df24743f935644fb952f93f2eb6a8d105843afedf2c.scope.
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.594 2 DEBUG nova.storage.rbd_utils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] rbd image 5f5bee34-d022-4b27-8233-8c05297df26c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.602 2 DEBUG oslo_concurrency.processutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:47:22 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:47:22 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e7583b76bd9273c9534b76ca19d355cfb060bf2cc523a1e21b4afc0ecab8f69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:47:22 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e7583b76bd9273c9534b76ca19d355cfb060bf2cc523a1e21b4afc0ecab8f69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:47:22 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e7583b76bd9273c9534b76ca19d355cfb060bf2cc523a1e21b4afc0ecab8f69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:47:22 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e7583b76bd9273c9534b76ca19d355cfb060bf2cc523a1e21b4afc0ecab8f69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:47:22 np0005464891 podman[277374]: 2025-10-01 16:47:22.671535677 +0000 UTC m=+0.259124809 container init 7164f25fb8106a4d15061df24743f935644fb952f93f2eb6a8d105843afedf2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cartwright, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:47:22 np0005464891 podman[277374]: 2025-10-01 16:47:22.685701939 +0000 UTC m=+0.273291021 container start 7164f25fb8106a4d15061df24743f935644fb952f93f2eb6a8d105843afedf2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cartwright, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 12:47:22 np0005464891 podman[277374]: 2025-10-01 16:47:22.707897342 +0000 UTC m=+0.295486474 container attach 7164f25fb8106a4d15061df24743f935644fb952f93f2eb6a8d105843afedf2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cartwright, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:47:22 np0005464891 nova_compute[259907]: 2025-10-01 16:47:22.805 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 12:47:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1181: 321 pgs: 321 active+clean; 134 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.7 MiB/s wr, 87 op/s
Oct  1 12:47:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:47:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2736328496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.123 2 DEBUG oslo_concurrency.processutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.124 2 DEBUG nova.virt.libvirt.vif [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:47:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1944151369',display_name='tempest-VolumesActionsTest-instance-1944151369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1944151369',id=4,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9a68f4cae7c4848af4537abf8f3a937',ramdisk_id='',reservation_id='r-vb5rn5kc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-789764846',owner_user_name='tempest-VolumesActionsTest-789764846-pro
ject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:47:16Z,user_data=None,user_id='85daab3d4ec44eb885d793a27894aab3',uuid=5f5bee34-d022-4b27-8233-8c05297df26c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d93019e5-5f92-4987-9169-bc28ee796c9b", "address": "fa:16:3e:28:74:91", "network": {"id": "c3928aed-f713-4c4c-8990-af3a790b20cf", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1351204213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9a68f4cae7c4848af4537abf8f3a937", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd93019e5-5f", "ovs_interfaceid": "d93019e5-5f92-4987-9169-bc28ee796c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.125 2 DEBUG nova.network.os_vif_util [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Converting VIF {"id": "d93019e5-5f92-4987-9169-bc28ee796c9b", "address": "fa:16:3e:28:74:91", "network": {"id": "c3928aed-f713-4c4c-8990-af3a790b20cf", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1351204213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9a68f4cae7c4848af4537abf8f3a937", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd93019e5-5f", "ovs_interfaceid": "d93019e5-5f92-4987-9169-bc28ee796c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.125 2 DEBUG nova.network.os_vif_util [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:74:91,bridge_name='br-int',has_traffic_filtering=True,id=d93019e5-5f92-4987-9169-bc28ee796c9b,network=Network(c3928aed-f713-4c4c-8990-af3a790b20cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd93019e5-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.127 2 DEBUG nova.objects.instance [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5f5bee34-d022-4b27-8233-8c05297df26c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.162 2 DEBUG nova.virt.libvirt.driver [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] End _get_guest_xml xml=<domain type="kvm">
Oct  1 12:47:23 np0005464891 nova_compute[259907]:  <uuid>5f5bee34-d022-4b27-8233-8c05297df26c</uuid>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:  <name>instance-00000004</name>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <nova:name>tempest-VolumesActionsTest-instance-1944151369</nova:name>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 16:47:22</nova:creationTime>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 12:47:23 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:        <nova:user uuid="85daab3d4ec44eb885d793a27894aab3">tempest-VolumesActionsTest-789764846-project-member</nova:user>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:        <nova:project uuid="b9a68f4cae7c4848af4537abf8f3a937">tempest-VolumesActionsTest-789764846</nova:project>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <nova:root type="image" uuid="f01c1e7c-fea3-4433-a44a-d71153552c78"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:        <nova:port uuid="d93019e5-5f92-4987-9169-bc28ee796c9b">
Oct  1 12:47:23 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <system>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <entry name="serial">5f5bee34-d022-4b27-8233-8c05297df26c</entry>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <entry name="uuid">5f5bee34-d022-4b27-8233-8c05297df26c</entry>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    </system>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:  <os>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:  </clock>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/5f5bee34-d022-4b27-8233-8c05297df26c_disk">
Oct  1 12:47:23 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:47:23 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/5f5bee34-d022-4b27-8233-8c05297df26c_disk.config">
Oct  1 12:47:23 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:47:23 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:28:74:91"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <target dev="tapd93019e5-5f"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/5f5bee34-d022-4b27-8233-8c05297df26c/console.log" append="off"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    </serial>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <video>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 12:47:23 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 12:47:23 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:47:23 np0005464891 nova_compute[259907]: </domain>
Oct  1 12:47:23 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.163 2 DEBUG nova.compute.manager [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Preparing to wait for external event network-vif-plugged-d93019e5-5f92-4987-9169-bc28ee796c9b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.163 2 DEBUG oslo_concurrency.lockutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Acquiring lock "5f5bee34-d022-4b27-8233-8c05297df26c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.163 2 DEBUG oslo_concurrency.lockutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "5f5bee34-d022-4b27-8233-8c05297df26c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.163 2 DEBUG oslo_concurrency.lockutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "5f5bee34-d022-4b27-8233-8c05297df26c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.164 2 DEBUG nova.virt.libvirt.vif [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:47:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1944151369',display_name='tempest-VolumesActionsTest-instance-1944151369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1944151369',id=4,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9a68f4cae7c4848af4537abf8f3a937',ramdisk_id='',reservation_id='r-vb5rn5kc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-789764846',owner_user_name='tempest-VolumesActionsTest-789
764846-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:47:16Z,user_data=None,user_id='85daab3d4ec44eb885d793a27894aab3',uuid=5f5bee34-d022-4b27-8233-8c05297df26c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d93019e5-5f92-4987-9169-bc28ee796c9b", "address": "fa:16:3e:28:74:91", "network": {"id": "c3928aed-f713-4c4c-8990-af3a790b20cf", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1351204213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9a68f4cae7c4848af4537abf8f3a937", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd93019e5-5f", "ovs_interfaceid": "d93019e5-5f92-4987-9169-bc28ee796c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.164 2 DEBUG nova.network.os_vif_util [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Converting VIF {"id": "d93019e5-5f92-4987-9169-bc28ee796c9b", "address": "fa:16:3e:28:74:91", "network": {"id": "c3928aed-f713-4c4c-8990-af3a790b20cf", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1351204213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9a68f4cae7c4848af4537abf8f3a937", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd93019e5-5f", "ovs_interfaceid": "d93019e5-5f92-4987-9169-bc28ee796c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.165 2 DEBUG nova.network.os_vif_util [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:74:91,bridge_name='br-int',has_traffic_filtering=True,id=d93019e5-5f92-4987-9169-bc28ee796c9b,network=Network(c3928aed-f713-4c4c-8990-af3a790b20cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd93019e5-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.165 2 DEBUG os_vif [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:74:91,bridge_name='br-int',has_traffic_filtering=True,id=d93019e5-5f92-4987-9169-bc28ee796c9b,network=Network(c3928aed-f713-4c4c-8990-af3a790b20cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd93019e5-5f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.166 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.166 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.174 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd93019e5-5f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.175 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd93019e5-5f, col_values=(('external_ids', {'iface-id': 'd93019e5-5f92-4987-9169-bc28ee796c9b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:74:91', 'vm-uuid': '5f5bee34-d022-4b27-8233-8c05297df26c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:47:23 np0005464891 NetworkManager[44940]: <info>  [1759337243.1785] manager: (tapd93019e5-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.186 2 INFO os_vif [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:74:91,bridge_name='br-int',has_traffic_filtering=True,id=d93019e5-5f92-4987-9169-bc28ee796c9b,network=Network(c3928aed-f713-4c4c-8990-af3a790b20cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd93019e5-5f')#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.300 2 DEBUG nova.virt.libvirt.driver [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.301 2 DEBUG nova.virt.libvirt.driver [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.301 2 DEBUG nova.virt.libvirt.driver [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] No VIF found with MAC fa:16:3e:28:74:91, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.302 2 INFO nova.virt.libvirt.driver [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Using config drive#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.336 2 DEBUG nova.storage.rbd_utils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] rbd image 5f5bee34-d022-4b27-8233-8c05297df26c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:47:23 np0005464891 hungry_cartwright[277408]: {
Oct  1 12:47:23 np0005464891 hungry_cartwright[277408]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:47:23 np0005464891 hungry_cartwright[277408]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:47:23 np0005464891 hungry_cartwright[277408]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:47:23 np0005464891 hungry_cartwright[277408]:        "osd_id": 2,
Oct  1 12:47:23 np0005464891 hungry_cartwright[277408]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:47:23 np0005464891 hungry_cartwright[277408]:        "type": "bluestore"
Oct  1 12:47:23 np0005464891 hungry_cartwright[277408]:    },
Oct  1 12:47:23 np0005464891 hungry_cartwright[277408]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:47:23 np0005464891 hungry_cartwright[277408]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:47:23 np0005464891 hungry_cartwright[277408]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:47:23 np0005464891 hungry_cartwright[277408]:        "osd_id": 0,
Oct  1 12:47:23 np0005464891 hungry_cartwright[277408]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:47:23 np0005464891 hungry_cartwright[277408]:        "type": "bluestore"
Oct  1 12:47:23 np0005464891 hungry_cartwright[277408]:    },
Oct  1 12:47:23 np0005464891 hungry_cartwright[277408]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:47:23 np0005464891 hungry_cartwright[277408]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:47:23 np0005464891 hungry_cartwright[277408]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:47:23 np0005464891 hungry_cartwright[277408]:        "osd_id": 1,
Oct  1 12:47:23 np0005464891 hungry_cartwright[277408]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:47:23 np0005464891 hungry_cartwright[277408]:        "type": "bluestore"
Oct  1 12:47:23 np0005464891 hungry_cartwright[277408]:    }
Oct  1 12:47:23 np0005464891 hungry_cartwright[277408]: }
Oct  1 12:47:23 np0005464891 systemd[1]: libpod-7164f25fb8106a4d15061df24743f935644fb952f93f2eb6a8d105843afedf2c.scope: Deactivated successfully.
Oct  1 12:47:23 np0005464891 podman[277374]: 2025-10-01 16:47:23.724967696 +0000 UTC m=+1.312556778 container died 7164f25fb8106a4d15061df24743f935644fb952f93f2eb6a8d105843afedf2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cartwright, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 12:47:23 np0005464891 systemd[1]: libpod-7164f25fb8106a4d15061df24743f935644fb952f93f2eb6a8d105843afedf2c.scope: Consumed 1.038s CPU time.
Oct  1 12:47:23 np0005464891 systemd[1]: var-lib-containers-storage-overlay-2e7583b76bd9273c9534b76ca19d355cfb060bf2cc523a1e21b4afc0ecab8f69-merged.mount: Deactivated successfully.
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:47:23 np0005464891 podman[277374]: 2025-10-01 16:47:23.889064798 +0000 UTC m=+1.476653890 container remove 7164f25fb8106a4d15061df24743f935644fb952f93f2eb6a8d105843afedf2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:47:23 np0005464891 systemd[1]: libpod-conmon-7164f25fb8106a4d15061df24743f935644fb952f93f2eb6a8d105843afedf2c.scope: Deactivated successfully.
Oct  1 12:47:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.961 2 DEBUG nova.network.neutron [req-76401fa8-89a2-42bc-8bbd-131d9ec8ec26 req-0a4314e0-7dd2-4104-a7ea-8e68926bd20d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Updated VIF entry in instance network info cache for port d93019e5-5f92-4987-9169-bc28ee796c9b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.961 2 DEBUG nova.network.neutron [req-76401fa8-89a2-42bc-8bbd-131d9ec8ec26 req-0a4314e0-7dd2-4104-a7ea-8e68926bd20d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Updating instance_info_cache with network_info: [{"id": "d93019e5-5f92-4987-9169-bc28ee796c9b", "address": "fa:16:3e:28:74:91", "network": {"id": "c3928aed-f713-4c4c-8990-af3a790b20cf", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1351204213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9a68f4cae7c4848af4537abf8f3a937", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd93019e5-5f", "ovs_interfaceid": "d93019e5-5f92-4987-9169-bc28ee796c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:47:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:47:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:47:23 np0005464891 nova_compute[259907]: 2025-10-01 16:47:23.987 2 DEBUG oslo_concurrency.lockutils [req-76401fa8-89a2-42bc-8bbd-131d9ec8ec26 req-0a4314e0-7dd2-4104-a7ea-8e68926bd20d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-5f5bee34-d022-4b27-8233-8c05297df26c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:47:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:47:23 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev d0224188-d048-4305-8cc9-81c8478451a9 does not exist
Oct  1 12:47:23 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 524050a4-b0c0-4a71-a134-cb77771c3e1e does not exist
Oct  1 12:47:24 np0005464891 nova_compute[259907]: 2025-10-01 16:47:24.066 2 INFO nova.virt.libvirt.driver [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Creating config drive at /var/lib/nova/instances/5f5bee34-d022-4b27-8233-8c05297df26c/disk.config#033[00m
Oct  1 12:47:24 np0005464891 nova_compute[259907]: 2025-10-01 16:47:24.074 2 DEBUG oslo_concurrency.processutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5f5bee34-d022-4b27-8233-8c05297df26c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpivl_uipi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:47:24 np0005464891 nova_compute[259907]: 2025-10-01 16:47:24.223 2 DEBUG oslo_concurrency.processutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5f5bee34-d022-4b27-8233-8c05297df26c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpivl_uipi" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:47:24 np0005464891 nova_compute[259907]: 2025-10-01 16:47:24.251 2 DEBUG nova.storage.rbd_utils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] rbd image 5f5bee34-d022-4b27-8233-8c05297df26c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:47:24 np0005464891 nova_compute[259907]: 2025-10-01 16:47:24.255 2 DEBUG oslo_concurrency.processutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5f5bee34-d022-4b27-8233-8c05297df26c/disk.config 5f5bee34-d022-4b27-8233-8c05297df26c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:47:24 np0005464891 nova_compute[259907]: 2025-10-01 16:47:24.807 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:47:24 np0005464891 nova_compute[259907]: 2025-10-01 16:47:24.808 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:47:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1182: 321 pgs: 321 active+clean; 134 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 2.2 MiB/s wr, 74 op/s
Oct  1 12:47:25 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:47:25 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:47:25 np0005464891 nova_compute[259907]: 2025-10-01 16:47:25.137 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:25 np0005464891 nova_compute[259907]: 2025-10-01 16:47:25.139 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:25 np0005464891 nova_compute[259907]: 2025-10-01 16:47:25.140 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:25 np0005464891 nova_compute[259907]: 2025-10-01 16:47:25.140 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 12:47:25 np0005464891 nova_compute[259907]: 2025-10-01 16:47:25.141 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:47:25 np0005464891 nova_compute[259907]: 2025-10-01 16:47:25.211 2 DEBUG oslo_concurrency.processutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5f5bee34-d022-4b27-8233-8c05297df26c/disk.config 5f5bee34-d022-4b27-8233-8c05297df26c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.956s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:47:25 np0005464891 nova_compute[259907]: 2025-10-01 16:47:25.212 2 INFO nova.virt.libvirt.driver [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Deleting local config drive /var/lib/nova/instances/5f5bee34-d022-4b27-8233-8c05297df26c/disk.config because it was imported into RBD.#033[00m
Oct  1 12:47:25 np0005464891 kernel: tapd93019e5-5f: entered promiscuous mode
Oct  1 12:47:25 np0005464891 NetworkManager[44940]: <info>  [1759337245.3006] manager: (tapd93019e5-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Oct  1 12:47:25 np0005464891 systemd-udevd[277612]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:47:25 np0005464891 ovn_controller[152409]: 2025-10-01T16:47:25Z|00045|binding|INFO|Claiming lport d93019e5-5f92-4987-9169-bc28ee796c9b for this chassis.
Oct  1 12:47:25 np0005464891 ovn_controller[152409]: 2025-10-01T16:47:25Z|00046|binding|INFO|d93019e5-5f92-4987-9169-bc28ee796c9b: Claiming fa:16:3e:28:74:91 10.100.0.6
Oct  1 12:47:25 np0005464891 nova_compute[259907]: 2025-10-01 16:47:25.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:25 np0005464891 NetworkManager[44940]: <info>  [1759337245.3730] device (tapd93019e5-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:47:25 np0005464891 NetworkManager[44940]: <info>  [1759337245.3760] device (tapd93019e5-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.375 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:74:91 10.100.0.6'], port_security=['fa:16:3e:28:74:91 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5f5bee34-d022-4b27-8233-8c05297df26c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3928aed-f713-4c4c-8990-af3a790b20cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9a68f4cae7c4848af4537abf8f3a937', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1c094c91-85d3-4eaa-9f95-e39d330e2d75', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec106bc6-db39-4f09-a3c7-4a345f13bd23, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=d93019e5-5f92-4987-9169-bc28ee796c9b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.377 162546 INFO neutron.agent.ovn.metadata.agent [-] Port d93019e5-5f92-4987-9169-bc28ee796c9b in datapath c3928aed-f713-4c4c-8990-af3a790b20cf bound to our chassis#033[00m
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.379 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c3928aed-f713-4c4c-8990-af3a790b20cf#033[00m
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.396 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[91977eaa-b888-4495-828a-62518b9d049a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.401 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc3928aed-f1 in ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 12:47:25 np0005464891 systemd-machined[214891]: New machine qemu-4-instance-00000004.
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.405 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc3928aed-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.405 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[37a2c920-dc29-4dba-9a49-c719e325df97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.406 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[8ebd0f4c-723c-472d-ba75-1c086fb1f9b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:25 np0005464891 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.426 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[81aee8a1-f729-456e-acf3-6ee8330ca07d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.447 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[d7da5774-f7e6-44f2-af18-3890977f5159]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:47:25 np0005464891 ovn_controller[152409]: 2025-10-01T16:47:25Z|00047|binding|INFO|Setting lport d93019e5-5f92-4987-9169-bc28ee796c9b ovn-installed in OVS
Oct  1 12:47:25 np0005464891 ovn_controller[152409]: 2025-10-01T16:47:25Z|00048|binding|INFO|Setting lport d93019e5-5f92-4987-9169-bc28ee796c9b up in Southbound
Oct  1 12:47:25 np0005464891 nova_compute[259907]: 2025-10-01 16:47:25.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.492 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[bca813fc-4813-44d5-9864-0954ef47898a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:25 np0005464891 NetworkManager[44940]: <info>  [1759337245.4997] manager: (tapc3928aed-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.502 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[ebdbda2c-663f-412d-bdd6-9e6278a4733c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:25 np0005464891 nova_compute[259907]: 2025-10-01 16:47:25.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.544 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[48a9484e-5ea2-4d26-8936-a6319aab551f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.547 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb5785f-a9be-4592-8f06-5803a92f713b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:25 np0005464891 NetworkManager[44940]: <info>  [1759337245.5763] device (tapc3928aed-f0): carrier: link connected
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.583 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[d7b59652-fbea-4012-b2dc-8395841da787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.606 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[18e38cb4-09ee-4179-b156-c207e663bfd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc3928aed-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:d6:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419414, 'reachable_time': 32727, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277658, 'error': None, 'target': 'ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.631 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[ed274388-f1b6-4272-b7ab-166fbe4b907e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:d62f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 419414, 'tstamp': 419414}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277659, 'error': None, 'target': 'ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.657 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[c99cd8c6-cf9c-4d35-9045-74e2cb470ba8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc3928aed-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:d6:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419414, 'reachable_time': 32727, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277660, 'error': None, 'target': 'ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.687 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[593a7785-3e56-431b-a7b1-39d1c6d870bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:47:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2959086568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:47:25 np0005464891 nova_compute[259907]: 2025-10-01 16:47:25.723 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.747 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[2f8bb6cb-bb1f-4b3b-82e0-0e048c6c7807]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.749 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc3928aed-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.749 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.750 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc3928aed-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:47:25 np0005464891 nova_compute[259907]: 2025-10-01 16:47:25.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:25 np0005464891 NetworkManager[44940]: <info>  [1759337245.7526] manager: (tapc3928aed-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Oct  1 12:47:25 np0005464891 kernel: tapc3928aed-f0: entered promiscuous mode
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.756 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc3928aed-f0, col_values=(('external_ids', {'iface-id': '367ba185-0566-4b48-9fbb-85d1655d5f0a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:47:25 np0005464891 ovn_controller[152409]: 2025-10-01T16:47:25Z|00049|binding|INFO|Releasing lport 367ba185-0566-4b48-9fbb-85d1655d5f0a from this chassis (sb_readonly=0)
Oct  1 12:47:25 np0005464891 nova_compute[259907]: 2025-10-01 16:47:25.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.795 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c3928aed-f713-4c4c-8990-af3a790b20cf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c3928aed-f713-4c4c-8990-af3a790b20cf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 12:47:25 np0005464891 nova_compute[259907]: 2025-10-01 16:47:25.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.797 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[cd23de76-13e9-402e-a345-3c05aef07433]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.798 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-c3928aed-f713-4c4c-8990-af3a790b20cf
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/c3928aed-f713-4c4c-8990-af3a790b20cf.pid.haproxy
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID c3928aed-f713-4c4c-8990-af3a790b20cf
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 12:47:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:25.800 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf', 'env', 'PROCESS_TAG=haproxy-c3928aed-f713-4c4c-8990-af3a790b20cf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c3928aed-f713-4c4c-8990-af3a790b20cf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 12:47:25 np0005464891 nova_compute[259907]: 2025-10-01 16:47:25.850 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:47:25 np0005464891 nova_compute[259907]: 2025-10-01 16:47:25.851 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:47:26 np0005464891 nova_compute[259907]: 2025-10-01 16:47:26.032 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:47:26 np0005464891 nova_compute[259907]: 2025-10-01 16:47:26.033 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4672MB free_disk=59.967525482177734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 12:47:26 np0005464891 nova_compute[259907]: 2025-10-01 16:47:26.033 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:26 np0005464891 nova_compute[259907]: 2025-10-01 16:47:26.034 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:26 np0005464891 nova_compute[259907]: 2025-10-01 16:47:26.169 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance 5f5bee34-d022-4b27-8233-8c05297df26c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 12:47:26 np0005464891 nova_compute[259907]: 2025-10-01 16:47:26.170 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 12:47:26 np0005464891 nova_compute[259907]: 2025-10-01 16:47:26.170 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 12:47:26 np0005464891 nova_compute[259907]: 2025-10-01 16:47:26.184 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Refreshing inventories for resource provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  1 12:47:26 np0005464891 nova_compute[259907]: 2025-10-01 16:47:26.198 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Updating ProviderTree inventory for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  1 12:47:26 np0005464891 nova_compute[259907]: 2025-10-01 16:47:26.198 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Updating inventory in ProviderTree for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  1 12:47:26 np0005464891 nova_compute[259907]: 2025-10-01 16:47:26.214 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Refreshing aggregate associations for resource provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  1 12:47:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Oct  1 12:47:26 np0005464891 nova_compute[259907]: 2025-10-01 16:47:26.237 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Refreshing trait associations for resource provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8, traits: HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSSE3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_AVX2,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  1 12:47:26 np0005464891 podman[277694]: 2025-10-01 16:47:26.167483835 +0000 UTC m=+0.024861638 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 12:47:26 np0005464891 nova_compute[259907]: 2025-10-01 16:47:26.271 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:47:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Oct  1 12:47:26 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Oct  1 12:47:26 np0005464891 podman[277694]: 2025-10-01 16:47:26.374239786 +0000 UTC m=+0.231617579 container create b72253d5e251207c4be101ea552f2ba84ff430c95005de51915dbdb7deb8057e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 12:47:26 np0005464891 systemd[1]: Started libpod-conmon-b72253d5e251207c4be101ea552f2ba84ff430c95005de51915dbdb7deb8057e.scope.
Oct  1 12:47:26 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:47:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e3affc3341164433470f5fe25b4db5d4ce646848a25126aa9dfb54df479432c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 12:47:26 np0005464891 podman[277694]: 2025-10-01 16:47:26.644481661 +0000 UTC m=+0.501859454 container init b72253d5e251207c4be101ea552f2ba84ff430c95005de51915dbdb7deb8057e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  1 12:47:26 np0005464891 podman[277694]: 2025-10-01 16:47:26.650830976 +0000 UTC m=+0.508208779 container start b72253d5e251207c4be101ea552f2ba84ff430c95005de51915dbdb7deb8057e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  1 12:47:26 np0005464891 neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf[277727]: [NOTICE]   (277733) : New worker (277735) forked
Oct  1 12:47:26 np0005464891 neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf[277727]: [NOTICE]   (277733) : Loading success.
Oct  1 12:47:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:47:26 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3391965484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:47:26 np0005464891 nova_compute[259907]: 2025-10-01 16:47:26.846 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:47:26 np0005464891 nova_compute[259907]: 2025-10-01 16:47:26.853 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:47:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 134 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.3 MiB/s wr, 45 op/s
Oct  1 12:47:26 np0005464891 nova_compute[259907]: 2025-10-01 16:47:26.889 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:47:27 np0005464891 nova_compute[259907]: 2025-10-01 16:47:27.002 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 12:47:27 np0005464891 nova_compute[259907]: 2025-10-01 16:47:27.003 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.006 2 DEBUG nova.compute.manager [req-2ea92bd8-64b4-4346-871d-edfee0825b6f req-cb975129-92aa-4219-b2b2-9285eda66905 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Received event network-vif-plugged-d93019e5-5f92-4987-9169-bc28ee796c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.007 2 DEBUG oslo_concurrency.lockutils [req-2ea92bd8-64b4-4346-871d-edfee0825b6f req-cb975129-92aa-4219-b2b2-9285eda66905 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "5f5bee34-d022-4b27-8233-8c05297df26c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.007 2 DEBUG oslo_concurrency.lockutils [req-2ea92bd8-64b4-4346-871d-edfee0825b6f req-cb975129-92aa-4219-b2b2-9285eda66905 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "5f5bee34-d022-4b27-8233-8c05297df26c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.007 2 DEBUG oslo_concurrency.lockutils [req-2ea92bd8-64b4-4346-871d-edfee0825b6f req-cb975129-92aa-4219-b2b2-9285eda66905 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "5f5bee34-d022-4b27-8233-8c05297df26c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.008 2 DEBUG nova.compute.manager [req-2ea92bd8-64b4-4346-871d-edfee0825b6f req-cb975129-92aa-4219-b2b2-9285eda66905 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Processing event network-vif-plugged-d93019e5-5f92-4987-9169-bc28ee796c9b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 12:47:28 np0005464891 podman[277788]: 2025-10-01 16:47:28.030719651 +0000 UTC m=+0.136014628 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.059 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337248.0568855, 5f5bee34-d022-4b27-8233-8c05297df26c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.059 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] VM Started (Lifecycle Event)#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.061 2 DEBUG nova.compute.manager [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.064 2 DEBUG nova.virt.libvirt.driver [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.067 2 INFO nova.virt.libvirt.driver [-] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Instance spawned successfully.#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.068 2 DEBUG nova.virt.libvirt.driver [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.137 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.143 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.155 2 DEBUG nova.virt.libvirt.driver [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.156 2 DEBUG nova.virt.libvirt.driver [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.156 2 DEBUG nova.virt.libvirt.driver [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.157 2 DEBUG nova.virt.libvirt.driver [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.158 2 DEBUG nova.virt.libvirt.driver [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.159 2 DEBUG nova.virt.libvirt.driver [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.169 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.170 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337248.0595353, 5f5bee34-d022-4b27-8233-8c05297df26c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.170 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] VM Paused (Lifecycle Event)#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.262 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.267 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337248.0633695, 5f5bee34-d022-4b27-8233-8c05297df26c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.268 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] VM Resumed (Lifecycle Event)#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.302 2 INFO nova.compute.manager [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Took 11.23 seconds to spawn the instance on the hypervisor.#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.303 2 DEBUG nova.compute.manager [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.304 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.317 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.409 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.479 2 INFO nova.compute.manager [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Took 13.10 seconds to build instance.#033[00m
Oct  1 12:47:28 np0005464891 nova_compute[259907]: 2025-10-01 16:47:28.512 2 DEBUG oslo_concurrency.lockutils [None req-a1d308e6-9a9d-46ed-a482-9f2e83e6a785 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "5f5bee34-d022-4b27-8233-8c05297df26c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.479s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1185: 321 pgs: 321 active+clean; 134 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 1.3 MiB/s wr, 76 op/s
Oct  1 12:47:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:47:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/159285232' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:47:30 np0005464891 nova_compute[259907]: 2025-10-01 16:47:30.003 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:47:30 np0005464891 nova_compute[259907]: 2025-10-01 16:47:30.189 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:47:30 np0005464891 nova_compute[259907]: 2025-10-01 16:47:30.192 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:47:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:47:30 np0005464891 nova_compute[259907]: 2025-10-01 16:47:30.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1186: 321 pgs: 321 active+clean; 134 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 1.1 MiB/s wr, 63 op/s
Oct  1 12:47:31 np0005464891 nova_compute[259907]: 2025-10-01 16:47:31.221 2 DEBUG nova.compute.manager [req-3e2e52ba-a905-4c32-a8eb-4ba12c40349d req-bc1461a1-bf1a-42b7-9ee4-d152b8370fff af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Received event network-vif-plugged-d93019e5-5f92-4987-9169-bc28ee796c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:47:31 np0005464891 nova_compute[259907]: 2025-10-01 16:47:31.223 2 DEBUG oslo_concurrency.lockutils [req-3e2e52ba-a905-4c32-a8eb-4ba12c40349d req-bc1461a1-bf1a-42b7-9ee4-d152b8370fff af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "5f5bee34-d022-4b27-8233-8c05297df26c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:31 np0005464891 nova_compute[259907]: 2025-10-01 16:47:31.223 2 DEBUG oslo_concurrency.lockutils [req-3e2e52ba-a905-4c32-a8eb-4ba12c40349d req-bc1461a1-bf1a-42b7-9ee4-d152b8370fff af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "5f5bee34-d022-4b27-8233-8c05297df26c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:31 np0005464891 nova_compute[259907]: 2025-10-01 16:47:31.224 2 DEBUG oslo_concurrency.lockutils [req-3e2e52ba-a905-4c32-a8eb-4ba12c40349d req-bc1461a1-bf1a-42b7-9ee4-d152b8370fff af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "5f5bee34-d022-4b27-8233-8c05297df26c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:31 np0005464891 nova_compute[259907]: 2025-10-01 16:47:31.224 2 DEBUG nova.compute.manager [req-3e2e52ba-a905-4c32-a8eb-4ba12c40349d req-bc1461a1-bf1a-42b7-9ee4-d152b8370fff af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] No waiting events found dispatching network-vif-plugged-d93019e5-5f92-4987-9169-bc28ee796c9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:47:31 np0005464891 nova_compute[259907]: 2025-10-01 16:47:31.225 2 WARNING nova.compute.manager [req-3e2e52ba-a905-4c32-a8eb-4ba12c40349d req-bc1461a1-bf1a-42b7-9ee4-d152b8370fff af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Received unexpected event network-vif-plugged-d93019e5-5f92-4987-9169-bc28ee796c9b for instance with vm_state active and task_state None.#033[00m
Oct  1 12:47:31 np0005464891 podman[277814]: 2025-10-01 16:47:31.998620704 +0000 UTC m=+0.098427039 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:47:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1187: 321 pgs: 321 active+clean; 138 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 501 KiB/s rd, 425 KiB/s wr, 49 op/s
Oct  1 12:47:32 np0005464891 nova_compute[259907]: 2025-10-01 16:47:32.963 2 DEBUG oslo_concurrency.lockutils [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Acquiring lock "5f5bee34-d022-4b27-8233-8c05297df26c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:32 np0005464891 nova_compute[259907]: 2025-10-01 16:47:32.964 2 DEBUG oslo_concurrency.lockutils [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "5f5bee34-d022-4b27-8233-8c05297df26c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:32 np0005464891 nova_compute[259907]: 2025-10-01 16:47:32.964 2 DEBUG oslo_concurrency.lockutils [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Acquiring lock "5f5bee34-d022-4b27-8233-8c05297df26c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:32 np0005464891 nova_compute[259907]: 2025-10-01 16:47:32.965 2 DEBUG oslo_concurrency.lockutils [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "5f5bee34-d022-4b27-8233-8c05297df26c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:32 np0005464891 nova_compute[259907]: 2025-10-01 16:47:32.965 2 DEBUG oslo_concurrency.lockutils [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "5f5bee34-d022-4b27-8233-8c05297df26c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:32 np0005464891 nova_compute[259907]: 2025-10-01 16:47:32.967 2 INFO nova.compute.manager [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Terminating instance#033[00m
Oct  1 12:47:32 np0005464891 nova_compute[259907]: 2025-10-01 16:47:32.969 2 DEBUG nova.compute.manager [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 12:47:33 np0005464891 nova_compute[259907]: 2025-10-01 16:47:33.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:33 np0005464891 kernel: tapd93019e5-5f (unregistering): left promiscuous mode
Oct  1 12:47:33 np0005464891 NetworkManager[44940]: <info>  [1759337253.8740] device (tapd93019e5-5f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 12:47:33 np0005464891 ovn_controller[152409]: 2025-10-01T16:47:33Z|00050|binding|INFO|Releasing lport d93019e5-5f92-4987-9169-bc28ee796c9b from this chassis (sb_readonly=0)
Oct  1 12:47:33 np0005464891 ovn_controller[152409]: 2025-10-01T16:47:33Z|00051|binding|INFO|Setting lport d93019e5-5f92-4987-9169-bc28ee796c9b down in Southbound
Oct  1 12:47:33 np0005464891 ovn_controller[152409]: 2025-10-01T16:47:33Z|00052|binding|INFO|Removing iface tapd93019e5-5f ovn-installed in OVS
Oct  1 12:47:33 np0005464891 nova_compute[259907]: 2025-10-01 16:47:33.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:33 np0005464891 nova_compute[259907]: 2025-10-01 16:47:33.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:33 np0005464891 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Oct  1 12:47:33 np0005464891 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 7.514s CPU time.
Oct  1 12:47:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:33.946 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:74:91 10.100.0.6'], port_security=['fa:16:3e:28:74:91 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5f5bee34-d022-4b27-8233-8c05297df26c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3928aed-f713-4c4c-8990-af3a790b20cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9a68f4cae7c4848af4537abf8f3a937', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1c094c91-85d3-4eaa-9f95-e39d330e2d75', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec106bc6-db39-4f09-a3c7-4a345f13bd23, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=d93019e5-5f92-4987-9169-bc28ee796c9b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:47:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:33.948 162546 INFO neutron.agent.ovn.metadata.agent [-] Port d93019e5-5f92-4987-9169-bc28ee796c9b in datapath c3928aed-f713-4c4c-8990-af3a790b20cf unbound from our chassis#033[00m
Oct  1 12:47:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:33.949 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c3928aed-f713-4c4c-8990-af3a790b20cf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 12:47:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:33.951 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[150f58b4-224f-4975-a43a-cf77e33ec8d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:33.952 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf namespace which is not needed anymore#033[00m
Oct  1 12:47:33 np0005464891 systemd-machined[214891]: Machine qemu-4-instance-00000004 terminated.
Oct  1 12:47:34 np0005464891 nova_compute[259907]: 2025-10-01 16:47:34.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:34 np0005464891 nova_compute[259907]: 2025-10-01 16:47:34.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:34 np0005464891 nova_compute[259907]: 2025-10-01 16:47:34.018 2 INFO nova.virt.libvirt.driver [-] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Instance destroyed successfully.#033[00m
Oct  1 12:47:34 np0005464891 nova_compute[259907]: 2025-10-01 16:47:34.019 2 DEBUG nova.objects.instance [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lazy-loading 'resources' on Instance uuid 5f5bee34-d022-4b27-8233-8c05297df26c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:47:34 np0005464891 nova_compute[259907]: 2025-10-01 16:47:34.102 2 DEBUG nova.virt.libvirt.vif [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T16:47:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1944151369',display_name='tempest-VolumesActionsTest-instance-1944151369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1944151369',id=4,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-01T16:47:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9a68f4cae7c4848af4537abf8f3a937',ramdisk_id='',reservation_id='r-vb5rn5kc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-789764846',owner_user_name='tempest-VolumesActionsTest-789764846-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T16:47:28Z,user_data=None,user_id='85daab3d4ec44eb885d793a27894aab3',uuid=5f5bee34-d022-4b27-8233-8c05297df26c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d93019e5-5f92-4987-9169-bc28ee796c9b", "address": "fa:16:3e:28:74:91", "network": {"id": "c3928aed-f713-4c4c-8990-af3a790b20cf", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1351204213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9a68f4cae7c4848af4537abf8f3a937", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd93019e5-5f", "ovs_interfaceid": "d93019e5-5f92-4987-9169-bc28ee796c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 12:47:34 np0005464891 nova_compute[259907]: 2025-10-01 16:47:34.102 2 DEBUG nova.network.os_vif_util [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Converting VIF {"id": "d93019e5-5f92-4987-9169-bc28ee796c9b", "address": "fa:16:3e:28:74:91", "network": {"id": "c3928aed-f713-4c4c-8990-af3a790b20cf", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1351204213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9a68f4cae7c4848af4537abf8f3a937", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd93019e5-5f", "ovs_interfaceid": "d93019e5-5f92-4987-9169-bc28ee796c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:47:34 np0005464891 nova_compute[259907]: 2025-10-01 16:47:34.103 2 DEBUG nova.network.os_vif_util [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:74:91,bridge_name='br-int',has_traffic_filtering=True,id=d93019e5-5f92-4987-9169-bc28ee796c9b,network=Network(c3928aed-f713-4c4c-8990-af3a790b20cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd93019e5-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:47:34 np0005464891 nova_compute[259907]: 2025-10-01 16:47:34.103 2 DEBUG os_vif [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:74:91,bridge_name='br-int',has_traffic_filtering=True,id=d93019e5-5f92-4987-9169-bc28ee796c9b,network=Network(c3928aed-f713-4c4c-8990-af3a790b20cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd93019e5-5f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 12:47:34 np0005464891 nova_compute[259907]: 2025-10-01 16:47:34.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:34 np0005464891 nova_compute[259907]: 2025-10-01 16:47:34.105 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd93019e5-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:47:34 np0005464891 nova_compute[259907]: 2025-10-01 16:47:34.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:34 np0005464891 nova_compute[259907]: 2025-10-01 16:47:34.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:47:34 np0005464891 nova_compute[259907]: 2025-10-01 16:47:34.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:34 np0005464891 nova_compute[259907]: 2025-10-01 16:47:34.162 2 INFO os_vif [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:74:91,bridge_name='br-int',has_traffic_filtering=True,id=d93019e5-5f92-4987-9169-bc28ee796c9b,network=Network(c3928aed-f713-4c4c-8990-af3a790b20cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd93019e5-5f')#033[00m
Oct  1 12:47:34 np0005464891 neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf[277727]: [NOTICE]   (277733) : haproxy version is 2.8.14-c23fe91
Oct  1 12:47:34 np0005464891 neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf[277727]: [NOTICE]   (277733) : path to executable is /usr/sbin/haproxy
Oct  1 12:47:34 np0005464891 neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf[277727]: [WARNING]  (277733) : Exiting Master process...
Oct  1 12:47:34 np0005464891 neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf[277727]: [WARNING]  (277733) : Exiting Master process...
Oct  1 12:47:34 np0005464891 neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf[277727]: [ALERT]    (277733) : Current worker (277735) exited with code 143 (Terminated)
Oct  1 12:47:34 np0005464891 neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf[277727]: [WARNING]  (277733) : All workers exited. Exiting... (0)
Oct  1 12:47:34 np0005464891 systemd[1]: libpod-b72253d5e251207c4be101ea552f2ba84ff430c95005de51915dbdb7deb8057e.scope: Deactivated successfully.
Oct  1 12:47:34 np0005464891 podman[277868]: 2025-10-01 16:47:34.287321434 +0000 UTC m=+0.207990426 container died b72253d5e251207c4be101ea552f2ba84ff430c95005de51915dbdb7deb8057e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  1 12:47:34 np0005464891 nova_compute[259907]: 2025-10-01 16:47:34.288 2 DEBUG nova.compute.manager [req-1dde09cd-172b-430c-ac99-dfb55658a513 req-875395ab-6a50-490e-9927-1ffd507bb136 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Received event network-vif-unplugged-d93019e5-5f92-4987-9169-bc28ee796c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:47:34 np0005464891 nova_compute[259907]: 2025-10-01 16:47:34.290 2 DEBUG oslo_concurrency.lockutils [req-1dde09cd-172b-430c-ac99-dfb55658a513 req-875395ab-6a50-490e-9927-1ffd507bb136 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "5f5bee34-d022-4b27-8233-8c05297df26c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:34 np0005464891 nova_compute[259907]: 2025-10-01 16:47:34.291 2 DEBUG oslo_concurrency.lockutils [req-1dde09cd-172b-430c-ac99-dfb55658a513 req-875395ab-6a50-490e-9927-1ffd507bb136 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "5f5bee34-d022-4b27-8233-8c05297df26c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:34 np0005464891 nova_compute[259907]: 2025-10-01 16:47:34.291 2 DEBUG oslo_concurrency.lockutils [req-1dde09cd-172b-430c-ac99-dfb55658a513 req-875395ab-6a50-490e-9927-1ffd507bb136 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "5f5bee34-d022-4b27-8233-8c05297df26c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:34 np0005464891 nova_compute[259907]: 2025-10-01 16:47:34.292 2 DEBUG nova.compute.manager [req-1dde09cd-172b-430c-ac99-dfb55658a513 req-875395ab-6a50-490e-9927-1ffd507bb136 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] No waiting events found dispatching network-vif-unplugged-d93019e5-5f92-4987-9169-bc28ee796c9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:47:34 np0005464891 nova_compute[259907]: 2025-10-01 16:47:34.292 2 DEBUG nova.compute.manager [req-1dde09cd-172b-430c-ac99-dfb55658a513 req-875395ab-6a50-490e-9927-1ffd507bb136 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Received event network-vif-unplugged-d93019e5-5f92-4987-9169-bc28ee796c9b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 12:47:34 np0005464891 podman[277866]: 2025-10-01 16:47:34.750243841 +0000 UTC m=+0.670932314 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  1 12:47:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1188: 321 pgs: 321 active+clean; 174 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.0 MiB/s wr, 81 op/s
Oct  1 12:47:34 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b72253d5e251207c4be101ea552f2ba84ff430c95005de51915dbdb7deb8057e-userdata-shm.mount: Deactivated successfully.
Oct  1 12:47:34 np0005464891 systemd[1]: var-lib-containers-storage-overlay-8e3affc3341164433470f5fe25b4db5d4ce646848a25126aa9dfb54df479432c-merged.mount: Deactivated successfully.
Oct  1 12:47:35 np0005464891 podman[277868]: 2025-10-01 16:47:35.245047979 +0000 UTC m=+1.165716941 container cleanup b72253d5e251207c4be101ea552f2ba84ff430c95005de51915dbdb7deb8057e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct  1 12:47:35 np0005464891 systemd[1]: libpod-conmon-b72253d5e251207c4be101ea552f2ba84ff430c95005de51915dbdb7deb8057e.scope: Deactivated successfully.
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:47:35 np0005464891 nova_compute[259907]: 2025-10-01 16:47:35.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:47:35.579723) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337255579807, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1082, "num_deletes": 258, "total_data_size": 1350297, "memory_usage": 1373424, "flush_reason": "Manual Compaction"}
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337255738126, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1334984, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23444, "largest_seqno": 24525, "table_properties": {"data_size": 1329757, "index_size": 2623, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11922, "raw_average_key_size": 19, "raw_value_size": 1318812, "raw_average_value_size": 2176, "num_data_blocks": 117, "num_entries": 606, "num_filter_entries": 606, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759337180, "oldest_key_time": 1759337180, "file_creation_time": 1759337255, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 158466 microseconds, and 7460 cpu microseconds.
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:47:35 np0005464891 podman[277937]: 2025-10-01 16:47:35.744119544 +0000 UTC m=+0.471108984 container remove b72253d5e251207c4be101ea552f2ba84ff430c95005de51915dbdb7deb8057e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 12:47:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:35.754 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[162d16f8-90e6-40e4-83c9-da5dee2916d5]: (4, ('Wed Oct  1 04:47:34 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf (b72253d5e251207c4be101ea552f2ba84ff430c95005de51915dbdb7deb8057e)\nb72253d5e251207c4be101ea552f2ba84ff430c95005de51915dbdb7deb8057e\nWed Oct  1 04:47:35 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf (b72253d5e251207c4be101ea552f2ba84ff430c95005de51915dbdb7deb8057e)\nb72253d5e251207c4be101ea552f2ba84ff430c95005de51915dbdb7deb8057e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:35.757 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[65338747-36cf-4edb-b1cd-1e85cfd9bbb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:35.758 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc3928aed-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:47:35 np0005464891 nova_compute[259907]: 2025-10-01 16:47:35.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:35 np0005464891 kernel: tapc3928aed-f0: left promiscuous mode
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:47:35.738200) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1334984 bytes OK
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:47:35.738231) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:47:35.762988) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:47:35.763045) EVENT_LOG_v1 {"time_micros": 1759337255763033, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:47:35.763074) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1345081, prev total WAL file size 1345081, number of live WAL files 2.
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:47:35.764566) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373533' seq:0, type:0; will stop at (end)
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1303KB)], [53(9067KB)]
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337255764604, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10619763, "oldest_snapshot_seqno": -1}
Oct  1 12:47:35 np0005464891 nova_compute[259907]: 2025-10-01 16:47:35.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:35.814 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[62aca2a5-ef3b-4641-934b-05a3ae27eccd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:35.844 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[3a4e8312-9511-49d6-91ab-13c0b3f96572]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:35.845 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[b950831e-89bb-48bc-bf6f-952ee345927b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5036 keys, 10524268 bytes, temperature: kUnknown
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337255856165, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 10524268, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10485372, "index_size": 25218, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 125013, "raw_average_key_size": 24, "raw_value_size": 10389337, "raw_average_value_size": 2063, "num_data_blocks": 1051, "num_entries": 5036, "num_filter_entries": 5036, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759337255, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:47:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:35.870 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[5109b7ef-5e30-42a5-8c83-0459e6375aed]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419405, 'reachable_time': 20860, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277953, 'error': None, 'target': 'ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:35 np0005464891 systemd[1]: run-netns-ovnmeta\x2dc3928aed\x2df713\x2d4c4c\x2d8990\x2daf3a790b20cf.mount: Deactivated successfully.
Oct  1 12:47:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:35.875 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 12:47:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:35.875 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3ef0bf-0bee-412b-a733-3f84e5d3830f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:47:35.856402) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 10524268 bytes
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:47:35.883216) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.9 rd, 114.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.9 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(15.8) write-amplify(7.9) OK, records in: 5567, records dropped: 531 output_compression: NoCompression
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:47:35.883294) EVENT_LOG_v1 {"time_micros": 1759337255883263, "job": 28, "event": "compaction_finished", "compaction_time_micros": 91638, "compaction_time_cpu_micros": 49632, "output_level": 6, "num_output_files": 1, "total_output_size": 10524268, "num_input_records": 5567, "num_output_records": 5036, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337255884795, "job": 28, "event": "table_file_deletion", "file_number": 55}
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337255890132, "job": 28, "event": "table_file_deletion", "file_number": 53}
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:47:35.764437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:47:35.890321) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:47:35.890328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:47:35.890331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:47:35.890334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:47:35 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:47:35.890336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:47:36 np0005464891 nova_compute[259907]: 2025-10-01 16:47:36.456 2 DEBUG nova.compute.manager [req-51cd98f4-9805-4d1c-9367-69d39666120f req-04654186-3282-46de-b6b9-45a7a8b7ed0b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Received event network-vif-plugged-d93019e5-5f92-4987-9169-bc28ee796c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:47:36 np0005464891 nova_compute[259907]: 2025-10-01 16:47:36.457 2 DEBUG oslo_concurrency.lockutils [req-51cd98f4-9805-4d1c-9367-69d39666120f req-04654186-3282-46de-b6b9-45a7a8b7ed0b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "5f5bee34-d022-4b27-8233-8c05297df26c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:36 np0005464891 nova_compute[259907]: 2025-10-01 16:47:36.457 2 DEBUG oslo_concurrency.lockutils [req-51cd98f4-9805-4d1c-9367-69d39666120f req-04654186-3282-46de-b6b9-45a7a8b7ed0b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "5f5bee34-d022-4b27-8233-8c05297df26c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:36 np0005464891 nova_compute[259907]: 2025-10-01 16:47:36.458 2 DEBUG oslo_concurrency.lockutils [req-51cd98f4-9805-4d1c-9367-69d39666120f req-04654186-3282-46de-b6b9-45a7a8b7ed0b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "5f5bee34-d022-4b27-8233-8c05297df26c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:36 np0005464891 nova_compute[259907]: 2025-10-01 16:47:36.458 2 DEBUG nova.compute.manager [req-51cd98f4-9805-4d1c-9367-69d39666120f req-04654186-3282-46de-b6b9-45a7a8b7ed0b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] No waiting events found dispatching network-vif-plugged-d93019e5-5f92-4987-9169-bc28ee796c9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:47:36 np0005464891 nova_compute[259907]: 2025-10-01 16:47:36.459 2 WARNING nova.compute.manager [req-51cd98f4-9805-4d1c-9367-69d39666120f req-04654186-3282-46de-b6b9-45a7a8b7ed0b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Received unexpected event network-vif-plugged-d93019e5-5f92-4987-9169-bc28ee796c9b for instance with vm_state active and task_state deleting.#033[00m
Oct  1 12:47:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:47:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/769875773' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:47:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:47:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/769875773' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:47:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1189: 321 pgs: 321 active+clean; 222 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 8.4 MiB/s wr, 143 op/s
Oct  1 12:47:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 283 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 MiB/s wr, 140 op/s
Oct  1 12:47:39 np0005464891 nova_compute[259907]: 2025-10-01 16:47:39.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:39 np0005464891 nova_compute[259907]: 2025-10-01 16:47:39.963 2 INFO nova.virt.libvirt.driver [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Deleting instance files /var/lib/nova/instances/5f5bee34-d022-4b27-8233-8c05297df26c_del#033[00m
Oct  1 12:47:39 np0005464891 nova_compute[259907]: 2025-10-01 16:47:39.965 2 INFO nova.virt.libvirt.driver [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Deletion of /var/lib/nova/instances/5f5bee34-d022-4b27-8233-8c05297df26c_del complete#033[00m
Oct  1 12:47:40 np0005464891 nova_compute[259907]: 2025-10-01 16:47:40.058 2 INFO nova.compute.manager [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Took 7.09 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 12:47:40 np0005464891 nova_compute[259907]: 2025-10-01 16:47:40.058 2 DEBUG oslo.service.loopingcall [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 12:47:40 np0005464891 nova_compute[259907]: 2025-10-01 16:47:40.059 2 DEBUG nova.compute.manager [-] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 12:47:40 np0005464891 nova_compute[259907]: 2025-10-01 16:47:40.059 2 DEBUG nova.network.neutron [-] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 12:47:40 np0005464891 nova_compute[259907]: 2025-10-01 16:47:40.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:47:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1191: 321 pgs: 321 active+clean; 468 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 32 MiB/s wr, 187 op/s
Oct  1 12:47:41 np0005464891 nova_compute[259907]: 2025-10-01 16:47:41.145 2 DEBUG nova.network.neutron [-] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:47:41 np0005464891 nova_compute[259907]: 2025-10-01 16:47:41.230 2 DEBUG nova.compute.manager [req-0f4ea81a-404f-4ad4-a311-3507ef68c90d req-7dc72f6e-5c49-4de9-b640-34e036fc7f06 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Received event network-vif-deleted-d93019e5-5f92-4987-9169-bc28ee796c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:47:41 np0005464891 nova_compute[259907]: 2025-10-01 16:47:41.230 2 INFO nova.compute.manager [req-0f4ea81a-404f-4ad4-a311-3507ef68c90d req-7dc72f6e-5c49-4de9-b640-34e036fc7f06 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Neutron deleted interface d93019e5-5f92-4987-9169-bc28ee796c9b; detaching it from the instance and deleting it from the info cache#033[00m
Oct  1 12:47:41 np0005464891 nova_compute[259907]: 2025-10-01 16:47:41.231 2 DEBUG nova.network.neutron [req-0f4ea81a-404f-4ad4-a311-3507ef68c90d req-7dc72f6e-5c49-4de9-b640-34e036fc7f06 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:47:41 np0005464891 nova_compute[259907]: 2025-10-01 16:47:41.244 2 INFO nova.compute.manager [-] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Took 1.18 seconds to deallocate network for instance.#033[00m
Oct  1 12:47:41 np0005464891 nova_compute[259907]: 2025-10-01 16:47:41.253 2 DEBUG nova.compute.manager [req-0f4ea81a-404f-4ad4-a311-3507ef68c90d req-7dc72f6e-5c49-4de9-b640-34e036fc7f06 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Detach interface failed, port_id=d93019e5-5f92-4987-9169-bc28ee796c9b, reason: Instance 5f5bee34-d022-4b27-8233-8c05297df26c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  1 12:47:41 np0005464891 nova_compute[259907]: 2025-10-01 16:47:41.293 2 DEBUG oslo_concurrency.lockutils [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:41 np0005464891 nova_compute[259907]: 2025-10-01 16:47:41.294 2 DEBUG oslo_concurrency.lockutils [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:41 np0005464891 nova_compute[259907]: 2025-10-01 16:47:41.353 2 DEBUG oslo_concurrency.processutils [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:47:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:47:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1456562958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:47:41 np0005464891 nova_compute[259907]: 2025-10-01 16:47:41.847 2 DEBUG oslo_concurrency.processutils [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:47:41 np0005464891 nova_compute[259907]: 2025-10-01 16:47:41.853 2 DEBUG nova.compute.provider_tree [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:47:41 np0005464891 nova_compute[259907]: 2025-10-01 16:47:41.873 2 DEBUG nova.scheduler.client.report [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:47:41 np0005464891 nova_compute[259907]: 2025-10-01 16:47:41.925 2 DEBUG oslo_concurrency.lockutils [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:41 np0005464891 nova_compute[259907]: 2025-10-01 16:47:41.977 2 INFO nova.scheduler.client.report [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Deleted allocations for instance 5f5bee34-d022-4b27-8233-8c05297df26c#033[00m
Oct  1 12:47:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:47:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:47:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:47:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:47:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:47:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:47:42 np0005464891 nova_compute[259907]: 2025-10-01 16:47:42.079 2 DEBUG oslo_concurrency.lockutils [None req-84580e12-c32e-44d9-8399-e5a4d420507a 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "5f5bee34-d022-4b27-8233-8c05297df26c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1192: 321 pgs: 321 active+clean; 680 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 49 MiB/s wr, 198 op/s
Oct  1 12:47:44 np0005464891 nova_compute[259907]: 2025-10-01 16:47:44.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 900 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 67 MiB/s wr, 193 op/s
Oct  1 12:47:45 np0005464891 nova_compute[259907]: 2025-10-01 16:47:45.502 2 DEBUG oslo_concurrency.lockutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Acquiring lock "b9ff95de-17ee-4a78-822e-f4c081509b00" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:45 np0005464891 nova_compute[259907]: 2025-10-01 16:47:45.503 2 DEBUG oslo_concurrency.lockutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "b9ff95de-17ee-4a78-822e-f4c081509b00" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:45 np0005464891 nova_compute[259907]: 2025-10-01 16:47:45.528 2 DEBUG nova.compute.manager [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 12:47:45 np0005464891 nova_compute[259907]: 2025-10-01 16:47:45.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:47:45 np0005464891 nova_compute[259907]: 2025-10-01 16:47:45.593 2 DEBUG oslo_concurrency.lockutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:45 np0005464891 nova_compute[259907]: 2025-10-01 16:47:45.594 2 DEBUG oslo_concurrency.lockutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:45 np0005464891 nova_compute[259907]: 2025-10-01 16:47:45.601 2 DEBUG nova.virt.hardware [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 12:47:45 np0005464891 nova_compute[259907]: 2025-10-01 16:47:45.601 2 INFO nova.compute.claims [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 12:47:45 np0005464891 nova_compute[259907]: 2025-10-01 16:47:45.714 2 DEBUG oslo_concurrency.processutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:47:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:47:46 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/954128438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:47:46 np0005464891 nova_compute[259907]: 2025-10-01 16:47:46.166 2 DEBUG oslo_concurrency.processutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:47:46 np0005464891 nova_compute[259907]: 2025-10-01 16:47:46.173 2 DEBUG nova.compute.provider_tree [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:47:46 np0005464891 nova_compute[259907]: 2025-10-01 16:47:46.189 2 DEBUG nova.scheduler.client.report [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:47:46 np0005464891 nova_compute[259907]: 2025-10-01 16:47:46.213 2 DEBUG oslo_concurrency.lockutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:46 np0005464891 nova_compute[259907]: 2025-10-01 16:47:46.214 2 DEBUG nova.compute.manager [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 12:47:46 np0005464891 nova_compute[259907]: 2025-10-01 16:47:46.274 2 DEBUG nova.compute.manager [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 12:47:46 np0005464891 nova_compute[259907]: 2025-10-01 16:47:46.275 2 DEBUG nova.network.neutron [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 12:47:46 np0005464891 nova_compute[259907]: 2025-10-01 16:47:46.296 2 INFO nova.virt.libvirt.driver [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 12:47:46 np0005464891 nova_compute[259907]: 2025-10-01 16:47:46.312 2 DEBUG nova.compute.manager [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 12:47:46 np0005464891 nova_compute[259907]: 2025-10-01 16:47:46.399 2 DEBUG nova.compute.manager [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 12:47:46 np0005464891 nova_compute[259907]: 2025-10-01 16:47:46.401 2 DEBUG nova.virt.libvirt.driver [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 12:47:46 np0005464891 nova_compute[259907]: 2025-10-01 16:47:46.402 2 INFO nova.virt.libvirt.driver [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Creating image(s)#033[00m
Oct  1 12:47:46 np0005464891 nova_compute[259907]: 2025-10-01 16:47:46.425 2 DEBUG nova.storage.rbd_utils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] rbd image b9ff95de-17ee-4a78-822e-f4c081509b00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:47:46 np0005464891 nova_compute[259907]: 2025-10-01 16:47:46.453 2 DEBUG nova.storage.rbd_utils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] rbd image b9ff95de-17ee-4a78-822e-f4c081509b00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:47:46 np0005464891 nova_compute[259907]: 2025-10-01 16:47:46.474 2 DEBUG nova.storage.rbd_utils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] rbd image b9ff95de-17ee-4a78-822e-f4c081509b00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:47:46 np0005464891 nova_compute[259907]: 2025-10-01 16:47:46.478 2 DEBUG oslo_concurrency.processutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:47:46 np0005464891 nova_compute[259907]: 2025-10-01 16:47:46.499 2 DEBUG nova.policy [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '85daab3d4ec44eb885d793a27894aab3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b9a68f4cae7c4848af4537abf8f3a937', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 12:47:46 np0005464891 nova_compute[259907]: 2025-10-01 16:47:46.551 2 DEBUG oslo_concurrency.processutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:47:46 np0005464891 nova_compute[259907]: 2025-10-01 16:47:46.552 2 DEBUG oslo_concurrency.lockutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Acquiring lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:46 np0005464891 nova_compute[259907]: 2025-10-01 16:47:46.552 2 DEBUG oslo_concurrency.lockutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:46 np0005464891 nova_compute[259907]: 2025-10-01 16:47:46.552 2 DEBUG oslo_concurrency.lockutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:46 np0005464891 nova_compute[259907]: 2025-10-01 16:47:46.592 2 DEBUG nova.storage.rbd_utils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] rbd image b9ff95de-17ee-4a78-822e-f4c081509b00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:47:46 np0005464891 nova_compute[259907]: 2025-10-01 16:47:46.596 2 DEBUG oslo_concurrency.processutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa b9ff95de-17ee-4a78-822e-f4c081509b00_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:47:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1194: 321 pgs: 321 active+clean; 1.0 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 77 MiB/s wr, 266 op/s
Oct  1 12:47:47 np0005464891 nova_compute[259907]: 2025-10-01 16:47:47.007 2 DEBUG oslo_concurrency.processutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa b9ff95de-17ee-4a78-822e-f4c081509b00_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:47:47 np0005464891 nova_compute[259907]: 2025-10-01 16:47:47.085 2 DEBUG nova.storage.rbd_utils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] resizing rbd image b9ff95de-17ee-4a78-822e-f4c081509b00_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  1 12:47:47 np0005464891 nova_compute[259907]: 2025-10-01 16:47:47.165 2 DEBUG nova.network.neutron [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Successfully created port: 74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 12:47:47 np0005464891 nova_compute[259907]: 2025-10-01 16:47:47.229 2 DEBUG nova.objects.instance [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lazy-loading 'migration_context' on Instance uuid b9ff95de-17ee-4a78-822e-f4c081509b00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:47:47 np0005464891 nova_compute[259907]: 2025-10-01 16:47:47.244 2 DEBUG nova.virt.libvirt.driver [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  1 12:47:47 np0005464891 nova_compute[259907]: 2025-10-01 16:47:47.244 2 DEBUG nova.virt.libvirt.driver [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Ensure instance console log exists: /var/lib/nova/instances/b9ff95de-17ee-4a78-822e-f4c081509b00/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 12:47:47 np0005464891 nova_compute[259907]: 2025-10-01 16:47:47.245 2 DEBUG oslo_concurrency.lockutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:47 np0005464891 nova_compute[259907]: 2025-10-01 16:47:47.246 2 DEBUG oslo_concurrency.lockutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:47 np0005464891 nova_compute[259907]: 2025-10-01 16:47:47.246 2 DEBUG oslo_concurrency.lockutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Oct  1 12:47:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Oct  1 12:47:47 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Oct  1 12:47:47 np0005464891 podman[278166]: 2025-10-01 16:47:47.963378862 +0000 UTC m=+0.070446857 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct  1 12:47:48 np0005464891 nova_compute[259907]: 2025-10-01 16:47:48.018 2 DEBUG nova.network.neutron [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Successfully updated port: 74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 12:47:48 np0005464891 nova_compute[259907]: 2025-10-01 16:47:48.036 2 DEBUG oslo_concurrency.lockutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Acquiring lock "refresh_cache-b9ff95de-17ee-4a78-822e-f4c081509b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:47:48 np0005464891 nova_compute[259907]: 2025-10-01 16:47:48.037 2 DEBUG oslo_concurrency.lockutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Acquired lock "refresh_cache-b9ff95de-17ee-4a78-822e-f4c081509b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:47:48 np0005464891 nova_compute[259907]: 2025-10-01 16:47:48.037 2 DEBUG nova.network.neutron [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 12:47:48 np0005464891 nova_compute[259907]: 2025-10-01 16:47:48.139 2 DEBUG nova.compute.manager [req-26a16917-ebb1-4614-b3df-ec34961ec674 req-452aa37e-8e43-40e3-9bb5-00287e5a6ecc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Received event network-changed-74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:47:48 np0005464891 nova_compute[259907]: 2025-10-01 16:47:48.140 2 DEBUG nova.compute.manager [req-26a16917-ebb1-4614-b3df-ec34961ec674 req-452aa37e-8e43-40e3-9bb5-00287e5a6ecc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Refreshing instance network info cache due to event network-changed-74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:47:48 np0005464891 nova_compute[259907]: 2025-10-01 16:47:48.141 2 DEBUG oslo_concurrency.lockutils [req-26a16917-ebb1-4614-b3df-ec34961ec674 req-452aa37e-8e43-40e3-9bb5-00287e5a6ecc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-b9ff95de-17ee-4a78-822e-f4c081509b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:47:48 np0005464891 nova_compute[259907]: 2025-10-01 16:47:48.188 2 DEBUG nova.network.neutron [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 12:47:48 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:48.281 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:47:48 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:48.282 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 12:47:48 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:48.283 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:47:48 np0005464891 nova_compute[259907]: 2025-10-01 16:47:48.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1196: 321 pgs: 321 active+clean; 772 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 155 KiB/s rd, 87 MiB/s wr, 270 op/s
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.016 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759337254.015212, 5f5bee34-d022-4b27-8233-8c05297df26c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.016 2 INFO nova.compute.manager [-] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] VM Stopped (Lifecycle Event)#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.038 2 DEBUG nova.compute.manager [None req-d5619a27-f934-4efa-89b2-9d61c81cf0d9 - - - - - -] [instance: 5f5bee34-d022-4b27-8233-8c05297df26c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.209 2 DEBUG nova.network.neutron [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Updating instance_info_cache with network_info: [{"id": "74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4", "address": "fa:16:3e:f5:2e:e2", "network": {"id": "c3928aed-f713-4c4c-8990-af3a790b20cf", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1351204213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9a68f4cae7c4848af4537abf8f3a937", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74f6fe8c-4f", "ovs_interfaceid": "74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.244 2 DEBUG oslo_concurrency.lockutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Releasing lock "refresh_cache-b9ff95de-17ee-4a78-822e-f4c081509b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.245 2 DEBUG nova.compute.manager [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Instance network_info: |[{"id": "74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4", "address": "fa:16:3e:f5:2e:e2", "network": {"id": "c3928aed-f713-4c4c-8990-af3a790b20cf", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1351204213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9a68f4cae7c4848af4537abf8f3a937", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74f6fe8c-4f", "ovs_interfaceid": "74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.245 2 DEBUG oslo_concurrency.lockutils [req-26a16917-ebb1-4614-b3df-ec34961ec674 req-452aa37e-8e43-40e3-9bb5-00287e5a6ecc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-b9ff95de-17ee-4a78-822e-f4c081509b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.245 2 DEBUG nova.network.neutron [req-26a16917-ebb1-4614-b3df-ec34961ec674 req-452aa37e-8e43-40e3-9bb5-00287e5a6ecc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Refreshing network info cache for port 74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.249 2 DEBUG nova.virt.libvirt.driver [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Start _get_guest_xml network_info=[{"id": "74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4", "address": "fa:16:3e:f5:2e:e2", "network": {"id": "c3928aed-f713-4c4c-8990-af3a790b20cf", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1351204213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9a68f4cae7c4848af4537abf8f3a937", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74f6fe8c-4f", "ovs_interfaceid": "74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'image_id': 'f01c1e7c-fea3-4433-a44a-d71153552c78'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.255 2 WARNING nova.virt.libvirt.driver [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.266 2 DEBUG nova.virt.libvirt.host [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.267 2 DEBUG nova.virt.libvirt.host [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.272 2 DEBUG nova.virt.libvirt.host [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.273 2 DEBUG nova.virt.libvirt.host [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.273 2 DEBUG nova.virt.libvirt.driver [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.274 2 DEBUG nova.virt.hardware [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.274 2 DEBUG nova.virt.hardware [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.275 2 DEBUG nova.virt.hardware [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.275 2 DEBUG nova.virt.hardware [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.276 2 DEBUG nova.virt.hardware [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.276 2 DEBUG nova.virt.hardware [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.277 2 DEBUG nova.virt.hardware [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.277 2 DEBUG nova.virt.hardware [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.277 2 DEBUG nova.virt.hardware [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.278 2 DEBUG nova.virt.hardware [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.278 2 DEBUG nova.virt.hardware [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.283 2 DEBUG oslo_concurrency.processutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:47:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:47:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1679583998' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.735 2 DEBUG oslo_concurrency.processutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.771 2 DEBUG nova.storage.rbd_utils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] rbd image b9ff95de-17ee-4a78-822e-f4c081509b00_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:47:49 np0005464891 nova_compute[259907]: 2025-10-01 16:47:49.777 2 DEBUG oslo_concurrency.processutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:47:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:47:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3187443000' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.333 2 DEBUG oslo_concurrency.processutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.337 2 DEBUG nova.virt.libvirt.vif [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:47:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-569462946',display_name='tempest-VolumesActionsTest-instance-569462946',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-569462946',id=5,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9a68f4cae7c4848af4537abf8f3a937',ramdisk_id='',reservation_id='r-d494d8d4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-789764846',owner_user_name='tempest-VolumesActionsTest-789764846-projec
t-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:47:46Z,user_data=None,user_id='85daab3d4ec44eb885d793a27894aab3',uuid=b9ff95de-17ee-4a78-822e-f4c081509b00,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4", "address": "fa:16:3e:f5:2e:e2", "network": {"id": "c3928aed-f713-4c4c-8990-af3a790b20cf", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1351204213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9a68f4cae7c4848af4537abf8f3a937", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74f6fe8c-4f", "ovs_interfaceid": "74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.338 2 DEBUG nova.network.os_vif_util [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Converting VIF {"id": "74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4", "address": "fa:16:3e:f5:2e:e2", "network": {"id": "c3928aed-f713-4c4c-8990-af3a790b20cf", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1351204213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9a68f4cae7c4848af4537abf8f3a937", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74f6fe8c-4f", "ovs_interfaceid": "74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.339 2 DEBUG nova.network.os_vif_util [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:2e:e2,bridge_name='br-int',has_traffic_filtering=True,id=74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4,network=Network(c3928aed-f713-4c4c-8990-af3a790b20cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74f6fe8c-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.341 2 DEBUG nova.objects.instance [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lazy-loading 'pci_devices' on Instance uuid b9ff95de-17ee-4a78-822e-f4c081509b00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.361 2 DEBUG nova.virt.libvirt.driver [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] End _get_guest_xml xml=<domain type="kvm">
Oct  1 12:47:50 np0005464891 nova_compute[259907]:  <uuid>b9ff95de-17ee-4a78-822e-f4c081509b00</uuid>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:  <name>instance-00000005</name>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <nova:name>tempest-VolumesActionsTest-instance-569462946</nova:name>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 16:47:49</nova:creationTime>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 12:47:50 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:        <nova:user uuid="85daab3d4ec44eb885d793a27894aab3">tempest-VolumesActionsTest-789764846-project-member</nova:user>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:        <nova:project uuid="b9a68f4cae7c4848af4537abf8f3a937">tempest-VolumesActionsTest-789764846</nova:project>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <nova:root type="image" uuid="f01c1e7c-fea3-4433-a44a-d71153552c78"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:        <nova:port uuid="74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4">
Oct  1 12:47:50 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <system>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <entry name="serial">b9ff95de-17ee-4a78-822e-f4c081509b00</entry>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <entry name="uuid">b9ff95de-17ee-4a78-822e-f4c081509b00</entry>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    </system>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:  <os>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:  </clock>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/b9ff95de-17ee-4a78-822e-f4c081509b00_disk">
Oct  1 12:47:50 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:47:50 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/b9ff95de-17ee-4a78-822e-f4c081509b00_disk.config">
Oct  1 12:47:50 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:47:50 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:f5:2e:e2"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <target dev="tap74f6fe8c-4f"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/b9ff95de-17ee-4a78-822e-f4c081509b00/console.log" append="off"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    </serial>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <video>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 12:47:50 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 12:47:50 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:47:50 np0005464891 nova_compute[259907]: </domain>
Oct  1 12:47:50 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.364 2 DEBUG nova.compute.manager [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Preparing to wait for external event network-vif-plugged-74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.364 2 DEBUG oslo_concurrency.lockutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Acquiring lock "b9ff95de-17ee-4a78-822e-f4c081509b00-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.365 2 DEBUG oslo_concurrency.lockutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "b9ff95de-17ee-4a78-822e-f4c081509b00-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.365 2 DEBUG oslo_concurrency.lockutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "b9ff95de-17ee-4a78-822e-f4c081509b00-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.366 2 DEBUG nova.virt.libvirt.vif [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:47:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-569462946',display_name='tempest-VolumesActionsTest-instance-569462946',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-569462946',id=5,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9a68f4cae7c4848af4537abf8f3a937',ramdisk_id='',reservation_id='r-d494d8d4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-789764846',owner_user_name='tempest-VolumesActionsTest-789764
846-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:47:46Z,user_data=None,user_id='85daab3d4ec44eb885d793a27894aab3',uuid=b9ff95de-17ee-4a78-822e-f4c081509b00,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4", "address": "fa:16:3e:f5:2e:e2", "network": {"id": "c3928aed-f713-4c4c-8990-af3a790b20cf", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1351204213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9a68f4cae7c4848af4537abf8f3a937", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74f6fe8c-4f", "ovs_interfaceid": "74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.366 2 DEBUG nova.network.os_vif_util [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Converting VIF {"id": "74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4", "address": "fa:16:3e:f5:2e:e2", "network": {"id": "c3928aed-f713-4c4c-8990-af3a790b20cf", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1351204213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9a68f4cae7c4848af4537abf8f3a937", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74f6fe8c-4f", "ovs_interfaceid": "74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.367 2 DEBUG nova.network.os_vif_util [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:2e:e2,bridge_name='br-int',has_traffic_filtering=True,id=74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4,network=Network(c3928aed-f713-4c4c-8990-af3a790b20cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74f6fe8c-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.368 2 DEBUG os_vif [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:2e:e2,bridge_name='br-int',has_traffic_filtering=True,id=74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4,network=Network(c3928aed-f713-4c4c-8990-af3a790b20cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74f6fe8c-4f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.369 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.370 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.374 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74f6fe8c-4f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.375 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap74f6fe8c-4f, col_values=(('external_ids', {'iface-id': '74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f5:2e:e2', 'vm-uuid': 'b9ff95de-17ee-4a78-822e-f4c081509b00'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:50 np0005464891 NetworkManager[44940]: <info>  [1759337270.3783] manager: (tap74f6fe8c-4f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.389 2 INFO os_vif [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:2e:e2,bridge_name='br-int',has_traffic_filtering=True,id=74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4,network=Network(c3928aed-f713-4c4c-8990-af3a790b20cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74f6fe8c-4f')#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.449 2 DEBUG nova.virt.libvirt.driver [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.450 2 DEBUG nova.virt.libvirt.driver [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.450 2 DEBUG nova.virt.libvirt.driver [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] No VIF found with MAC fa:16:3e:f5:2e:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.451 2 INFO nova.virt.libvirt.driver [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Using config drive#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.473 2 DEBUG nova.storage.rbd_utils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] rbd image b9ff95de-17ee-4a78-822e-f4c081509b00_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.537 2 DEBUG nova.network.neutron [req-26a16917-ebb1-4614-b3df-ec34961ec674 req-452aa37e-8e43-40e3-9bb5-00287e5a6ecc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Updated VIF entry in instance network info cache for port 74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.537 2 DEBUG nova.network.neutron [req-26a16917-ebb1-4614-b3df-ec34961ec674 req-452aa37e-8e43-40e3-9bb5-00287e5a6ecc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Updating instance_info_cache with network_info: [{"id": "74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4", "address": "fa:16:3e:f5:2e:e2", "network": {"id": "c3928aed-f713-4c4c-8990-af3a790b20cf", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1351204213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9a68f4cae7c4848af4537abf8f3a937", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74f6fe8c-4f", "ovs_interfaceid": "74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:47:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.592 2 DEBUG oslo_concurrency.lockutils [req-26a16917-ebb1-4614-b3df-ec34961ec674 req-452aa37e-8e43-40e3-9bb5-00287e5a6ecc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-b9ff95de-17ee-4a78-822e-f4c081509b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.792 2 INFO nova.virt.libvirt.driver [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Creating config drive at /var/lib/nova/instances/b9ff95de-17ee-4a78-822e-f4c081509b00/disk.config#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.801 2 DEBUG oslo_concurrency.processutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b9ff95de-17ee-4a78-822e-f4c081509b00/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzqn2lg1_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:47:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1197: 321 pgs: 321 active+clean; 134 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 132 KiB/s rd, 67 MiB/s wr, 240 op/s
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.935 2 DEBUG oslo_concurrency.processutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b9ff95de-17ee-4a78-822e-f4c081509b00/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzqn2lg1_" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.971 2 DEBUG nova.storage.rbd_utils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] rbd image b9ff95de-17ee-4a78-822e-f4c081509b00_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:47:50 np0005464891 nova_compute[259907]: 2025-10-01 16:47:50.977 2 DEBUG oslo_concurrency.processutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b9ff95de-17ee-4a78-822e-f4c081509b00/disk.config b9ff95de-17ee-4a78-822e-f4c081509b00_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:47:51 np0005464891 nova_compute[259907]: 2025-10-01 16:47:51.163 2 DEBUG oslo_concurrency.processutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b9ff95de-17ee-4a78-822e-f4c081509b00/disk.config b9ff95de-17ee-4a78-822e-f4c081509b00_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:47:51 np0005464891 nova_compute[259907]: 2025-10-01 16:47:51.164 2 INFO nova.virt.libvirt.driver [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Deleting local config drive /var/lib/nova/instances/b9ff95de-17ee-4a78-822e-f4c081509b00/disk.config because it was imported into RBD.#033[00m
Oct  1 12:47:51 np0005464891 kernel: tap74f6fe8c-4f: entered promiscuous mode
Oct  1 12:47:51 np0005464891 NetworkManager[44940]: <info>  [1759337271.2229] manager: (tap74f6fe8c-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Oct  1 12:47:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:47:51Z|00053|binding|INFO|Claiming lport 74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4 for this chassis.
Oct  1 12:47:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:47:51Z|00054|binding|INFO|74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4: Claiming fa:16:3e:f5:2e:e2 10.100.0.14
Oct  1 12:47:51 np0005464891 nova_compute[259907]: 2025-10-01 16:47:51.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.237 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:2e:e2 10.100.0.14'], port_security=['fa:16:3e:f5:2e:e2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b9ff95de-17ee-4a78-822e-f4c081509b00', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3928aed-f713-4c4c-8990-af3a790b20cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9a68f4cae7c4848af4537abf8f3a937', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1c094c91-85d3-4eaa-9f95-e39d330e2d75', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec106bc6-db39-4f09-a3c7-4a345f13bd23, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.240 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4 in datapath c3928aed-f713-4c4c-8990-af3a790b20cf bound to our chassis#033[00m
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.244 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c3928aed-f713-4c4c-8990-af3a790b20cf#033[00m
Oct  1 12:47:51 np0005464891 nova_compute[259907]: 2025-10-01 16:47:51.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:51 np0005464891 nova_compute[259907]: 2025-10-01 16:47:51.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:47:51Z|00055|binding|INFO|Setting lport 74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4 up in Southbound
Oct  1 12:47:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:47:51Z|00056|binding|INFO|Setting lport 74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4 ovn-installed in OVS
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.269 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[38e6d57a-fc04-48cd-b43f-a91d876869fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.271 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc3928aed-f1 in ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.273 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc3928aed-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.274 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[0253df8d-347b-415e-839b-3fd41690854b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.275 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9c86d8-ff16-4470-b035-0a57f545b151]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:51 np0005464891 systemd-machined[214891]: New machine qemu-5-instance-00000005.
Oct  1 12:47:51 np0005464891 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.299 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[1fcf9192-7a5e-438b-b586-32422335f459]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:51 np0005464891 systemd-udevd[278323]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:47:51 np0005464891 NetworkManager[44940]: <info>  [1759337271.3261] device (tap74f6fe8c-4f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:47:51 np0005464891 NetworkManager[44940]: <info>  [1759337271.3276] device (tap74f6fe8c-4f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.326 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[cb8a9145-7a0a-4b32-8675-b15e4a5ed6ff]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.357 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[3ceb6c48-1197-4dec-9eae-c28bee53f367]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:51 np0005464891 systemd-udevd[278325]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.365 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[78248c8e-e0fa-4c22-b405-56c6896e95db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:51 np0005464891 NetworkManager[44940]: <info>  [1759337271.3663] manager: (tapc3928aed-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/41)
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.401 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[135f0b81-dcd5-4f17-845b-b279d3f79b65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.403 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[613d93fc-5266-4f9c-a8b5-53b6b8447fee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:51 np0005464891 NetworkManager[44940]: <info>  [1759337271.4282] device (tapc3928aed-f0): carrier: link connected
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.434 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[61978672-9111-4770-a71f-af3bed871014]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.452 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[e613dc93-3278-4085-b0be-366f9619a206]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc3928aed-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:d6:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422000, 'reachable_time': 22416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278353, 'error': None, 'target': 'ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.475 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[af228cc2-bd9f-43b0-b1ba-76ebea9d0e56]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:d62f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 422000, 'tstamp': 422000}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278354, 'error': None, 'target': 'ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.500 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[03270f56-5e22-4ff4-81bf-7e97c6038eee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc3928aed-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:d6:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422000, 'reachable_time': 22416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278362, 'error': None, 'target': 'ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.542 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[9c11bb4f-9d5c-4692-bd0c-6178eb352a8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.618 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[22e72405-ae38-4439-a3cb-aca04de09214]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.622 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc3928aed-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.622 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.623 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc3928aed-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:47:51 np0005464891 nova_compute[259907]: 2025-10-01 16:47:51.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:51 np0005464891 kernel: tapc3928aed-f0: entered promiscuous mode
Oct  1 12:47:51 np0005464891 NetworkManager[44940]: <info>  [1759337271.6592] manager: (tapc3928aed-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.663 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc3928aed-f0, col_values=(('external_ids', {'iface-id': '367ba185-0566-4b48-9fbb-85d1655d5f0a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:47:51 np0005464891 nova_compute[259907]: 2025-10-01 16:47:51.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:47:51Z|00057|binding|INFO|Releasing lport 367ba185-0566-4b48-9fbb-85d1655d5f0a from this chassis (sb_readonly=0)
Oct  1 12:47:51 np0005464891 nova_compute[259907]: 2025-10-01 16:47:51.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.668 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c3928aed-f713-4c4c-8990-af3a790b20cf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c3928aed-f713-4c4c-8990-af3a790b20cf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.670 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[97eaa866-c3e4-4593-bfd5-d40dbcbd8359]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.671 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-c3928aed-f713-4c4c-8990-af3a790b20cf
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/c3928aed-f713-4c4c-8990-af3a790b20cf.pid.haproxy
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID c3928aed-f713-4c4c-8990-af3a790b20cf
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 12:47:51 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:51.672 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf', 'env', 'PROCESS_TAG=haproxy-c3928aed-f713-4c4c-8990-af3a790b20cf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c3928aed-f713-4c4c-8990-af3a790b20cf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 12:47:51 np0005464891 nova_compute[259907]: 2025-10-01 16:47:51.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:52 np0005464891 nova_compute[259907]: 2025-10-01 16:47:52.068 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337272.0677187, b9ff95de-17ee-4a78-822e-f4c081509b00 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:47:52 np0005464891 nova_compute[259907]: 2025-10-01 16:47:52.068 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] VM Started (Lifecycle Event)#033[00m
Oct  1 12:47:52 np0005464891 podman[278429]: 2025-10-01 16:47:52.07626174 +0000 UTC m=+0.056374008 container create 8151a636bac441a95a544a9639f9b008b8ecb90f97196afb3946cb0e7f51ced3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  1 12:47:52 np0005464891 nova_compute[259907]: 2025-10-01 16:47:52.096 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:47:52 np0005464891 nova_compute[259907]: 2025-10-01 16:47:52.104 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337272.067948, b9ff95de-17ee-4a78-822e-f4c081509b00 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:47:52 np0005464891 nova_compute[259907]: 2025-10-01 16:47:52.104 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] VM Paused (Lifecycle Event)#033[00m
Oct  1 12:47:52 np0005464891 systemd[1]: Started libpod-conmon-8151a636bac441a95a544a9639f9b008b8ecb90f97196afb3946cb0e7f51ced3.scope.
Oct  1 12:47:52 np0005464891 nova_compute[259907]: 2025-10-01 16:47:52.124 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:47:52 np0005464891 nova_compute[259907]: 2025-10-01 16:47:52.128 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:47:52 np0005464891 nova_compute[259907]: 2025-10-01 16:47:52.143 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:47:52 np0005464891 podman[278429]: 2025-10-01 16:47:52.047559027 +0000 UTC m=+0.027671335 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 12:47:52 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:47:52 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea91543c4db9577d662f27519186daa6776e3f88aa1b677610e7478c8a2ab22/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 12:47:52 np0005464891 podman[278429]: 2025-10-01 16:47:52.182251968 +0000 UTC m=+0.162364236 container init 8151a636bac441a95a544a9639f9b008b8ecb90f97196afb3946cb0e7f51ced3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  1 12:47:52 np0005464891 podman[278429]: 2025-10-01 16:47:52.189454127 +0000 UTC m=+0.169566385 container start 8151a636bac441a95a544a9639f9b008b8ecb90f97196afb3946cb0e7f51ced3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:47:52 np0005464891 neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf[278444]: [NOTICE]   (278448) : New worker (278450) forked
Oct  1 12:47:52 np0005464891 neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf[278444]: [NOTICE]   (278448) : Loading success.
Oct  1 12:47:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1198: 321 pgs: 321 active+clean; 134 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 45 MiB/s wr, 231 op/s
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.254 2 DEBUG oslo_concurrency.lockutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquiring lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.255 2 DEBUG oslo_concurrency.lockutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.277 2 DEBUG nova.compute.manager [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.375 2 DEBUG oslo_concurrency.lockutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.376 2 DEBUG oslo_concurrency.lockutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.385 2 DEBUG nova.virt.hardware [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.385 2 INFO nova.compute.claims [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.438 2 DEBUG nova.compute.manager [req-1bffd98b-9e09-4402-9cfc-37f433ce1a2e req-c169d701-c8fc-4f11-bdac-7ec8c6c4242c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Received event network-vif-plugged-74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.439 2 DEBUG oslo_concurrency.lockutils [req-1bffd98b-9e09-4402-9cfc-37f433ce1a2e req-c169d701-c8fc-4f11-bdac-7ec8c6c4242c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "b9ff95de-17ee-4a78-822e-f4c081509b00-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.439 2 DEBUG oslo_concurrency.lockutils [req-1bffd98b-9e09-4402-9cfc-37f433ce1a2e req-c169d701-c8fc-4f11-bdac-7ec8c6c4242c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "b9ff95de-17ee-4a78-822e-f4c081509b00-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.439 2 DEBUG oslo_concurrency.lockutils [req-1bffd98b-9e09-4402-9cfc-37f433ce1a2e req-c169d701-c8fc-4f11-bdac-7ec8c6c4242c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "b9ff95de-17ee-4a78-822e-f4c081509b00-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.440 2 DEBUG nova.compute.manager [req-1bffd98b-9e09-4402-9cfc-37f433ce1a2e req-c169d701-c8fc-4f11-bdac-7ec8c6c4242c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Processing event network-vif-plugged-74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.440 2 DEBUG nova.compute.manager [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.444 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337274.4446473, b9ff95de-17ee-4a78-822e-f4c081509b00 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.445 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] VM Resumed (Lifecycle Event)#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.446 2 DEBUG nova.virt.libvirt.driver [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.452 2 INFO nova.virt.libvirt.driver [-] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Instance spawned successfully.#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.453 2 DEBUG nova.virt.libvirt.driver [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.488 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.494 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.497 2 DEBUG nova.virt.libvirt.driver [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.498 2 DEBUG nova.virt.libvirt.driver [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.499 2 DEBUG nova.virt.libvirt.driver [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.499 2 DEBUG nova.virt.libvirt.driver [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.500 2 DEBUG nova.virt.libvirt.driver [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.500 2 DEBUG nova.virt.libvirt.driver [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.521 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.555 2 INFO nova.compute.manager [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Took 8.15 seconds to spawn the instance on the hypervisor.#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.556 2 DEBUG nova.compute.manager [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.561 2 DEBUG oslo_concurrency.processutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.651 2 INFO nova.compute.manager [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Took 9.08 seconds to build instance.#033[00m
Oct  1 12:47:54 np0005464891 nova_compute[259907]: 2025-10-01 16:47:54.670 2 DEBUG oslo_concurrency.lockutils [None req-5d48ddb4-f513-4c7b-ae92-c3334790ce06 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "b9ff95de-17ee-4a78-822e-f4c081509b00" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1199: 321 pgs: 321 active+clean; 146 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 590 KiB/s rd, 24 MiB/s wr, 238 op/s
Oct  1 12:47:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:47:55 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2501851856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.091 2 DEBUG oslo_concurrency.processutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.101 2 DEBUG nova.compute.provider_tree [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.118 2 DEBUG nova.scheduler.client.report [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.147 2 DEBUG oslo_concurrency.lockutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.148 2 DEBUG nova.compute.manager [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.204 2 DEBUG nova.compute.manager [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.205 2 DEBUG nova.network.neutron [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.225 2 INFO nova.virt.libvirt.driver [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.245 2 DEBUG nova.compute.manager [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.316 2 DEBUG nova.compute.manager [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.318 2 DEBUG nova.virt.libvirt.driver [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.319 2 INFO nova.virt.libvirt.driver [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Creating image(s)#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.342 2 DEBUG nova.storage.rbd_utils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] rbd image 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.369 2 DEBUG nova.storage.rbd_utils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] rbd image 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.390 2 DEBUG nova.storage.rbd_utils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] rbd image 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.394 2 DEBUG oslo_concurrency.processutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.452 2 DEBUG oslo_concurrency.processutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.453 2 DEBUG oslo_concurrency.lockutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquiring lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.454 2 DEBUG oslo_concurrency.lockutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.454 2 DEBUG oslo_concurrency.lockutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.483 2 DEBUG nova.storage.rbd_utils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] rbd image 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.489 2 DEBUG oslo_concurrency.processutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:47:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:47:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Oct  1 12:47:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Oct  1 12:47:55 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.867 2 DEBUG oslo_concurrency.processutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.379s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.930 2 DEBUG nova.storage.rbd_utils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] resizing rbd image 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  1 12:47:55 np0005464891 nova_compute[259907]: 2025-10-01 16:47:55.966 2 DEBUG nova.policy [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0a821557545f49ad9c15eee1cf0bd82b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f395084b84f48d182c3be9d7961475e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 12:47:56 np0005464891 nova_compute[259907]: 2025-10-01 16:47:56.041 2 DEBUG nova.objects.instance [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lazy-loading 'migration_context' on Instance uuid 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:47:56 np0005464891 nova_compute[259907]: 2025-10-01 16:47:56.058 2 DEBUG nova.virt.libvirt.driver [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  1 12:47:56 np0005464891 nova_compute[259907]: 2025-10-01 16:47:56.058 2 DEBUG nova.virt.libvirt.driver [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Ensure instance console log exists: /var/lib/nova/instances/4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 12:47:56 np0005464891 nova_compute[259907]: 2025-10-01 16:47:56.059 2 DEBUG oslo_concurrency.lockutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:56 np0005464891 nova_compute[259907]: 2025-10-01 16:47:56.060 2 DEBUG oslo_concurrency.lockutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:56 np0005464891 nova_compute[259907]: 2025-10-01 16:47:56.060 2 DEBUG oslo_concurrency.lockutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:56 np0005464891 nova_compute[259907]: 2025-10-01 16:47:56.545 2 DEBUG nova.compute.manager [req-9fd2ce1a-d1bc-44b1-8560-b36c3a707818 req-b0a136f0-3bd6-4bfa-bc27-bfcb69b915ac af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Received event network-vif-plugged-74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:47:56 np0005464891 nova_compute[259907]: 2025-10-01 16:47:56.546 2 DEBUG oslo_concurrency.lockutils [req-9fd2ce1a-d1bc-44b1-8560-b36c3a707818 req-b0a136f0-3bd6-4bfa-bc27-bfcb69b915ac af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "b9ff95de-17ee-4a78-822e-f4c081509b00-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:56 np0005464891 nova_compute[259907]: 2025-10-01 16:47:56.546 2 DEBUG oslo_concurrency.lockutils [req-9fd2ce1a-d1bc-44b1-8560-b36c3a707818 req-b0a136f0-3bd6-4bfa-bc27-bfcb69b915ac af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "b9ff95de-17ee-4a78-822e-f4c081509b00-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:56 np0005464891 nova_compute[259907]: 2025-10-01 16:47:56.546 2 DEBUG oslo_concurrency.lockutils [req-9fd2ce1a-d1bc-44b1-8560-b36c3a707818 req-b0a136f0-3bd6-4bfa-bc27-bfcb69b915ac af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "b9ff95de-17ee-4a78-822e-f4c081509b00-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:56 np0005464891 nova_compute[259907]: 2025-10-01 16:47:56.547 2 DEBUG nova.compute.manager [req-9fd2ce1a-d1bc-44b1-8560-b36c3a707818 req-b0a136f0-3bd6-4bfa-bc27-bfcb69b915ac af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] No waiting events found dispatching network-vif-plugged-74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:47:56 np0005464891 nova_compute[259907]: 2025-10-01 16:47:56.547 2 WARNING nova.compute.manager [req-9fd2ce1a-d1bc-44b1-8560-b36c3a707818 req-b0a136f0-3bd6-4bfa-bc27-bfcb69b915ac af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Received unexpected event network-vif-plugged-74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4 for instance with vm_state active and task_state None.#033[00m
Oct  1 12:47:56 np0005464891 nova_compute[259907]: 2025-10-01 16:47:56.796 2 DEBUG nova.network.neutron [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Successfully created port: 5d498a06-e5b8-4d33-87a1-cfc873bebe29 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 12:47:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1201: 321 pgs: 321 active+clean; 161 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 9.7 MiB/s wr, 147 op/s
Oct  1 12:47:56 np0005464891 nova_compute[259907]: 2025-10-01 16:47:56.936 2 DEBUG oslo_concurrency.lockutils [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Acquiring lock "b9ff95de-17ee-4a78-822e-f4c081509b00" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:56 np0005464891 nova_compute[259907]: 2025-10-01 16:47:56.937 2 DEBUG oslo_concurrency.lockutils [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "b9ff95de-17ee-4a78-822e-f4c081509b00" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:56 np0005464891 nova_compute[259907]: 2025-10-01 16:47:56.938 2 DEBUG oslo_concurrency.lockutils [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Acquiring lock "b9ff95de-17ee-4a78-822e-f4c081509b00-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:56 np0005464891 nova_compute[259907]: 2025-10-01 16:47:56.939 2 DEBUG oslo_concurrency.lockutils [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "b9ff95de-17ee-4a78-822e-f4c081509b00-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:56 np0005464891 nova_compute[259907]: 2025-10-01 16:47:56.939 2 DEBUG oslo_concurrency.lockutils [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "b9ff95de-17ee-4a78-822e-f4c081509b00-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:56 np0005464891 nova_compute[259907]: 2025-10-01 16:47:56.942 2 INFO nova.compute.manager [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Terminating instance#033[00m
Oct  1 12:47:56 np0005464891 nova_compute[259907]: 2025-10-01 16:47:56.944 2 DEBUG nova.compute.manager [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 12:47:57 np0005464891 kernel: tap74f6fe8c-4f (unregistering): left promiscuous mode
Oct  1 12:47:57 np0005464891 NetworkManager[44940]: <info>  [1759337277.0111] device (tap74f6fe8c-4f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 12:47:57 np0005464891 ovn_controller[152409]: 2025-10-01T16:47:57Z|00058|binding|INFO|Releasing lport 74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4 from this chassis (sb_readonly=0)
Oct  1 12:47:57 np0005464891 ovn_controller[152409]: 2025-10-01T16:47:57Z|00059|binding|INFO|Setting lport 74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4 down in Southbound
Oct  1 12:47:57 np0005464891 ovn_controller[152409]: 2025-10-01T16:47:57Z|00060|binding|INFO|Removing iface tap74f6fe8c-4f ovn-installed in OVS
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:57 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:57.032 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:2e:e2 10.100.0.14'], port_security=['fa:16:3e:f5:2e:e2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b9ff95de-17ee-4a78-822e-f4c081509b00', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3928aed-f713-4c4c-8990-af3a790b20cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9a68f4cae7c4848af4537abf8f3a937', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1c094c91-85d3-4eaa-9f95-e39d330e2d75', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec106bc6-db39-4f09-a3c7-4a345f13bd23, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:47:57 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:57.033 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4 in datapath c3928aed-f713-4c4c-8990-af3a790b20cf unbound from our chassis#033[00m
Oct  1 12:47:57 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:57.035 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c3928aed-f713-4c4c-8990-af3a790b20cf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 12:47:57 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:57.036 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4cc4c5-ffdd-4460-ac01-f04ac21ccaad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:57 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:57.036 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf namespace which is not needed anymore#033[00m
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:57 np0005464891 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Oct  1 12:47:57 np0005464891 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 3.237s CPU time.
Oct  1 12:47:57 np0005464891 systemd-machined[214891]: Machine qemu-5-instance-00000005 terminated.
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:57 np0005464891 neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf[278444]: [NOTICE]   (278448) : haproxy version is 2.8.14-c23fe91
Oct  1 12:47:57 np0005464891 neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf[278444]: [NOTICE]   (278448) : path to executable is /usr/sbin/haproxy
Oct  1 12:47:57 np0005464891 neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf[278444]: [WARNING]  (278448) : Exiting Master process...
Oct  1 12:47:57 np0005464891 neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf[278444]: [ALERT]    (278448) : Current worker (278450) exited with code 143 (Terminated)
Oct  1 12:47:57 np0005464891 neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf[278444]: [WARNING]  (278448) : All workers exited. Exiting... (0)
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:57 np0005464891 systemd[1]: libpod-8151a636bac441a95a544a9639f9b008b8ecb90f97196afb3946cb0e7f51ced3.scope: Deactivated successfully.
Oct  1 12:47:57 np0005464891 podman[278671]: 2025-10-01 16:47:57.188836453 +0000 UTC m=+0.051279117 container died 8151a636bac441a95a544a9639f9b008b8ecb90f97196afb3946cb0e7f51ced3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.191 2 INFO nova.virt.libvirt.driver [-] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Instance destroyed successfully.#033[00m
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.192 2 DEBUG nova.objects.instance [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lazy-loading 'resources' on Instance uuid b9ff95de-17ee-4a78-822e-f4c081509b00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.209 2 DEBUG nova.virt.libvirt.vif [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T16:47:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-569462946',display_name='tempest-VolumesActionsTest-instance-569462946',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-569462946',id=5,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-01T16:47:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9a68f4cae7c4848af4537abf8f3a937',ramdisk_id='',reservation_id='r-d494d8d4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-789764846',owner_user_name='tempest-VolumesActionsTest-789764846-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T16:47:54Z,user_data=None,user_id='85daab3d4ec44eb885d793a27894aab3',uuid=b9ff95de-17ee-4a78-822e-f4c081509b00,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4", "address": "fa:16:3e:f5:2e:e2", "network": {"id": "c3928aed-f713-4c4c-8990-af3a790b20cf", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1351204213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9a68f4cae7c4848af4537abf8f3a937", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74f6fe8c-4f", "ovs_interfaceid": "74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.210 2 DEBUG nova.network.os_vif_util [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Converting VIF {"id": "74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4", "address": "fa:16:3e:f5:2e:e2", "network": {"id": "c3928aed-f713-4c4c-8990-af3a790b20cf", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1351204213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9a68f4cae7c4848af4537abf8f3a937", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74f6fe8c-4f", "ovs_interfaceid": "74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.211 2 DEBUG nova.network.os_vif_util [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:2e:e2,bridge_name='br-int',has_traffic_filtering=True,id=74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4,network=Network(c3928aed-f713-4c4c-8990-af3a790b20cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74f6fe8c-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.212 2 DEBUG os_vif [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:2e:e2,bridge_name='br-int',has_traffic_filtering=True,id=74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4,network=Network(c3928aed-f713-4c4c-8990-af3a790b20cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74f6fe8c-4f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.215 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f6fe8c-4f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:47:57 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8151a636bac441a95a544a9639f9b008b8ecb90f97196afb3946cb0e7f51ced3-userdata-shm.mount: Deactivated successfully.
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:47:57 np0005464891 systemd[1]: var-lib-containers-storage-overlay-aea91543c4db9577d662f27519186daa6776e3f88aa1b677610e7478c8a2ab22-merged.mount: Deactivated successfully.
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.227 2 INFO os_vif [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:2e:e2,bridge_name='br-int',has_traffic_filtering=True,id=74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4,network=Network(c3928aed-f713-4c4c-8990-af3a790b20cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74f6fe8c-4f')#033[00m
Oct  1 12:47:57 np0005464891 podman[278671]: 2025-10-01 16:47:57.242482795 +0000 UTC m=+0.104925459 container cleanup 8151a636bac441a95a544a9639f9b008b8ecb90f97196afb3946cb0e7f51ced3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:47:57 np0005464891 systemd[1]: libpod-conmon-8151a636bac441a95a544a9639f9b008b8ecb90f97196afb3946cb0e7f51ced3.scope: Deactivated successfully.
Oct  1 12:47:57 np0005464891 podman[278726]: 2025-10-01 16:47:57.324481021 +0000 UTC m=+0.055504445 container remove 8151a636bac441a95a544a9639f9b008b8ecb90f97196afb3946cb0e7f51ced3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:47:57 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:57.335 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[8520cea2-c194-433c-8eb2-67bce6ded310]: (4, ('Wed Oct  1 04:47:57 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf (8151a636bac441a95a544a9639f9b008b8ecb90f97196afb3946cb0e7f51ced3)\n8151a636bac441a95a544a9639f9b008b8ecb90f97196afb3946cb0e7f51ced3\nWed Oct  1 04:47:57 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf (8151a636bac441a95a544a9639f9b008b8ecb90f97196afb3946cb0e7f51ced3)\n8151a636bac441a95a544a9639f9b008b8ecb90f97196afb3946cb0e7f51ced3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:57 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:57.337 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[ce7de86d-8175-46a8-ba11-813152da0b19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:57 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:57.338 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc3928aed-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:57 np0005464891 kernel: tapc3928aed-f0: left promiscuous mode
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:47:57 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:57.383 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[1882d852-956f-4447-89ef-c95a535e8fa2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:57 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:57.413 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[852659cd-31ba-4a84-8c87-ddcde99ca14c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:57 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:57.414 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[8d1841e1-d05a-4e53-9997-27338b760f57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:57 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:57.430 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[2b7a0f35-4cca-4ff5-aa19-fb70aa80790f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 421992, 'reachable_time': 23822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278742, 'error': None, 'target': 'ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:57 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:57.433 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c3928aed-f713-4c4c-8990-af3a790b20cf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 12:47:57 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:47:57.433 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[38c4ff42-66a8-4451-b40e-a64e4a2e069f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:47:57 np0005464891 systemd[1]: run-netns-ovnmeta\x2dc3928aed\x2df713\x2d4c4c\x2d8990\x2daf3a790b20cf.mount: Deactivated successfully.
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.685 2 INFO nova.virt.libvirt.driver [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Deleting instance files /var/lib/nova/instances/b9ff95de-17ee-4a78-822e-f4c081509b00_del#033[00m
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.686 2 INFO nova.virt.libvirt.driver [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Deletion of /var/lib/nova/instances/b9ff95de-17ee-4a78-822e-f4c081509b00_del complete#033[00m
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.764 2 INFO nova.compute.manager [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.765 2 DEBUG oslo.service.loopingcall [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.766 2 DEBUG nova.compute.manager [-] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 12:47:57 np0005464891 nova_compute[259907]: 2025-10-01 16:47:57.766 2 DEBUG nova.network.neutron [-] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 12:47:58 np0005464891 nova_compute[259907]: 2025-10-01 16:47:58.119 2 DEBUG nova.network.neutron [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Successfully updated port: 5d498a06-e5b8-4d33-87a1-cfc873bebe29 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 12:47:58 np0005464891 nova_compute[259907]: 2025-10-01 16:47:58.140 2 DEBUG oslo_concurrency.lockutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquiring lock "refresh_cache-4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:47:58 np0005464891 nova_compute[259907]: 2025-10-01 16:47:58.140 2 DEBUG oslo_concurrency.lockutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquired lock "refresh_cache-4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:47:58 np0005464891 nova_compute[259907]: 2025-10-01 16:47:58.141 2 DEBUG nova.network.neutron [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 12:47:58 np0005464891 nova_compute[259907]: 2025-10-01 16:47:58.245 2 DEBUG nova.compute.manager [req-c18a3af5-d3b0-46e2-9a99-89a68f769537 req-dd086083-9597-4c71-bf37-21914e335eb3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Received event network-changed-5d498a06-e5b8-4d33-87a1-cfc873bebe29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:47:58 np0005464891 nova_compute[259907]: 2025-10-01 16:47:58.246 2 DEBUG nova.compute.manager [req-c18a3af5-d3b0-46e2-9a99-89a68f769537 req-dd086083-9597-4c71-bf37-21914e335eb3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Refreshing instance network info cache due to event network-changed-5d498a06-e5b8-4d33-87a1-cfc873bebe29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:47:58 np0005464891 nova_compute[259907]: 2025-10-01 16:47:58.246 2 DEBUG oslo_concurrency.lockutils [req-c18a3af5-d3b0-46e2-9a99-89a68f769537 req-dd086083-9597-4c71-bf37-21914e335eb3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:47:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1202: 321 pgs: 321 active+clean; 180 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 150 op/s
Oct  1 12:47:58 np0005464891 nova_compute[259907]: 2025-10-01 16:47:58.949 2 DEBUG nova.network.neutron [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 12:47:59 np0005464891 podman[278744]: 2025-10-01 16:47:59.006174352 +0000 UTC m=+0.114287338 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller)
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.219 2 DEBUG nova.network.neutron [-] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.244 2 INFO nova.compute.manager [-] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Took 1.48 seconds to deallocate network for instance.#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.281 2 DEBUG nova.compute.manager [req-09f5ea51-376c-4cb8-9551-1067e5535652 req-e734dcad-09bc-46f7-a4f0-96a9977c58f0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Received event network-vif-unplugged-74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.282 2 DEBUG oslo_concurrency.lockutils [req-09f5ea51-376c-4cb8-9551-1067e5535652 req-e734dcad-09bc-46f7-a4f0-96a9977c58f0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "b9ff95de-17ee-4a78-822e-f4c081509b00-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.282 2 DEBUG oslo_concurrency.lockutils [req-09f5ea51-376c-4cb8-9551-1067e5535652 req-e734dcad-09bc-46f7-a4f0-96a9977c58f0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "b9ff95de-17ee-4a78-822e-f4c081509b00-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.283 2 DEBUG oslo_concurrency.lockutils [req-09f5ea51-376c-4cb8-9551-1067e5535652 req-e734dcad-09bc-46f7-a4f0-96a9977c58f0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "b9ff95de-17ee-4a78-822e-f4c081509b00-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.283 2 DEBUG nova.compute.manager [req-09f5ea51-376c-4cb8-9551-1067e5535652 req-e734dcad-09bc-46f7-a4f0-96a9977c58f0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] No waiting events found dispatching network-vif-unplugged-74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.284 2 DEBUG nova.compute.manager [req-09f5ea51-376c-4cb8-9551-1067e5535652 req-e734dcad-09bc-46f7-a4f0-96a9977c58f0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Received event network-vif-unplugged-74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.284 2 DEBUG nova.compute.manager [req-09f5ea51-376c-4cb8-9551-1067e5535652 req-e734dcad-09bc-46f7-a4f0-96a9977c58f0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Received event network-vif-plugged-74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.285 2 DEBUG oslo_concurrency.lockutils [req-09f5ea51-376c-4cb8-9551-1067e5535652 req-e734dcad-09bc-46f7-a4f0-96a9977c58f0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "b9ff95de-17ee-4a78-822e-f4c081509b00-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.285 2 DEBUG oslo_concurrency.lockutils [req-09f5ea51-376c-4cb8-9551-1067e5535652 req-e734dcad-09bc-46f7-a4f0-96a9977c58f0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "b9ff95de-17ee-4a78-822e-f4c081509b00-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.286 2 DEBUG oslo_concurrency.lockutils [req-09f5ea51-376c-4cb8-9551-1067e5535652 req-e734dcad-09bc-46f7-a4f0-96a9977c58f0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "b9ff95de-17ee-4a78-822e-f4c081509b00-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.286 2 DEBUG nova.compute.manager [req-09f5ea51-376c-4cb8-9551-1067e5535652 req-e734dcad-09bc-46f7-a4f0-96a9977c58f0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] No waiting events found dispatching network-vif-plugged-74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.286 2 WARNING nova.compute.manager [req-09f5ea51-376c-4cb8-9551-1067e5535652 req-e734dcad-09bc-46f7-a4f0-96a9977c58f0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Received unexpected event network-vif-plugged-74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4 for instance with vm_state active and task_state deleting.#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.294 2 DEBUG oslo_concurrency.lockutils [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.294 2 DEBUG oslo_concurrency.lockutils [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.362 2 DEBUG oslo_concurrency.processutils [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:47:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:47:59 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2978414263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.811 2 DEBUG oslo_concurrency.processutils [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.821 2 DEBUG nova.compute.provider_tree [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.845 2 DEBUG nova.scheduler.client.report [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.869 2 DEBUG oslo_concurrency.lockutils [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.876 2 DEBUG nova.network.neutron [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Updating instance_info_cache with network_info: [{"id": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "address": "fa:16:3e:21:ca:d4", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d498a06-e5", "ovs_interfaceid": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.900 2 DEBUG oslo_concurrency.lockutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Releasing lock "refresh_cache-4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.901 2 DEBUG nova.compute.manager [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Instance network_info: |[{"id": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "address": "fa:16:3e:21:ca:d4", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d498a06-e5", "ovs_interfaceid": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.902 2 DEBUG oslo_concurrency.lockutils [req-c18a3af5-d3b0-46e2-9a99-89a68f769537 req-dd086083-9597-4c71-bf37-21914e335eb3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.902 2 DEBUG nova.network.neutron [req-c18a3af5-d3b0-46e2-9a99-89a68f769537 req-dd086083-9597-4c71-bf37-21914e335eb3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Refreshing network info cache for port 5d498a06-e5b8-4d33-87a1-cfc873bebe29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.908 2 DEBUG nova.virt.libvirt.driver [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Start _get_guest_xml network_info=[{"id": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "address": "fa:16:3e:21:ca:d4", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d498a06-e5", "ovs_interfaceid": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'image_id': 'f01c1e7c-fea3-4433-a44a-d71153552c78'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.911 2 INFO nova.scheduler.client.report [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Deleted allocations for instance b9ff95de-17ee-4a78-822e-f4c081509b00#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.918 2 WARNING nova.virt.libvirt.driver [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.924 2 DEBUG nova.virt.libvirt.host [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.925 2 DEBUG nova.virt.libvirt.host [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.940 2 DEBUG nova.virt.libvirt.host [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.941 2 DEBUG nova.virt.libvirt.host [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.941 2 DEBUG nova.virt.libvirt.driver [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.942 2 DEBUG nova.virt.hardware [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.943 2 DEBUG nova.virt.hardware [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.943 2 DEBUG nova.virt.hardware [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.944 2 DEBUG nova.virt.hardware [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.945 2 DEBUG nova.virt.hardware [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.945 2 DEBUG nova.virt.hardware [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.946 2 DEBUG nova.virt.hardware [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.947 2 DEBUG nova.virt.hardware [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.947 2 DEBUG nova.virt.hardware [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.948 2 DEBUG nova.virt.hardware [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.948 2 DEBUG nova.virt.hardware [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.954 2 DEBUG oslo_concurrency.processutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:47:59 np0005464891 nova_compute[259907]: 2025-10-01 16:47:59.988 2 DEBUG oslo_concurrency.lockutils [None req-12135396-e3ae-417b-874f-f14fbd43a79f 85daab3d4ec44eb885d793a27894aab3 b9a68f4cae7c4848af4537abf8f3a937 - - default default] Lock "b9ff95de-17ee-4a78-822e-f4c081509b00" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:48:00 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/975018296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.472 2 DEBUG oslo_concurrency.processutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.505 2 DEBUG nova.storage.rbd_utils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] rbd image 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.511 2 DEBUG oslo_concurrency.processutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.693 2 DEBUG oslo_concurrency.lockutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquiring lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.694 2 DEBUG oslo_concurrency.lockutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.712 2 DEBUG nova.compute.manager [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.769 2 DEBUG oslo_concurrency.lockutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.770 2 DEBUG oslo_concurrency.lockutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.775 2 DEBUG nova.virt.hardware [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.775 2 INFO nova.compute.claims [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 12:48:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1203: 321 pgs: 321 active+clean; 180 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 4.3 MiB/s wr, 203 op/s
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.898 2 DEBUG oslo_concurrency.processutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:48:00 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/272944688' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.944 2 DEBUG oslo_concurrency.processutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.947 2 DEBUG nova.virt.libvirt.vif [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:47:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-405249637',display_name='tempest-TestStampPattern-server-405249637',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-405249637',id=6,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCck7nxcoGk0qQMqmOkhPfker9ncjX3MedwZy1gvsVFGYBG7D5wvyJC+lFiT/6un7wQpds+bs1FRdVcdDnlHzQimOGzqeJBoWgRzI2+A/i117tgAu+tGkXiUBUgSD0X9yA==',key_name='tempest-TestStampPattern-1388282123',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f395084b84f48d182c3be9d7961475e',ramdisk_id='',reservation_id='r-8f5t7auv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-305826503',owner_user_name='tempest-TestStampPattern-305826503-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:47:55Z,user_data=None,user_id='0a821557545f49ad9c15eee1cf0bd82b',uuid=4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "address": "fa:16:3e:21:ca:d4", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d498a06-e5", "ovs_interfaceid": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.948 2 DEBUG nova.network.os_vif_util [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Converting VIF {"id": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "address": "fa:16:3e:21:ca:d4", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d498a06-e5", "ovs_interfaceid": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.950 2 DEBUG nova.network.os_vif_util [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:ca:d4,bridge_name='br-int',has_traffic_filtering=True,id=5d498a06-e5b8-4d33-87a1-cfc873bebe29,network=Network(0b8d6144-4eec-41cd-aaa9-d3e718f03c5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d498a06-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.953 2 DEBUG nova.objects.instance [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lazy-loading 'pci_devices' on Instance uuid 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.981 2 DEBUG nova.virt.libvirt.driver [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] End _get_guest_xml xml=<domain type="kvm">
Oct  1 12:48:00 np0005464891 nova_compute[259907]:  <uuid>4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83</uuid>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:  <name>instance-00000006</name>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <nova:name>tempest-TestStampPattern-server-405249637</nova:name>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 16:47:59</nova:creationTime>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 12:48:00 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:        <nova:user uuid="0a821557545f49ad9c15eee1cf0bd82b">tempest-TestStampPattern-305826503-project-member</nova:user>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:        <nova:project uuid="1f395084b84f48d182c3be9d7961475e">tempest-TestStampPattern-305826503</nova:project>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <nova:root type="image" uuid="f01c1e7c-fea3-4433-a44a-d71153552c78"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:        <nova:port uuid="5d498a06-e5b8-4d33-87a1-cfc873bebe29">
Oct  1 12:48:00 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <system>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <entry name="serial">4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83</entry>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <entry name="uuid">4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83</entry>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    </system>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:  <os>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:  </clock>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83_disk">
Oct  1 12:48:00 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:48:00 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83_disk.config">
Oct  1 12:48:00 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:48:00 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:21:ca:d4"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <target dev="tap5d498a06-e5"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83/console.log" append="off"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    </serial>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <video>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 12:48:00 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 12:48:00 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:48:00 np0005464891 nova_compute[259907]: </domain>
Oct  1 12:48:00 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.984 2 DEBUG nova.compute.manager [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Preparing to wait for external event network-vif-plugged-5d498a06-e5b8-4d33-87a1-cfc873bebe29 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.985 2 DEBUG oslo_concurrency.lockutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquiring lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.985 2 DEBUG oslo_concurrency.lockutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.986 2 DEBUG oslo_concurrency.lockutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.988 2 DEBUG nova.virt.libvirt.vif [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:47:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-405249637',display_name='tempest-TestStampPattern-server-405249637',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-405249637',id=6,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCck7nxcoGk0qQMqmOkhPfker9ncjX3MedwZy1gvsVFGYBG7D5wvyJC+lFiT/6un7wQpds+bs1FRdVcdDnlHzQimOGzqeJBoWgRzI2+A/i117tgAu+tGkXiUBUgSD0X9yA==',key_name='tempest-TestStampPattern-1388282123',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f395084b84f48d182c3be9d7961475e',ramdisk_id='',reservation_id='r-8f5t7auv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-305826503',owner_user_name='tempest-TestStampPattern-305826503-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:47:55Z,user_data=None,user_id='0a821557545f49ad9c15eee1cf0bd82b',uuid=4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "address": "fa:16:3e:21:ca:d4", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d498a06-e5", "ovs_interfaceid": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.988 2 DEBUG nova.network.os_vif_util [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Converting VIF {"id": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "address": "fa:16:3e:21:ca:d4", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d498a06-e5", "ovs_interfaceid": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.989 2 DEBUG nova.network.os_vif_util [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:ca:d4,bridge_name='br-int',has_traffic_filtering=True,id=5d498a06-e5b8-4d33-87a1-cfc873bebe29,network=Network(0b8d6144-4eec-41cd-aaa9-d3e718f03c5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d498a06-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.990 2 DEBUG os_vif [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:ca:d4,bridge_name='br-int',has_traffic_filtering=True,id=5d498a06-e5b8-4d33-87a1-cfc873bebe29,network=Network(0b8d6144-4eec-41cd-aaa9-d3e718f03c5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d498a06-e5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.992 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.993 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:00 np0005464891 nova_compute[259907]: 2025-10-01 16:48:00.999 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d498a06-e5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.000 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5d498a06-e5, col_values=(('external_ids', {'iface-id': '5d498a06-e5b8-4d33-87a1-cfc873bebe29', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:21:ca:d4', 'vm-uuid': '4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:01 np0005464891 NetworkManager[44940]: <info>  [1759337281.0035] manager: (tap5d498a06-e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.004 2 DEBUG nova.network.neutron [req-c18a3af5-d3b0-46e2-9a99-89a68f769537 req-dd086083-9597-4c71-bf37-21914e335eb3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Updated VIF entry in instance network info cache for port 5d498a06-e5b8-4d33-87a1-cfc873bebe29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.004 2 DEBUG nova.network.neutron [req-c18a3af5-d3b0-46e2-9a99-89a68f769537 req-dd086083-9597-4c71-bf37-21914e335eb3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Updating instance_info_cache with network_info: [{"id": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "address": "fa:16:3e:21:ca:d4", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d498a06-e5", "ovs_interfaceid": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.012 2 INFO os_vif [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:ca:d4,bridge_name='br-int',has_traffic_filtering=True,id=5d498a06-e5b8-4d33-87a1-cfc873bebe29,network=Network(0b8d6144-4eec-41cd-aaa9-d3e718f03c5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d498a06-e5')#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.024 2 DEBUG oslo_concurrency.lockutils [req-c18a3af5-d3b0-46e2-9a99-89a68f769537 req-dd086083-9597-4c71-bf37-21914e335eb3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.088 2 DEBUG nova.virt.libvirt.driver [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.089 2 DEBUG nova.virt.libvirt.driver [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.089 2 DEBUG nova.virt.libvirt.driver [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] No VIF found with MAC fa:16:3e:21:ca:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.090 2 INFO nova.virt.libvirt.driver [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Using config drive#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.120 2 DEBUG nova.storage.rbd_utils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] rbd image 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:48:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:48:01 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/893590902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.366 2 DEBUG oslo_concurrency.processutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.373 2 DEBUG nova.compute.provider_tree [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.421 2 DEBUG nova.compute.manager [req-62bc7245-738c-4a0a-befb-93159ef82caf req-75e188b6-6dce-4280-83bd-2cb576f192a5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Received event network-vif-deleted-74f6fe8c-4f3c-4c0c-b4e0-ec5f970968d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.448 2 DEBUG nova.scheduler.client.report [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.456 2 INFO nova.virt.libvirt.driver [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Creating config drive at /var/lib/nova/instances/4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83/disk.config#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.465 2 DEBUG oslo_concurrency.processutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk6zqja9g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.512 2 DEBUG oslo_concurrency.lockutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.514 2 DEBUG nova.compute.manager [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.568 2 DEBUG nova.compute.manager [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.569 2 DEBUG nova.network.neutron [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.594 2 INFO nova.virt.libvirt.driver [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.610 2 DEBUG oslo_concurrency.processutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk6zqja9g" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.654 2 DEBUG nova.storage.rbd_utils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] rbd image 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.660 2 DEBUG oslo_concurrency.processutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83/disk.config 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.696 2 DEBUG nova.compute.manager [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.742 2 DEBUG nova.policy [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3517dc72472c436aaf2fe65b5ce2f240', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '69d5fb4f7a0b4337a1b8774e04c97b9a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.800 2 DEBUG nova.compute.manager [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.802 2 DEBUG nova.virt.libvirt.driver [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.803 2 INFO nova.virt.libvirt.driver [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Creating image(s)#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.838 2 DEBUG nova.storage.rbd_utils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] rbd image 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.872 2 DEBUG nova.storage.rbd_utils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] rbd image 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.903 2 DEBUG nova.storage.rbd_utils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] rbd image 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.909 2 DEBUG oslo_concurrency.processutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.945 2 DEBUG oslo_concurrency.processutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83/disk.config 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:01 np0005464891 nova_compute[259907]: 2025-10-01 16:48:01.947 2 INFO nova.virt.libvirt.driver [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Deleting local config drive /var/lib/nova/instances/4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83/disk.config because it was imported into RBD.#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.005 2 DEBUG oslo_concurrency.processutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.006 2 DEBUG oslo_concurrency.lockutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquiring lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.007 2 DEBUG oslo_concurrency.lockutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.007 2 DEBUG oslo_concurrency.lockutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:02 np0005464891 kernel: tap5d498a06-e5: entered promiscuous mode
Oct  1 12:48:02 np0005464891 NetworkManager[44940]: <info>  [1759337282.0216] manager: (tap5d498a06-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Oct  1 12:48:02 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:02Z|00061|binding|INFO|Claiming lport 5d498a06-e5b8-4d33-87a1-cfc873bebe29 for this chassis.
Oct  1 12:48:02 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:02Z|00062|binding|INFO|5d498a06-e5b8-4d33-87a1-cfc873bebe29: Claiming fa:16:3e:21:ca:d4 10.100.0.6
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.038 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:ca:d4 10.100.0.6'], port_security=['fa:16:3e:21:ca:d4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f395084b84f48d182c3be9d7961475e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a473cde3-a378-4504-81c4-9d8fada1bc14', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a03153c4-51cb-49a4-a16a-ed6a97c8c003, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=5d498a06-e5b8-4d33-87a1-cfc873bebe29) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.040 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 5d498a06-e5b8-4d33-87a1-cfc873bebe29 in datapath 0b8d6144-4eec-41cd-aaa9-d3e718f03c5c bound to our chassis#033[00m
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.043 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0b8d6144-4eec-41cd-aaa9-d3e718f03c5c#033[00m
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.060 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[f30a8c6a-9824-472c-a655-27f45169cfcc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.061 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0b8d6144-41 in ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.063 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0b8d6144-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.063 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[fa2d122c-e651-4520-b162-2f29726b2a9a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.065 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[decb2e0c-94d6-461a-a29b-21dc63d5baac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:02 np0005464891 systemd-udevd[279033]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:48:02 np0005464891 systemd-machined[214891]: New machine qemu-6-instance-00000006.
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.080 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[50bed4c1-03ea-4a77-b643-a0423e528af2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:02 np0005464891 NetworkManager[44940]: <info>  [1759337282.0981] device (tap5d498a06-e5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:48:02 np0005464891 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Oct  1 12:48:02 np0005464891 NetworkManager[44940]: <info>  [1759337282.1000] device (tap5d498a06-e5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.104 2 DEBUG nova.storage.rbd_utils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] rbd image 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.115 2 DEBUG oslo_concurrency.processutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.119 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[e8e51acb-8107-48c1-96f0-2d7e5f91f371]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:02 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:02Z|00063|binding|INFO|Setting lport 5d498a06-e5b8-4d33-87a1-cfc873bebe29 ovn-installed in OVS
Oct  1 12:48:02 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:02Z|00064|binding|INFO|Setting lport 5d498a06-e5b8-4d33-87a1-cfc873bebe29 up in Southbound
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.168 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[3bdd4460-6a94-4be2-b3a9-318491b3c18d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.175 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[f4bc949d-5a63-4d06-a4da-7c47f79a2e82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:02 np0005464891 NetworkManager[44940]: <info>  [1759337282.1771] manager: (tap0b8d6144-40): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Oct  1 12:48:02 np0005464891 podman[279011]: 2025-10-01 16:48:02.187483989 +0000 UTC m=+0.134145636 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.226 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[789d0e80-89b5-49f3-a986-f8edba5586dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.230 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[11af7d15-bf20-40c8-add2-07380cd7cb08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:02 np0005464891 NetworkManager[44940]: <info>  [1759337282.2583] device (tap0b8d6144-40): carrier: link connected
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.267 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ea25de-29d5-4a25-877d-e198b8175c60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.290 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[364f3315-dce6-401c-a9db-b473598d3da2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0b8d6144-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:55:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423083, 'reachable_time': 37210, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279096, 'error': None, 'target': 'ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.310 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[058b693c-6e4e-41bd-9ef4-ae6502481169]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:554c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 423083, 'tstamp': 423083}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279097, 'error': None, 'target': 'ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.349 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[1c3278ec-c7b4-46ce-9595-db77102d1c1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0b8d6144-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:55:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423083, 'reachable_time': 37210, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279098, 'error': None, 'target': 'ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.396 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[28263cc7-6f86-4e21-b798-f8e51377eebc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.408 2 DEBUG nova.compute.manager [req-7b436c81-a448-4a1c-b6d8-b5f923ae41b4 req-0465da84-5d0c-472a-82ff-5f74d57bf3f6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Received event network-vif-plugged-5d498a06-e5b8-4d33-87a1-cfc873bebe29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.409 2 DEBUG oslo_concurrency.lockutils [req-7b436c81-a448-4a1c-b6d8-b5f923ae41b4 req-0465da84-5d0c-472a-82ff-5f74d57bf3f6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.409 2 DEBUG oslo_concurrency.lockutils [req-7b436c81-a448-4a1c-b6d8-b5f923ae41b4 req-0465da84-5d0c-472a-82ff-5f74d57bf3f6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.409 2 DEBUG oslo_concurrency.lockutils [req-7b436c81-a448-4a1c-b6d8-b5f923ae41b4 req-0465da84-5d0c-472a-82ff-5f74d57bf3f6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.409 2 DEBUG nova.compute.manager [req-7b436c81-a448-4a1c-b6d8-b5f923ae41b4 req-0465da84-5d0c-472a-82ff-5f74d57bf3f6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Processing event network-vif-plugged-5d498a06-e5b8-4d33-87a1-cfc873bebe29 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.461 2 DEBUG oslo_concurrency.processutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.486 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[0eef140a-9ed6-4476-9918-0a19bf68a7a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.487 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b8d6144-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.487 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.488 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b8d6144-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:02 np0005464891 kernel: tap0b8d6144-40: entered promiscuous mode
Oct  1 12:48:02 np0005464891 NetworkManager[44940]: <info>  [1759337282.4905] manager: (tap0b8d6144-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.493 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0b8d6144-40, col_values=(('external_ids', {'iface-id': 'c2ef6608-b2db-40dc-8fde-a94b501b7f75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:02 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:02Z|00065|binding|INFO|Releasing lport c2ef6608-b2db-40dc-8fde-a94b501b7f75 from this chassis (sb_readonly=0)
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.511 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0b8d6144-4eec-41cd-aaa9-d3e718f03c5c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0b8d6144-4eec-41cd-aaa9-d3e718f03c5c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.512 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[1124d0b2-888c-44c6-a639-f52cfcea2c85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.512 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/0b8d6144-4eec-41cd-aaa9-d3e718f03c5c.pid.haproxy
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID 0b8d6144-4eec-41cd-aaa9-d3e718f03c5c
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 12:48:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:02.513 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c', 'env', 'PROCESS_TAG=haproxy-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0b8d6144-4eec-41cd-aaa9-d3e718f03c5c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.552 2 DEBUG nova.storage.rbd_utils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] resizing rbd image 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.666 2 DEBUG nova.objects.instance [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lazy-loading 'migration_context' on Instance uuid 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.672 2 DEBUG nova.network.neutron [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Successfully created port: 845fe902-041f-4c80-897c-0bc9525fbeaf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.689 2 DEBUG nova.virt.libvirt.driver [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.690 2 DEBUG nova.virt.libvirt.driver [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Ensure instance console log exists: /var/lib/nova/instances/7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.690 2 DEBUG oslo_concurrency.lockutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.691 2 DEBUG oslo_concurrency.lockutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.691 2 DEBUG oslo_concurrency.lockutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1204: 321 pgs: 321 active+clean; 186 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 4.5 MiB/s wr, 221 op/s
Oct  1 12:48:02 np0005464891 podman[279245]: 2025-10-01 16:48:02.954091244 +0000 UTC m=+0.077069559 container create d237d4a41decf4140d6c3f50755a403a34240d2dc903873eb50a48ca06995c8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  1 12:48:02 np0005464891 systemd[1]: Started libpod-conmon-d237d4a41decf4140d6c3f50755a403a34240d2dc903873eb50a48ca06995c8f.scope.
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.995 2 DEBUG nova.compute.manager [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.997 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337282.996718, 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:48:02 np0005464891 nova_compute[259907]: 2025-10-01 16:48:02.997 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] VM Started (Lifecycle Event)#033[00m
Oct  1 12:48:02 np0005464891 podman[279245]: 2025-10-01 16:48:02.902107299 +0000 UTC m=+0.025085624 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.002 2 DEBUG nova.virt.libvirt.driver [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.009 2 INFO nova.virt.libvirt.driver [-] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Instance spawned successfully.#033[00m
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.009 2 DEBUG nova.virt.libvirt.driver [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  1 12:48:03 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.019 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:48:03 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dfce6f35efeb4f353cb30dd790014d6091e7c67633b3e5cb04a494e31807604/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.029 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.038 2 DEBUG nova.virt.libvirt.driver [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.038 2 DEBUG nova.virt.libvirt.driver [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.039 2 DEBUG nova.virt.libvirt.driver [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.039 2 DEBUG nova.virt.libvirt.driver [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.039 2 DEBUG nova.virt.libvirt.driver [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.040 2 DEBUG nova.virt.libvirt.driver [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:48:03 np0005464891 podman[279245]: 2025-10-01 16:48:03.045187811 +0000 UTC m=+0.168166136 container init d237d4a41decf4140d6c3f50755a403a34240d2dc903873eb50a48ca06995c8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.048 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.049 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337282.9989727, 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.049 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] VM Paused (Lifecycle Event)#033[00m
Oct  1 12:48:03 np0005464891 podman[279245]: 2025-10-01 16:48:03.052237695 +0000 UTC m=+0.175216000 container start d237d4a41decf4140d6c3f50755a403a34240d2dc903873eb50a48ca06995c8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  1 12:48:03 np0005464891 neutron-haproxy-ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c[279259]: [NOTICE]   (279264) : New worker (279266) forked
Oct  1 12:48:03 np0005464891 neutron-haproxy-ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c[279259]: [NOTICE]   (279264) : Loading success.
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.104 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.108 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337283.002029, 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.108 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] VM Resumed (Lifecycle Event)#033[00m
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.114 2 INFO nova.compute.manager [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Took 7.80 seconds to spawn the instance on the hypervisor.#033[00m
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.114 2 DEBUG nova.compute.manager [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.125 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.130 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.158 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.181 2 INFO nova.compute.manager [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Took 8.85 seconds to build instance.#033[00m
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.221 2 DEBUG oslo_concurrency.lockutils [None req-0aa358dd-3460-4b1b-840c-d4a49946010e 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.966s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.949 2 DEBUG nova.network.neutron [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Successfully updated port: 845fe902-041f-4c80-897c-0bc9525fbeaf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.962 2 DEBUG oslo_concurrency.lockutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquiring lock "refresh_cache-7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.963 2 DEBUG oslo_concurrency.lockutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquired lock "refresh_cache-7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:48:03 np0005464891 nova_compute[259907]: 2025-10-01 16:48:03.964 2 DEBUG nova.network.neutron [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 12:48:04 np0005464891 nova_compute[259907]: 2025-10-01 16:48:04.315 2 DEBUG nova.network.neutron [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 12:48:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1205: 321 pgs: 321 active+clean; 216 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 5.9 MiB/s wr, 248 op/s
Oct  1 12:48:04 np0005464891 podman[279275]: 2025-10-01 16:48:04.972805065 +0000 UTC m=+0.081632046 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible)
Oct  1 12:48:05 np0005464891 nova_compute[259907]: 2025-10-01 16:48:05.252 2 DEBUG nova.compute.manager [req-cbb7dd30-8fa6-4e1f-9355-2bcb2bc136f0 req-92ada635-c75e-4da9-bd08-321e79d9b520 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Received event network-vif-plugged-5d498a06-e5b8-4d33-87a1-cfc873bebe29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:48:05 np0005464891 nova_compute[259907]: 2025-10-01 16:48:05.253 2 DEBUG oslo_concurrency.lockutils [req-cbb7dd30-8fa6-4e1f-9355-2bcb2bc136f0 req-92ada635-c75e-4da9-bd08-321e79d9b520 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:05 np0005464891 nova_compute[259907]: 2025-10-01 16:48:05.253 2 DEBUG oslo_concurrency.lockutils [req-cbb7dd30-8fa6-4e1f-9355-2bcb2bc136f0 req-92ada635-c75e-4da9-bd08-321e79d9b520 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:05 np0005464891 nova_compute[259907]: 2025-10-01 16:48:05.253 2 DEBUG oslo_concurrency.lockutils [req-cbb7dd30-8fa6-4e1f-9355-2bcb2bc136f0 req-92ada635-c75e-4da9-bd08-321e79d9b520 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:05 np0005464891 nova_compute[259907]: 2025-10-01 16:48:05.254 2 DEBUG nova.compute.manager [req-cbb7dd30-8fa6-4e1f-9355-2bcb2bc136f0 req-92ada635-c75e-4da9-bd08-321e79d9b520 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] No waiting events found dispatching network-vif-plugged-5d498a06-e5b8-4d33-87a1-cfc873bebe29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:48:05 np0005464891 nova_compute[259907]: 2025-10-01 16:48:05.254 2 WARNING nova.compute.manager [req-cbb7dd30-8fa6-4e1f-9355-2bcb2bc136f0 req-92ada635-c75e-4da9-bd08-321e79d9b520 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Received unexpected event network-vif-plugged-5d498a06-e5b8-4d33-87a1-cfc873bebe29 for instance with vm_state active and task_state None.#033[00m
Oct  1 12:48:05 np0005464891 nova_compute[259907]: 2025-10-01 16:48:05.254 2 DEBUG nova.compute.manager [req-cbb7dd30-8fa6-4e1f-9355-2bcb2bc136f0 req-92ada635-c75e-4da9-bd08-321e79d9b520 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Received event network-changed-845fe902-041f-4c80-897c-0bc9525fbeaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:48:05 np0005464891 nova_compute[259907]: 2025-10-01 16:48:05.254 2 DEBUG nova.compute.manager [req-cbb7dd30-8fa6-4e1f-9355-2bcb2bc136f0 req-92ada635-c75e-4da9-bd08-321e79d9b520 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Refreshing instance network info cache due to event network-changed-845fe902-041f-4c80-897c-0bc9525fbeaf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:48:05 np0005464891 nova_compute[259907]: 2025-10-01 16:48:05.255 2 DEBUG oslo_concurrency.lockutils [req-cbb7dd30-8fa6-4e1f-9355-2bcb2bc136f0 req-92ada635-c75e-4da9-bd08-321e79d9b520 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:48:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:48:05 np0005464891 nova_compute[259907]: 2025-10-01 16:48:05.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.128 2 DEBUG nova.network.neutron [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Updating instance_info_cache with network_info: [{"id": "845fe902-041f-4c80-897c-0bc9525fbeaf", "address": "fa:16:3e:d5:01:af", "network": {"id": "3401e30b-97c6-4012-a9d4-0114c56bacd5", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1585052793-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69d5fb4f7a0b4337a1b8774e04c97b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap845fe902-04", "ovs_interfaceid": "845fe902-041f-4c80-897c-0bc9525fbeaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.155 2 DEBUG oslo_concurrency.lockutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Releasing lock "refresh_cache-7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.156 2 DEBUG nova.compute.manager [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Instance network_info: |[{"id": "845fe902-041f-4c80-897c-0bc9525fbeaf", "address": "fa:16:3e:d5:01:af", "network": {"id": "3401e30b-97c6-4012-a9d4-0114c56bacd5", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1585052793-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69d5fb4f7a0b4337a1b8774e04c97b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap845fe902-04", "ovs_interfaceid": "845fe902-041f-4c80-897c-0bc9525fbeaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.156 2 DEBUG oslo_concurrency.lockutils [req-cbb7dd30-8fa6-4e1f-9355-2bcb2bc136f0 req-92ada635-c75e-4da9-bd08-321e79d9b520 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.157 2 DEBUG nova.network.neutron [req-cbb7dd30-8fa6-4e1f-9355-2bcb2bc136f0 req-92ada635-c75e-4da9-bd08-321e79d9b520 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Refreshing network info cache for port 845fe902-041f-4c80-897c-0bc9525fbeaf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.160 2 DEBUG nova.virt.libvirt.driver [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Start _get_guest_xml network_info=[{"id": "845fe902-041f-4c80-897c-0bc9525fbeaf", "address": "fa:16:3e:d5:01:af", "network": {"id": "3401e30b-97c6-4012-a9d4-0114c56bacd5", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1585052793-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69d5fb4f7a0b4337a1b8774e04c97b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap845fe902-04", "ovs_interfaceid": "845fe902-041f-4c80-897c-0bc9525fbeaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'image_id': 'f01c1e7c-fea3-4433-a44a-d71153552c78'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.164 2 WARNING nova.virt.libvirt.driver [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.169 2 DEBUG nova.virt.libvirt.host [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.170 2 DEBUG nova.virt.libvirt.host [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.172 2 DEBUG nova.virt.libvirt.host [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.173 2 DEBUG nova.virt.libvirt.host [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.173 2 DEBUG nova.virt.libvirt.driver [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.173 2 DEBUG nova.virt.hardware [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.174 2 DEBUG nova.virt.hardware [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.174 2 DEBUG nova.virt.hardware [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.175 2 DEBUG nova.virt.hardware [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.175 2 DEBUG nova.virt.hardware [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.175 2 DEBUG nova.virt.hardware [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.175 2 DEBUG nova.virt.hardware [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.176 2 DEBUG nova.virt.hardware [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.176 2 DEBUG nova.virt.hardware [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.176 2 DEBUG nova.virt.hardware [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.177 2 DEBUG nova.virt.hardware [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.179 2 DEBUG oslo_concurrency.processutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:48:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2147394454' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.684 2 DEBUG oslo_concurrency.processutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.717 2 DEBUG nova.storage.rbd_utils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] rbd image 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:48:06 np0005464891 nova_compute[259907]: 2025-10-01 16:48:06.721 2 DEBUG oslo_concurrency.processutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 315 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 12 MiB/s wr, 238 op/s
Oct  1 12:48:07 np0005464891 NetworkManager[44940]: <info>  [1759337287.1451] manager: (patch-br-int-to-provnet-ee30e212-e482-494e-8716-bb3c2ef00bd5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Oct  1 12:48:07 np0005464891 NetworkManager[44940]: <info>  [1759337287.1462] manager: (patch-provnet-ee30e212-e482-494e-8716-bb3c2ef00bd5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:48:07 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3913929756' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.228 2 DEBUG oslo_concurrency.processutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.229 2 DEBUG nova.virt.libvirt.vif [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:48:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-187997207',display_name='tempest-VolumesSnapshotTestJSON-instance-187997207',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-187997207',id=7,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIgJ5lnYrRG9Tvahx/0tSLtZSVgD2INhdzXfpcSBqcZL49XMXL/YBTjYN8RCIxGRhkQRdTJRtEGxUzu5k5Idy0y3T1S4/yeZcsyqD5M4i4aaygdIFOkEu6aldQewsVqs7w==',key_name='tempest-keypair-1059486929',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='69d5fb4f7a0b4337a1b8774e04c97b9a',ramdisk_id='',reservation_id='r-m9pan7uv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-1941074907',owner_user_name='tempest-VolumesSnapshotTestJSON-1941074907-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:48:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3517dc72472c436aaf2fe65b5ce2f240',uuid=7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "845fe902-041f-4c80-897c-0bc9525fbeaf", "address": "fa:16:3e:d5:01:af", "network": {"id": "3401e30b-97c6-4012-a9d4-0114c56bacd5", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1585052793-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69d5fb4f7a0b4337a1b8774e04c97b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap845fe902-04", "ovs_interfaceid": "845fe902-041f-4c80-897c-0bc9525fbeaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.230 2 DEBUG nova.network.os_vif_util [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Converting VIF {"id": "845fe902-041f-4c80-897c-0bc9525fbeaf", "address": "fa:16:3e:d5:01:af", "network": {"id": "3401e30b-97c6-4012-a9d4-0114c56bacd5", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1585052793-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69d5fb4f7a0b4337a1b8774e04c97b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap845fe902-04", "ovs_interfaceid": "845fe902-041f-4c80-897c-0bc9525fbeaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.231 2 DEBUG nova.network.os_vif_util [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:01:af,bridge_name='br-int',has_traffic_filtering=True,id=845fe902-041f-4c80-897c-0bc9525fbeaf,network=Network(3401e30b-97c6-4012-a9d4-0114c56bacd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap845fe902-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.232 2 DEBUG nova.objects.instance [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lazy-loading 'pci_devices' on Instance uuid 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.233 2 DEBUG nova.network.neutron [req-cbb7dd30-8fa6-4e1f-9355-2bcb2bc136f0 req-92ada635-c75e-4da9-bd08-321e79d9b520 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Updated VIF entry in instance network info cache for port 845fe902-041f-4c80-897c-0bc9525fbeaf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.234 2 DEBUG nova.network.neutron [req-cbb7dd30-8fa6-4e1f-9355-2bcb2bc136f0 req-92ada635-c75e-4da9-bd08-321e79d9b520 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Updating instance_info_cache with network_info: [{"id": "845fe902-041f-4c80-897c-0bc9525fbeaf", "address": "fa:16:3e:d5:01:af", "network": {"id": "3401e30b-97c6-4012-a9d4-0114c56bacd5", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1585052793-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69d5fb4f7a0b4337a1b8774e04c97b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap845fe902-04", "ovs_interfaceid": "845fe902-041f-4c80-897c-0bc9525fbeaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.249 2 DEBUG nova.virt.libvirt.driver [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] End _get_guest_xml xml=<domain type="kvm">
Oct  1 12:48:07 np0005464891 nova_compute[259907]:  <uuid>7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec</uuid>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:  <name>instance-00000007</name>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <nova:name>tempest-VolumesSnapshotTestJSON-instance-187997207</nova:name>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 16:48:06</nova:creationTime>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 12:48:07 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:        <nova:user uuid="3517dc72472c436aaf2fe65b5ce2f240">tempest-VolumesSnapshotTestJSON-1941074907-project-member</nova:user>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:        <nova:project uuid="69d5fb4f7a0b4337a1b8774e04c97b9a">tempest-VolumesSnapshotTestJSON-1941074907</nova:project>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <nova:root type="image" uuid="f01c1e7c-fea3-4433-a44a-d71153552c78"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:        <nova:port uuid="845fe902-041f-4c80-897c-0bc9525fbeaf">
Oct  1 12:48:07 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <system>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <entry name="serial">7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec</entry>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <entry name="uuid">7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec</entry>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    </system>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:  <os>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:  </clock>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec_disk">
Oct  1 12:48:07 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:48:07 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec_disk.config">
Oct  1 12:48:07 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:48:07 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:d5:01:af"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <target dev="tap845fe902-04"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec/console.log" append="off"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    </serial>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <video>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 12:48:07 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 12:48:07 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:48:07 np0005464891 nova_compute[259907]: </domain>
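The domain XML that Nova logs above can be inspected programmatically. A minimal sketch using Python's standard `xml.etree.ElementTree`: the RBD image name, monitor address, and device names below are copied from the log, while the trimmed XML fragment and the `rbd_disks` helper are illustrative, not Nova code:

```python
import xml.etree.ElementTree as ET

# A trimmed fragment of the <devices> section from the logged domain XML.
DOMAIN_XML = """
<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <driver type="raw" cache="none"/>
      <source protocol="rbd" name="vms/7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec_disk">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
    <interface type="ethernet">
      <mac address="fa:16:3e:d5:01:af"/>
      <target dev="tap845fe902-04"/>
    </interface>
  </devices>
</domain>
"""

def rbd_disks(xml_text):
    """Return (target_dev, rbd_image, monitor) for each RBD-backed disk."""
    root = ET.fromstring(xml_text)
    out = []
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        if src is None or src.get("protocol") != "rbd":
            continue
        host = src.find("host")
        mon = f"{host.get('name')}:{host.get('port')}" if host is not None else None
        out.append((disk.find("target").get("dev"), src.get("name"), mon))
    return out

print(rbd_disks(DOMAIN_XML))
# → [('vda', 'vms/7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec_disk', '192.168.122.100:6789')]
```

The same parsing approach works on the full XML as logged, since libvirt domain XML keeps the `devices/disk/source` layout shown here.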
Oct  1 12:48:07 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.254 2 DEBUG nova.compute.manager [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Preparing to wait for external event network-vif-plugged-845fe902-041f-4c80-897c-0bc9525fbeaf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.254 2 DEBUG oslo_concurrency.lockutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquiring lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.254 2 DEBUG oslo_concurrency.lockutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.255 2 DEBUG oslo_concurrency.lockutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.255 2 DEBUG nova.virt.libvirt.vif [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:48:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-187997207',display_name='tempest-VolumesSnapshotTestJSON-instance-187997207',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-187997207',id=7,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIgJ5lnYrRG9Tvahx/0tSLtZSVgD2INhdzXfpcSBqcZL49XMXL/YBTjYN8RCIxGRhkQRdTJRtEGxUzu5k5Idy0y3T1S4/yeZcsyqD5M4i4aaygdIFOkEu6aldQewsVqs7w==',key_name='tempest-keypair-1059486929',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='69d5fb4f7a0b4337a1b8774e04c97b9a',ramdisk_id='',reservation_id='r-m9pan7uv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-1941074907',owner_user_name='tempest-VolumesSnapshotTestJSON-1941074907-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:48:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3517dc72472c436aaf2fe65b5ce2f240',uuid=7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "845fe902-041f-4c80-897c-0bc9525fbeaf", "address": "fa:16:3e:d5:01:af", "network": {"id": "3401e30b-97c6-4012-a9d4-0114c56bacd5", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1585052793-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69d5fb4f7a0b4337a1b8774e04c97b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap845fe902-04", "ovs_interfaceid": "845fe902-041f-4c80-897c-0bc9525fbeaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.255 2 DEBUG nova.network.os_vif_util [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Converting VIF {"id": "845fe902-041f-4c80-897c-0bc9525fbeaf", "address": "fa:16:3e:d5:01:af", "network": {"id": "3401e30b-97c6-4012-a9d4-0114c56bacd5", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1585052793-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69d5fb4f7a0b4337a1b8774e04c97b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap845fe902-04", "ovs_interfaceid": "845fe902-041f-4c80-897c-0bc9525fbeaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.256 2 DEBUG nova.network.os_vif_util [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:01:af,bridge_name='br-int',has_traffic_filtering=True,id=845fe902-041f-4c80-897c-0bc9525fbeaf,network=Network(3401e30b-97c6-4012-a9d4-0114c56bacd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap845fe902-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.256 2 DEBUG os_vif [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:01:af,bridge_name='br-int',has_traffic_filtering=True,id=845fe902-041f-4c80-897c-0bc9525fbeaf,network=Network(3401e30b-97c6-4012-a9d4-0114c56bacd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap845fe902-04') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.257 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.257 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.259 2 DEBUG oslo_concurrency.lockutils [req-cbb7dd30-8fa6-4e1f-9355-2bcb2bc136f0 req-92ada635-c75e-4da9-bd08-321e79d9b520 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.261 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap845fe902-04, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.261 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap845fe902-04, col_values=(('external_ids', {'iface-id': '845fe902-041f-4c80-897c-0bc9525fbeaf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:01:af', 'vm-uuid': '7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
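The two ovsdbapp transaction commands above (an `AddPortCommand` for `br-int` plus a `DbSetCommand` writing `external_ids` on the Interface row) map onto a single `ovs-vsctl` invocation. A sketch that only assembles the equivalent argv list with the values from the log; the `vif_plug_cmd` helper name is hypothetical and nothing is executed:

```python
# Build the ovs-vsctl equivalent of the logged ovsdbapp transaction:
# add the tap port to br-int and set the Neutron external_ids on its
# Interface row. Constructs argv only; does not shell out.
def vif_plug_cmd(bridge, port, iface_id, mac, vm_uuid):
    external_ids = {
        "iface-id": iface_id,
        "iface-status": "active",
        "attached-mac": mac,
        "vm-uuid": vm_uuid,
    }
    cmd = ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
           "--", "set", "Interface", port]
    cmd += [f"external_ids:{k}={v}" for k, v in external_ids.items()]
    return cmd

cmd = vif_plug_cmd(
    bridge="br-int",
    port="tap845fe902-04",
    iface_id="845fe902-041f-4c80-897c-0bc9525fbeaf",
    mac="fa:16:3e:d5:01:af",
    vm_uuid="7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec",
)
```

Passing the list directly to `subprocess.run` would avoid shell quoting issues with the `external_ids:key=value` arguments; os-vif itself talks to ovsdb-server over the OVSDB protocol rather than shelling out.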
Oct  1 12:48:07 np0005464891 NetworkManager[44940]: <info>  [1759337287.2639] manager: (tap845fe902-04): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.280 2 INFO os_vif [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:01:af,bridge_name='br-int',has_traffic_filtering=True,id=845fe902-041f-4c80-897c-0bc9525fbeaf,network=Network(3401e30b-97c6-4012-a9d4-0114c56bacd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap845fe902-04')#033[00m
Oct  1 12:48:07 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:07Z|00066|binding|INFO|Releasing lport c2ef6608-b2db-40dc-8fde-a94b501b7f75 from this chassis (sb_readonly=0)
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.341 2 DEBUG nova.virt.libvirt.driver [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.342 2 DEBUG nova.virt.libvirt.driver [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.342 2 DEBUG nova.virt.libvirt.driver [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] No VIF found with MAC fa:16:3e:d5:01:af, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.343 2 INFO nova.virt.libvirt.driver [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Using config drive#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.366 2 DEBUG nova.storage.rbd_utils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] rbd image 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.519 2 DEBUG nova.compute.manager [req-94010ac8-0ed4-4c2d-aeac-d0147dfcb3e7 req-8256e034-d7ed-49de-b8c6-9cf6f0848317 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Received event network-changed-5d498a06-e5b8-4d33-87a1-cfc873bebe29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.520 2 DEBUG nova.compute.manager [req-94010ac8-0ed4-4c2d-aeac-d0147dfcb3e7 req-8256e034-d7ed-49de-b8c6-9cf6f0848317 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Refreshing instance network info cache due to event network-changed-5d498a06-e5b8-4d33-87a1-cfc873bebe29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.520 2 DEBUG oslo_concurrency.lockutils [req-94010ac8-0ed4-4c2d-aeac-d0147dfcb3e7 req-8256e034-d7ed-49de-b8c6-9cf6f0848317 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.521 2 DEBUG oslo_concurrency.lockutils [req-94010ac8-0ed4-4c2d-aeac-d0147dfcb3e7 req-8256e034-d7ed-49de-b8c6-9cf6f0848317 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.521 2 DEBUG nova.network.neutron [req-94010ac8-0ed4-4c2d-aeac-d0147dfcb3e7 req-8256e034-d7ed-49de-b8c6-9cf6f0848317 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Refreshing network info cache for port 5d498a06-e5b8-4d33-87a1-cfc873bebe29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.921 2 INFO nova.virt.libvirt.driver [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Creating config drive at /var/lib/nova/instances/7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec/disk.config#033[00m
Oct  1 12:48:07 np0005464891 nova_compute[259907]: 2025-10-01 16:48:07.928 2 DEBUG oslo_concurrency.processutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnfmtlt_h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:08 np0005464891 nova_compute[259907]: 2025-10-01 16:48:08.072 2 DEBUG oslo_concurrency.processutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnfmtlt_h" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:08 np0005464891 nova_compute[259907]: 2025-10-01 16:48:08.126 2 DEBUG nova.storage.rbd_utils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] rbd image 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:48:08 np0005464891 nova_compute[259907]: 2025-10-01 16:48:08.132 2 DEBUG oslo_concurrency.processutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec/disk.config 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:08 np0005464891 nova_compute[259907]: 2025-10-01 16:48:08.277 2 DEBUG oslo_concurrency.processutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec/disk.config 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:08 np0005464891 nova_compute[259907]: 2025-10-01 16:48:08.278 2 INFO nova.virt.libvirt.driver [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Deleting local config drive /var/lib/nova/instances/7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec/disk.config because it was imported into RBD.#033[00m
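The config-drive sequence above has three steps: build an ISO9660 image with `mkisofs`, `rbd import` it into the `vms` pool as `<uuid>_disk.config`, then delete the local copy. A sketch that reconstructs the two logged commands as argv lists, with every path and flag copied from the log; it stops short of running them, since that requires mkisofs and a reachable Ceph cluster:

```python
# Reconstruct the two commands Nova ran for the config drive, as argv lists.
instance = "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec"
local_iso = f"/var/lib/nova/instances/{instance}/disk.config"

mkisofs_cmd = [
    "/usr/bin/mkisofs", "-o", local_iso,
    "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
    "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
    "-quiet", "-J", "-r", "-V", "config-2",   # volume label config-2 is what
    "/tmp/tmpnfmtlt_h",                       # cloud-init looks for
]

rbd_import_cmd = [
    "rbd", "import", "--pool", "vms", local_iso,
    f"{instance}_disk.config",
    "--image-format=2", "--id", "openstack",
    "--conf", "/etc/ceph/ceph.conf",
]
```

Because the guest's cdrom device in the domain XML points at `rbd:vms/<uuid>_disk.config`, the local ISO is only a staging artifact, which is why the log shows it being deleted once the import succeeds.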
Oct  1 12:48:08 np0005464891 kernel: tap845fe902-04: entered promiscuous mode
Oct  1 12:48:08 np0005464891 NetworkManager[44940]: <info>  [1759337288.3223] manager: (tap845fe902-04): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Oct  1 12:48:08 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:08Z|00067|binding|INFO|Claiming lport 845fe902-041f-4c80-897c-0bc9525fbeaf for this chassis.
Oct  1 12:48:08 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:08Z|00068|binding|INFO|845fe902-041f-4c80-897c-0bc9525fbeaf: Claiming fa:16:3e:d5:01:af 10.100.0.11
Oct  1 12:48:08 np0005464891 nova_compute[259907]: 2025-10-01 16:48:08.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.353 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:01:af 10.100.0.11'], port_security=['fa:16:3e:d5:01:af 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3401e30b-97c6-4012-a9d4-0114c56bacd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '69d5fb4f7a0b4337a1b8774e04c97b9a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f86ffdec-54c4-4f1d-8b56-111fa7d84206', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6048fd95-db94-4f1d-be7e-ff0b5269a1e3, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=845fe902-041f-4c80-897c-0bc9525fbeaf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.355 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 845fe902-041f-4c80-897c-0bc9525fbeaf in datapath 3401e30b-97c6-4012-a9d4-0114c56bacd5 bound to our chassis#033[00m
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.357 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3401e30b-97c6-4012-a9d4-0114c56bacd5#033[00m
Oct  1 12:48:08 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:08Z|00069|binding|INFO|Setting lport 845fe902-041f-4c80-897c-0bc9525fbeaf ovn-installed in OVS
Oct  1 12:48:08 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:08Z|00070|binding|INFO|Setting lport 845fe902-041f-4c80-897c-0bc9525fbeaf up in Southbound
Oct  1 12:48:08 np0005464891 nova_compute[259907]: 2025-10-01 16:48:08.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:08 np0005464891 nova_compute[259907]: 2025-10-01 16:48:08.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:08 np0005464891 systemd-machined[214891]: New machine qemu-7-instance-00000007.
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.379 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[6da5d316-2a21-4791-a8d1-5569eb81afb3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.381 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3401e30b-91 in ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.383 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3401e30b-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.384 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[c99a195d-132e-4712-9e28-9035d085d4f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:08 np0005464891 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.386 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[8d2e0c55-1ed3-45f3-991b-fcd8be6ac341]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:08 np0005464891 systemd-udevd[279433]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:48:08 np0005464891 NetworkManager[44940]: <info>  [1759337288.4105] device (tap845fe902-04): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:48:08 np0005464891 NetworkManager[44940]: <info>  [1759337288.4113] device (tap845fe902-04): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.403 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[78e415bc-bd75-4000-9b6c-e990588f96ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.425 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[cbae6a3d-cc38-4f1e-b22d-e6d4b3b13a67]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.466 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[f27325bf-36b2-4cc5-9677-bedc17308daf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.471 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[4668b49e-191b-410c-adc9-ea4711538424]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:08 np0005464891 NetworkManager[44940]: <info>  [1759337288.4722] manager: (tap3401e30b-90): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Oct  1 12:48:08 np0005464891 systemd-udevd[279436]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.510 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[82e02826-be67-4a43-8921-5a07eb69787a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.513 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed1142d-4223-40f9-94da-460d485063b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:08 np0005464891 nova_compute[259907]: 2025-10-01 16:48:08.520 2 DEBUG nova.compute.manager [req-71e328fe-09d2-457e-bf9c-03d4d879175d req-bc750034-0ae3-4578-b4ee-4bf190830347 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Received event network-vif-plugged-845fe902-041f-4c80-897c-0bc9525fbeaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:48:08 np0005464891 nova_compute[259907]: 2025-10-01 16:48:08.521 2 DEBUG oslo_concurrency.lockutils [req-71e328fe-09d2-457e-bf9c-03d4d879175d req-bc750034-0ae3-4578-b4ee-4bf190830347 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:08 np0005464891 nova_compute[259907]: 2025-10-01 16:48:08.522 2 DEBUG oslo_concurrency.lockutils [req-71e328fe-09d2-457e-bf9c-03d4d879175d req-bc750034-0ae3-4578-b4ee-4bf190830347 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:08 np0005464891 nova_compute[259907]: 2025-10-01 16:48:08.522 2 DEBUG oslo_concurrency.lockutils [req-71e328fe-09d2-457e-bf9c-03d4d879175d req-bc750034-0ae3-4578-b4ee-4bf190830347 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:08 np0005464891 nova_compute[259907]: 2025-10-01 16:48:08.523 2 DEBUG nova.compute.manager [req-71e328fe-09d2-457e-bf9c-03d4d879175d req-bc750034-0ae3-4578-b4ee-4bf190830347 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Processing event network-vif-plugged-845fe902-041f-4c80-897c-0bc9525fbeaf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 12:48:08 np0005464891 NetworkManager[44940]: <info>  [1759337288.5437] device (tap3401e30b-90): carrier: link connected
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.550 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[0991ecc5-cc15-4945-8a10-49899b6412a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.571 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[95813b39-0d6b-4996-97dc-c172487bec09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3401e30b-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:b8:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423711, 'reachable_time': 37773, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279465, 'error': None, 'target': 'ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.597 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[ea6b0102-db5a-414c-a971-b2a77080f22d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:b811'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 423711, 'tstamp': 423711}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279467, 'error': None, 'target': 'ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.616 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[5e50ad39-cedc-4485-9a83-d833d765a316]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3401e30b-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:b8:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423711, 'reachable_time': 37773, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279468, 'error': None, 'target': 'ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.647 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[2716c8bd-1a82-4f24-9886-e7848c065742]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.700 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[53aaf873-f20b-4422-9347-7ee2fea24434]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.702 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3401e30b-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.702 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.703 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3401e30b-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:08 np0005464891 kernel: tap3401e30b-90: entered promiscuous mode
Oct  1 12:48:08 np0005464891 NetworkManager[44940]: <info>  [1759337288.7053] manager: (tap3401e30b-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct  1 12:48:08 np0005464891 nova_compute[259907]: 2025-10-01 16:48:08.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.716 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3401e30b-90, col_values=(('external_ids', {'iface-id': '72585314-0d9f-4f28-bd98-a3592b2b3241'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:08 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:08Z|00071|binding|INFO|Releasing lport 72585314-0d9f-4f28-bd98-a3592b2b3241 from this chassis (sb_readonly=0)
Oct  1 12:48:08 np0005464891 nova_compute[259907]: 2025-10-01 16:48:08.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.720 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3401e30b-97c6-4012-a9d4-0114c56bacd5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3401e30b-97c6-4012-a9d4-0114c56bacd5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.720 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[f273a4a0-0969-4733-8dd6-3a41cb076731]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.721 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-3401e30b-97c6-4012-a9d4-0114c56bacd5
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/3401e30b-97c6-4012-a9d4-0114c56bacd5.pid.haproxy
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID 3401e30b-97c6-4012-a9d4-0114c56bacd5
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 12:48:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:08.722 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5', 'env', 'PROCESS_TAG=haproxy-3401e30b-97c6-4012-a9d4-0114c56bacd5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3401e30b-97c6-4012-a9d4-0114c56bacd5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 12:48:08 np0005464891 nova_compute[259907]: 2025-10-01 16:48:08.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1207: 321 pgs: 321 active+clean; 395 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 18 MiB/s wr, 276 op/s
Oct  1 12:48:08 np0005464891 nova_compute[259907]: 2025-10-01 16:48:08.972 2 DEBUG nova.network.neutron [req-94010ac8-0ed4-4c2d-aeac-d0147dfcb3e7 req-8256e034-d7ed-49de-b8c6-9cf6f0848317 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Updated VIF entry in instance network info cache for port 5d498a06-e5b8-4d33-87a1-cfc873bebe29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:48:08 np0005464891 nova_compute[259907]: 2025-10-01 16:48:08.973 2 DEBUG nova.network.neutron [req-94010ac8-0ed4-4c2d-aeac-d0147dfcb3e7 req-8256e034-d7ed-49de-b8c6-9cf6f0848317 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Updating instance_info_cache with network_info: [{"id": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "address": "fa:16:3e:21:ca:d4", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d498a06-e5", "ovs_interfaceid": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.003 2 DEBUG oslo_concurrency.lockutils [req-94010ac8-0ed4-4c2d-aeac-d0147dfcb3e7 req-8256e034-d7ed-49de-b8c6-9cf6f0848317 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:48:09 np0005464891 podman[279542]: 2025-10-01 16:48:09.19526767 +0000 UTC m=+0.127974186 container create 64a57215c9274a26dfe28573cef36324d4859a5d349186ff9af5ff16877239ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  1 12:48:09 np0005464891 podman[279542]: 2025-10-01 16:48:09.12355907 +0000 UTC m=+0.056265586 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 12:48:09 np0005464891 systemd[1]: Started libpod-conmon-64a57215c9274a26dfe28573cef36324d4859a5d349186ff9af5ff16877239ba.scope.
Oct  1 12:48:09 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:48:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb678adcecee5bbc6402527ea1a8ebab3a2e00e3d33c8411b8876bc8501fd7fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 12:48:09 np0005464891 podman[279542]: 2025-10-01 16:48:09.367059405 +0000 UTC m=+0.299765911 container init 64a57215c9274a26dfe28573cef36324d4859a5d349186ff9af5ff16877239ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:48:09 np0005464891 podman[279542]: 2025-10-01 16:48:09.377625657 +0000 UTC m=+0.310332133 container start 64a57215c9274a26dfe28573cef36324d4859a5d349186ff9af5ff16877239ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.391 2 DEBUG nova.compute.manager [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.391 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337289.3904462, 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.392 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] VM Started (Lifecycle Event)#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.394 2 DEBUG nova.virt.libvirt.driver [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.397 2 INFO nova.virt.libvirt.driver [-] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Instance spawned successfully.#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.397 2 DEBUG nova.virt.libvirt.driver [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  1 12:48:09 np0005464891 neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5[279557]: [NOTICE]   (279561) : New worker (279563) forked
Oct  1 12:48:09 np0005464891 neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5[279557]: [NOTICE]   (279561) : Loading success.
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.419 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.425 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.429 2 DEBUG nova.virt.libvirt.driver [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.430 2 DEBUG nova.virt.libvirt.driver [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.431 2 DEBUG nova.virt.libvirt.driver [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.431 2 DEBUG nova.virt.libvirt.driver [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.432 2 DEBUG nova.virt.libvirt.driver [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.432 2 DEBUG nova.virt.libvirt.driver [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.462 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.467 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337289.3906953, 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.467 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] VM Paused (Lifecycle Event)#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.510 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.519 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337289.3943775, 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.520 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] VM Resumed (Lifecycle Event)#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.525 2 INFO nova.compute.manager [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Took 7.72 seconds to spawn the instance on the hypervisor.#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.525 2 DEBUG nova.compute.manager [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.558 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.564 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.588 2 INFO nova.compute.manager [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Took 8.84 seconds to build instance.#033[00m
Oct  1 12:48:09 np0005464891 nova_compute[259907]: 2025-10-01 16:48:09.614 2 DEBUG oslo_concurrency.lockutils [None req-d300c8d6-3d4c-4a8d-b8b1-2d257111c7fe 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:48:10 np0005464891 nova_compute[259907]: 2025-10-01 16:48:10.595 2 DEBUG nova.compute.manager [req-2dc83722-daa9-4f4c-91a1-03fc71b79d5f req-742557ad-aafd-498e-be54-0094c780be24 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Received event network-vif-plugged-845fe902-041f-4c80-897c-0bc9525fbeaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:48:10 np0005464891 nova_compute[259907]: 2025-10-01 16:48:10.596 2 DEBUG oslo_concurrency.lockutils [req-2dc83722-daa9-4f4c-91a1-03fc71b79d5f req-742557ad-aafd-498e-be54-0094c780be24 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:10 np0005464891 nova_compute[259907]: 2025-10-01 16:48:10.596 2 DEBUG oslo_concurrency.lockutils [req-2dc83722-daa9-4f4c-91a1-03fc71b79d5f req-742557ad-aafd-498e-be54-0094c780be24 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:10 np0005464891 nova_compute[259907]: 2025-10-01 16:48:10.596 2 DEBUG oslo_concurrency.lockutils [req-2dc83722-daa9-4f4c-91a1-03fc71b79d5f req-742557ad-aafd-498e-be54-0094c780be24 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:10 np0005464891 nova_compute[259907]: 2025-10-01 16:48:10.596 2 DEBUG nova.compute.manager [req-2dc83722-daa9-4f4c-91a1-03fc71b79d5f req-742557ad-aafd-498e-be54-0094c780be24 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] No waiting events found dispatching network-vif-plugged-845fe902-041f-4c80-897c-0bc9525fbeaf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:48:10 np0005464891 nova_compute[259907]: 2025-10-01 16:48:10.596 2 WARNING nova.compute.manager [req-2dc83722-daa9-4f4c-91a1-03fc71b79d5f req-742557ad-aafd-498e-be54-0094c780be24 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Received unexpected event network-vif-plugged-845fe902-041f-4c80-897c-0bc9525fbeaf for instance with vm_state active and task_state None.#033[00m
Oct  1 12:48:10 np0005464891 nova_compute[259907]: 2025-10-01 16:48:10.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1208: 321 pgs: 321 active+clean; 563 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 31 MiB/s wr, 247 op/s
Oct  1 12:48:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:48:12
Oct  1 12:48:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:48:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:48:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['default.rgw.log', 'images', 'default.rgw.meta', 'volumes', 'default.rgw.control', 'vms', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr']
Oct  1 12:48:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:48:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:48:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:48:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:48:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:48:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:48:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:48:12 np0005464891 nova_compute[259907]: 2025-10-01 16:48:12.189 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759337277.188096, b9ff95de-17ee-4a78-822e-f4c081509b00 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:48:12 np0005464891 nova_compute[259907]: 2025-10-01 16:48:12.190 2 INFO nova.compute.manager [-] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] VM Stopped (Lifecycle Event)#033[00m
Oct  1 12:48:12 np0005464891 nova_compute[259907]: 2025-10-01 16:48:12.211 2 DEBUG nova.compute.manager [None req-784c1558-df03-43dd-95cf-da18c1438a4d - - - - - -] [instance: b9ff95de-17ee-4a78-822e-f4c081509b00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:48:12 np0005464891 nova_compute[259907]: 2025-10-01 16:48:12.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:48:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:48:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:48:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:48:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:48:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:48:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:48:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:48:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:48:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:48:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:12.450 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:12.451 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:12.451 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1209: 321 pgs: 321 active+clean; 699 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 41 MiB/s wr, 192 op/s
Oct  1 12:48:14 np0005464891 nova_compute[259907]: 2025-10-01 16:48:14.301 2 DEBUG nova.compute.manager [req-81109df4-f7d0-47ac-9378-c6f5e0afcff9 req-b12cd34b-2234-4e20-bef9-49e229cc4c78 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Received event network-changed-845fe902-041f-4c80-897c-0bc9525fbeaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:48:14 np0005464891 nova_compute[259907]: 2025-10-01 16:48:14.302 2 DEBUG nova.compute.manager [req-81109df4-f7d0-47ac-9378-c6f5e0afcff9 req-b12cd34b-2234-4e20-bef9-49e229cc4c78 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Refreshing instance network info cache due to event network-changed-845fe902-041f-4c80-897c-0bc9525fbeaf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:48:14 np0005464891 nova_compute[259907]: 2025-10-01 16:48:14.302 2 DEBUG oslo_concurrency.lockutils [req-81109df4-f7d0-47ac-9378-c6f5e0afcff9 req-b12cd34b-2234-4e20-bef9-49e229cc4c78 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:48:14 np0005464891 nova_compute[259907]: 2025-10-01 16:48:14.302 2 DEBUG oslo_concurrency.lockutils [req-81109df4-f7d0-47ac-9378-c6f5e0afcff9 req-b12cd34b-2234-4e20-bef9-49e229cc4c78 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:48:14 np0005464891 nova_compute[259907]: 2025-10-01 16:48:14.303 2 DEBUG nova.network.neutron [req-81109df4-f7d0-47ac-9378-c6f5e0afcff9 req-b12cd34b-2234-4e20-bef9-49e229cc4c78 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Refreshing network info cache for port 845fe902-041f-4c80-897c-0bc9525fbeaf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:48:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1210: 321 pgs: 321 active+clean; 822 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 51 MiB/s wr, 193 op/s
Oct  1 12:48:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:48:15 np0005464891 nova_compute[259907]: 2025-10-01 16:48:15.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:16 np0005464891 nova_compute[259907]: 2025-10-01 16:48:16.317 2 DEBUG nova.network.neutron [req-81109df4-f7d0-47ac-9378-c6f5e0afcff9 req-b12cd34b-2234-4e20-bef9-49e229cc4c78 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Updated VIF entry in instance network info cache for port 845fe902-041f-4c80-897c-0bc9525fbeaf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:48:16 np0005464891 nova_compute[259907]: 2025-10-01 16:48:16.318 2 DEBUG nova.network.neutron [req-81109df4-f7d0-47ac-9378-c6f5e0afcff9 req-b12cd34b-2234-4e20-bef9-49e229cc4c78 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Updating instance_info_cache with network_info: [{"id": "845fe902-041f-4c80-897c-0bc9525fbeaf", "address": "fa:16:3e:d5:01:af", "network": {"id": "3401e30b-97c6-4012-a9d4-0114c56bacd5", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1585052793-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69d5fb4f7a0b4337a1b8774e04c97b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap845fe902-04", "ovs_interfaceid": "845fe902-041f-4c80-897c-0bc9525fbeaf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:48:16 np0005464891 nova_compute[259907]: 2025-10-01 16:48:16.347 2 DEBUG oslo_concurrency.lockutils [req-81109df4-f7d0-47ac-9378-c6f5e0afcff9 req-b12cd34b-2234-4e20-bef9-49e229cc4c78 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:48:16 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:16Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:21:ca:d4 10.100.0.6
Oct  1 12:48:16 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:16Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:21:ca:d4 10.100.0.6
Oct  1 12:48:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 923 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 58 MiB/s wr, 211 op/s
Oct  1 12:48:17 np0005464891 nova_compute[259907]: 2025-10-01 16:48:17.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1212: 321 pgs: 321 active+clean; 1.0 GiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 62 MiB/s wr, 209 op/s
Oct  1 12:48:18 np0005464891 podman[279572]: 2025-10-01 16:48:18.997688629 +0000 UTC m=+0.097297782 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  1 12:48:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Oct  1 12:48:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Oct  1 12:48:20 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Oct  1 12:48:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:48:20 np0005464891 nova_compute[259907]: 2025-10-01 16:48:20.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1214: 321 pgs: 321 active+clean; 1.2 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 67 MiB/s wr, 222 op/s
Oct  1 12:48:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:48:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:48:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:48:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:48:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011054608647522303 of space, bias 1.0, pg target 0.3316382594256691 quantized to 32 (current 32)
Oct  1 12:48:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:48:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0006929479431204011 of space, bias 1.0, pg target 0.20788438293612033 quantized to 32 (current 32)
Oct  1 12:48:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:48:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 12:48:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:48:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.016684739145829262 of space, bias 1.0, pg target 5.005421743748779 quantized to 32 (current 32)
Oct  1 12:48:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:48:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006002962818258775 quantized to 16 (current 32)
Oct  1 12:48:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:48:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:48:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:48:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.503703522823468e-05 quantized to 32 (current 32)
Oct  1 12:48:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:48:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006378147994399948 quantized to 32 (current 32)
Oct  1 12:48:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:48:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:48:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:48:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015007407045646937 quantized to 32 (current 32)
Oct  1 12:48:22 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:22Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d5:01:af 10.100.0.11
Oct  1 12:48:22 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:22Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d5:01:af 10.100.0.11
Oct  1 12:48:22 np0005464891 nova_compute[259907]: 2025-10-01 16:48:22.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Oct  1 12:48:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Oct  1 12:48:22 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Oct  1 12:48:22 np0005464891 nova_compute[259907]: 2025-10-01 16:48:22.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:48:22 np0005464891 nova_compute[259907]: 2025-10-01 16:48:22.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:48:22 np0005464891 nova_compute[259907]: 2025-10-01 16:48:22.806 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 12:48:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1216: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 899 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 54 MiB/s wr, 290 op/s
Oct  1 12:48:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:48:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1601413791' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:48:23 np0005464891 nova_compute[259907]: 2025-10-01 16:48:23.640 2 DEBUG oslo_concurrency.lockutils [None req-6eb29eac-b311-4e65-9e05-fb7a953a5cf3 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquiring lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:23 np0005464891 nova_compute[259907]: 2025-10-01 16:48:23.641 2 DEBUG oslo_concurrency.lockutils [None req-6eb29eac-b311-4e65-9e05-fb7a953a5cf3 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:48:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1601413791' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:48:23 np0005464891 nova_compute[259907]: 2025-10-01 16:48:23.657 2 DEBUG nova.objects.instance [None req-6eb29eac-b311-4e65-9e05-fb7a953a5cf3 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lazy-loading 'flavor' on Instance uuid 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:48:23 np0005464891 nova_compute[259907]: 2025-10-01 16:48:23.729 2 DEBUG oslo_concurrency.lockutils [None req-6eb29eac-b311-4e65-9e05-fb7a953a5cf3 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:23 np0005464891 nova_compute[259907]: 2025-10-01 16:48:23.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:48:23 np0005464891 nova_compute[259907]: 2025-10-01 16:48:23.805 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 12:48:23 np0005464891 nova_compute[259907]: 2025-10-01 16:48:23.805 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 12:48:23 np0005464891 nova_compute[259907]: 2025-10-01 16:48:23.987 2 DEBUG oslo_concurrency.lockutils [None req-6eb29eac-b311-4e65-9e05-fb7a953a5cf3 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquiring lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:23 np0005464891 nova_compute[259907]: 2025-10-01 16:48:23.988 2 DEBUG oslo_concurrency.lockutils [None req-6eb29eac-b311-4e65-9e05-fb7a953a5cf3 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:23 np0005464891 nova_compute[259907]: 2025-10-01 16:48:23.988 2 INFO nova.compute.manager [None req-6eb29eac-b311-4e65-9e05-fb7a953a5cf3 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Attaching volume 9fccdcc6-0843-49bf-808d-af2b28d5c283 to /dev/vdb#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.090 2 DEBUG os_brick.utils [None req-6eb29eac-b311-4e65-9e05-fb7a953a5cf3 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.092 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.112 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.112 741 DEBUG oslo.privsep.daemon [-] privsep: reply[1da096ef-d8dd-475f-8a73-2c7c6f9851c9]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.113 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.125 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.126 741 DEBUG oslo.privsep.daemon [-] privsep: reply[f24fb4f0-4ce8-49e8-9120-fa41b2404316]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.127 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.141 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.141 741 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e88254-4c31-414f-955f-546cf99b414e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.142 741 DEBUG oslo.privsep.daemon [-] privsep: reply[0804f313-1599-47d4-b99c-bc6eb8027632]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.143 2 DEBUG oslo_concurrency.processutils [None req-6eb29eac-b311-4e65-9e05-fb7a953a5cf3 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.174 2 DEBUG oslo_concurrency.processutils [None req-6eb29eac-b311-4e65-9e05-fb7a953a5cf3 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.178 2 DEBUG os_brick.initiator.connectors.lightos [None req-6eb29eac-b311-4e65-9e05-fb7a953a5cf3 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.179 2 DEBUG os_brick.initiator.connectors.lightos [None req-6eb29eac-b311-4e65-9e05-fb7a953a5cf3 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.179 2 DEBUG os_brick.initiator.connectors.lightos [None req-6eb29eac-b311-4e65-9e05-fb7a953a5cf3 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.180 2 DEBUG os_brick.utils [None req-6eb29eac-b311-4e65-9e05-fb7a953a5cf3 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] <== get_connector_properties: return (89ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.180 2 DEBUG nova.virt.block_device [None req-6eb29eac-b311-4e65-9e05-fb7a953a5cf3 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Updating existing volume attachment record: 7d971cbf-ae21-4bcd-98ad-fc450c83e1c0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 12:48:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:48:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1330646379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.825 2 DEBUG nova.objects.instance [None req-6eb29eac-b311-4e65-9e05-fb7a953a5cf3 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lazy-loading 'flavor' on Instance uuid 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.849 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "refresh_cache-4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.849 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquired lock "refresh_cache-4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.850 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.850 2 DEBUG nova.objects.instance [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.855 2 DEBUG nova.virt.libvirt.driver [None req-6eb29eac-b311-4e65-9e05-fb7a953a5cf3 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Attempting to attach volume 9fccdcc6-0843-49bf-808d-af2b28d5c283 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.858 2 DEBUG nova.virt.libvirt.guest [None req-6eb29eac-b311-4e65-9e05-fb7a953a5cf3 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] attach device xml: <disk type="network" device="disk">
Oct  1 12:48:24 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:48:24 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-9fccdcc6-0843-49bf-808d-af2b28d5c283">
Oct  1 12:48:24 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:48:24 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:48:24 np0005464891 nova_compute[259907]:  <auth username="openstack">
Oct  1 12:48:24 np0005464891 nova_compute[259907]:    <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:48:24 np0005464891 nova_compute[259907]:  </auth>
Oct  1 12:48:24 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:48:24 np0005464891 nova_compute[259907]:  <serial>9fccdcc6-0843-49bf-808d-af2b28d5c283</serial>
Oct  1 12:48:24 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:48:24 np0005464891 nova_compute[259907]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  1 12:48:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1217: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 665 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 936 KiB/s rd, 44 MiB/s wr, 244 op/s
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.965 2 DEBUG nova.virt.libvirt.driver [None req-6eb29eac-b311-4e65-9e05-fb7a953a5cf3 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.966 2 DEBUG nova.virt.libvirt.driver [None req-6eb29eac-b311-4e65-9e05-fb7a953a5cf3 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.967 2 DEBUG nova.virt.libvirt.driver [None req-6eb29eac-b311-4e65-9e05-fb7a953a5cf3 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:48:24 np0005464891 nova_compute[259907]: 2025-10-01 16:48:24.967 2 DEBUG nova.virt.libvirt.driver [None req-6eb29eac-b311-4e65-9e05-fb7a953a5cf3 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] No VIF found with MAC fa:16:3e:21:ca:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:48:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:48:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:48:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:48:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:48:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:48:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:48:25 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev df0f199f-80ac-4945-bf8f-cd1f3de31242 does not exist
Oct  1 12:48:25 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 0d6d825f-c4a0-4b17-bfef-fd0489d8cadc does not exist
Oct  1 12:48:25 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 9f23a741-44d6-47fe-b692-2f33b92a4ce4 does not exist
Oct  1 12:48:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:48:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:48:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:48:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:48:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:48:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:48:25 np0005464891 nova_compute[259907]: 2025-10-01 16:48:25.175 2 DEBUG oslo_concurrency.lockutils [None req-6eb29eac-b311-4e65-9e05-fb7a953a5cf3 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:25 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:48:25 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:48:25 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:48:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:48:25 np0005464891 nova_compute[259907]: 2025-10-01 16:48:25.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:25 np0005464891 podman[279890]: 2025-10-01 16:48:25.758676489 +0000 UTC m=+0.050662277 container create b7c94f44ba11edfe620b5a1328de1d1a352a474c50cd43ec4206e633b1c95e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kepler, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 12:48:25 np0005464891 systemd[1]: Started libpod-conmon-b7c94f44ba11edfe620b5a1328de1d1a352a474c50cd43ec4206e633b1c95e90.scope.
Oct  1 12:48:25 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:48:25 np0005464891 podman[279890]: 2025-10-01 16:48:25.734239756 +0000 UTC m=+0.026225594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:48:25 np0005464891 podman[279890]: 2025-10-01 16:48:25.84361741 +0000 UTC m=+0.135603218 container init b7c94f44ba11edfe620b5a1328de1d1a352a474c50cd43ec4206e633b1c95e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:48:25 np0005464891 podman[279890]: 2025-10-01 16:48:25.852097534 +0000 UTC m=+0.144083322 container start b7c94f44ba11edfe620b5a1328de1d1a352a474c50cd43ec4206e633b1c95e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct  1 12:48:25 np0005464891 goofy_kepler[279907]: 167 167
Oct  1 12:48:25 np0005464891 systemd[1]: libpod-b7c94f44ba11edfe620b5a1328de1d1a352a474c50cd43ec4206e633b1c95e90.scope: Deactivated successfully.
Oct  1 12:48:25 np0005464891 podman[279890]: 2025-10-01 16:48:25.86029043 +0000 UTC m=+0.152276248 container attach b7c94f44ba11edfe620b5a1328de1d1a352a474c50cd43ec4206e633b1c95e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kepler, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:48:25 np0005464891 podman[279890]: 2025-10-01 16:48:25.860977589 +0000 UTC m=+0.152963377 container died b7c94f44ba11edfe620b5a1328de1d1a352a474c50cd43ec4206e633b1c95e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kepler, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:48:25 np0005464891 systemd[1]: var-lib-containers-storage-overlay-c11f5681e8696168486728fc861c63c3cbbdd945dee2c94c271279070376a1ca-merged.mount: Deactivated successfully.
Oct  1 12:48:25 np0005464891 podman[279890]: 2025-10-01 16:48:25.903341176 +0000 UTC m=+0.195326964 container remove b7c94f44ba11edfe620b5a1328de1d1a352a474c50cd43ec4206e633b1c95e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kepler, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Oct  1 12:48:25 np0005464891 systemd[1]: libpod-conmon-b7c94f44ba11edfe620b5a1328de1d1a352a474c50cd43ec4206e633b1c95e90.scope: Deactivated successfully.
Oct  1 12:48:26 np0005464891 podman[279930]: 2025-10-01 16:48:26.11062421 +0000 UTC m=+0.072033216 container create f347160d84ec86cb946de77d491eebb3f2f7afecb8e40aec3da8bcb38842069f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 12:48:26 np0005464891 podman[279930]: 2025-10-01 16:48:26.067882582 +0000 UTC m=+0.029291608 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:48:26 np0005464891 systemd[1]: Started libpod-conmon-f347160d84ec86cb946de77d491eebb3f2f7afecb8e40aec3da8bcb38842069f.scope.
Oct  1 12:48:26 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:48:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ea8696ebf30b9ba5c9b7af8c1fa84114a54b5e142cac36e723ba99d98b31d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:48:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ea8696ebf30b9ba5c9b7af8c1fa84114a54b5e142cac36e723ba99d98b31d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:48:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ea8696ebf30b9ba5c9b7af8c1fa84114a54b5e142cac36e723ba99d98b31d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:48:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ea8696ebf30b9ba5c9b7af8c1fa84114a54b5e142cac36e723ba99d98b31d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:48:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ea8696ebf30b9ba5c9b7af8c1fa84114a54b5e142cac36e723ba99d98b31d3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:48:26 np0005464891 podman[279930]: 2025-10-01 16:48:26.426262011 +0000 UTC m=+0.387671077 container init f347160d84ec86cb946de77d491eebb3f2f7afecb8e40aec3da8bcb38842069f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_feistel, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:48:26 np0005464891 podman[279930]: 2025-10-01 16:48:26.438843618 +0000 UTC m=+0.400252664 container start f347160d84ec86cb946de77d491eebb3f2f7afecb8e40aec3da8bcb38842069f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_feistel, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:48:26 np0005464891 nova_compute[259907]: 2025-10-01 16:48:26.504 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Updating instance_info_cache with network_info: [{"id": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "address": "fa:16:3e:21:ca:d4", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d498a06-e5", "ovs_interfaceid": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:48:26 np0005464891 nova_compute[259907]: 2025-10-01 16:48:26.526 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Releasing lock "refresh_cache-4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:48:26 np0005464891 nova_compute[259907]: 2025-10-01 16:48:26.526 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  1 12:48:26 np0005464891 nova_compute[259907]: 2025-10-01 16:48:26.527 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:48:26 np0005464891 nova_compute[259907]: 2025-10-01 16:48:26.527 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:48:26 np0005464891 nova_compute[259907]: 2025-10-01 16:48:26.527 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:48:26 np0005464891 nova_compute[259907]: 2025-10-01 16:48:26.527 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:48:26 np0005464891 podman[279930]: 2025-10-01 16:48:26.576880483 +0000 UTC m=+0.538289539 container attach f347160d84ec86cb946de77d491eebb3f2f7afecb8e40aec3da8bcb38842069f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct  1 12:48:26 np0005464891 nova_compute[259907]: 2025-10-01 16:48:26.689 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:26 np0005464891 nova_compute[259907]: 2025-10-01 16:48:26.689 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:26 np0005464891 nova_compute[259907]: 2025-10-01 16:48:26.689 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:26 np0005464891 nova_compute[259907]: 2025-10-01 16:48:26.690 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 12:48:26 np0005464891 nova_compute[259907]: 2025-10-01 16:48:26.690 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1218: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 293 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 999 KiB/s rd, 30 MiB/s wr, 236 op/s
Oct  1 12:48:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:48:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/71247959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.156 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.256 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.256 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.257 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.263 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.263 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.323 2 DEBUG oslo_concurrency.lockutils [None req-f222a8df-cb77-463d-8db3-4ac32e71306b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquiring lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.324 2 DEBUG oslo_concurrency.lockutils [None req-f222a8df-cb77-463d-8db3-4ac32e71306b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.341 2 INFO nova.compute.manager [None req-f222a8df-cb77-463d-8db3-4ac32e71306b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Detaching volume 9fccdcc6-0843-49bf-808d-af2b28d5c283#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.443 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.444 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4204MB free_disk=59.90909957885742GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.444 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.445 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.454 2 INFO nova.virt.block_device [None req-f222a8df-cb77-463d-8db3-4ac32e71306b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Attempting to driver detach volume 9fccdcc6-0843-49bf-808d-af2b28d5c283 from mountpoint /dev/vdb#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.460 2 DEBUG nova.virt.libvirt.driver [None req-f222a8df-cb77-463d-8db3-4ac32e71306b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Attempting to detach device vdb from instance 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.460 2 DEBUG nova.virt.libvirt.guest [None req-f222a8df-cb77-463d-8db3-4ac32e71306b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:48:27 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:48:27 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-9fccdcc6-0843-49bf-808d-af2b28d5c283">
Oct  1 12:48:27 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:48:27 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:48:27 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:48:27 np0005464891 nova_compute[259907]:  <serial>9fccdcc6-0843-49bf-808d-af2b28d5c283</serial>
Oct  1 12:48:27 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:48:27 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:48:27 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.466 2 INFO nova.virt.libvirt.driver [None req-f222a8df-cb77-463d-8db3-4ac32e71306b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Successfully detached device vdb from instance 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83 from the persistent domain config.#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.466 2 DEBUG nova.virt.libvirt.driver [None req-f222a8df-cb77-463d-8db3-4ac32e71306b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.466 2 DEBUG nova.virt.libvirt.guest [None req-f222a8df-cb77-463d-8db3-4ac32e71306b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:48:27 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:48:27 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-9fccdcc6-0843-49bf-808d-af2b28d5c283">
Oct  1 12:48:27 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:48:27 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:48:27 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:48:27 np0005464891 nova_compute[259907]:  <serial>9fccdcc6-0843-49bf-808d-af2b28d5c283</serial>
Oct  1 12:48:27 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:48:27 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:48:27 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.518 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.518 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.519 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.519 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.574 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:27 np0005464891 modest_feistel[279946]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:48:27 np0005464891 modest_feistel[279946]: --> relative data size: 1.0
Oct  1 12:48:27 np0005464891 modest_feistel[279946]: --> All data devices are unavailable
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.599 2 DEBUG nova.virt.libvirt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Received event <DeviceRemovedEvent: 1759337307.584498, 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.600 2 DEBUG nova.virt.libvirt.driver [None req-f222a8df-cb77-463d-8db3-4ac32e71306b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.603 2 INFO nova.virt.libvirt.driver [None req-f222a8df-cb77-463d-8db3-4ac32e71306b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Successfully detached device vdb from instance 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83 from the live domain config.#033[00m
Oct  1 12:48:27 np0005464891 systemd[1]: libpod-f347160d84ec86cb946de77d491eebb3f2f7afecb8e40aec3da8bcb38842069f.scope: Deactivated successfully.
Oct  1 12:48:27 np0005464891 systemd[1]: libpod-f347160d84ec86cb946de77d491eebb3f2f7afecb8e40aec3da8bcb38842069f.scope: Consumed 1.080s CPU time.
Oct  1 12:48:27 np0005464891 podman[279930]: 2025-10-01 16:48:27.619890822 +0000 UTC m=+1.581299818 container died f347160d84ec86cb946de77d491eebb3f2f7afecb8e40aec3da8bcb38842069f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct  1 12:48:27 np0005464891 systemd[1]: var-lib-containers-storage-overlay-f6ea8696ebf30b9ba5c9b7af8c1fa84114a54b5e142cac36e723ba99d98b31d3-merged.mount: Deactivated successfully.
Oct  1 12:48:27 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:27Z|00072|binding|INFO|Releasing lport 72585314-0d9f-4f28-bd98-a3592b2b3241 from this chassis (sb_readonly=0)
Oct  1 12:48:27 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:27Z|00073|binding|INFO|Releasing lport c2ef6608-b2db-40dc-8fde-a94b501b7f75 from this chassis (sb_readonly=0)
Oct  1 12:48:27 np0005464891 podman[279930]: 2025-10-01 16:48:27.689782638 +0000 UTC m=+1.651191644 container remove f347160d84ec86cb946de77d491eebb3f2f7afecb8e40aec3da8bcb38842069f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_feistel, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 12:48:27 np0005464891 systemd[1]: libpod-conmon-f347160d84ec86cb946de77d491eebb3f2f7afecb8e40aec3da8bcb38842069f.scope: Deactivated successfully.
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.756 2 DEBUG nova.objects.instance [None req-f222a8df-cb77-463d-8db3-4ac32e71306b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lazy-loading 'flavor' on Instance uuid 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:27 np0005464891 nova_compute[259907]: 2025-10-01 16:48:27.803 2 DEBUG oslo_concurrency.lockutils [None req-f222a8df-cb77-463d-8db3-4ac32e71306b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.479s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:48:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/572867705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:48:28 np0005464891 nova_compute[259907]: 2025-10-01 16:48:28.039 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:28 np0005464891 nova_compute[259907]: 2025-10-01 16:48:28.044 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:48:28 np0005464891 nova_compute[259907]: 2025-10-01 16:48:28.057 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:48:28 np0005464891 nova_compute[259907]: 2025-10-01 16:48:28.075 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 12:48:28 np0005464891 nova_compute[259907]: 2025-10-01 16:48:28.075 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:28 np0005464891 podman[280177]: 2025-10-01 16:48:28.33633263 +0000 UTC m=+0.043429949 container create 26eaab868d3364097e9d62dc7a951ef7107933d17bca1ab4c7fcfb9cb8cf0171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bhaskara, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Oct  1 12:48:28 np0005464891 systemd[1]: Started libpod-conmon-26eaab868d3364097e9d62dc7a951ef7107933d17bca1ab4c7fcfb9cb8cf0171.scope.
Oct  1 12:48:28 np0005464891 podman[280177]: 2025-10-01 16:48:28.313043038 +0000 UTC m=+0.020140367 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:48:28 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:48:28 np0005464891 podman[280177]: 2025-10-01 16:48:28.432538892 +0000 UTC m=+0.139636241 container init 26eaab868d3364097e9d62dc7a951ef7107933d17bca1ab4c7fcfb9cb8cf0171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 12:48:28 np0005464891 podman[280177]: 2025-10-01 16:48:28.441506059 +0000 UTC m=+0.148603358 container start 26eaab868d3364097e9d62dc7a951ef7107933d17bca1ab4c7fcfb9cb8cf0171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bhaskara, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 12:48:28 np0005464891 podman[280177]: 2025-10-01 16:48:28.44515193 +0000 UTC m=+0.152249289 container attach 26eaab868d3364097e9d62dc7a951ef7107933d17bca1ab4c7fcfb9cb8cf0171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bhaskara, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:48:28 np0005464891 blissful_bhaskara[280193]: 167 167
Oct  1 12:48:28 np0005464891 podman[280177]: 2025-10-01 16:48:28.447916346 +0000 UTC m=+0.155013645 container died 26eaab868d3364097e9d62dc7a951ef7107933d17bca1ab4c7fcfb9cb8cf0171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 12:48:28 np0005464891 systemd[1]: libpod-26eaab868d3364097e9d62dc7a951ef7107933d17bca1ab4c7fcfb9cb8cf0171.scope: Deactivated successfully.
Oct  1 12:48:28 np0005464891 systemd[1]: var-lib-containers-storage-overlay-9c2b65e8163b876645609b1bf9010bc662771cb7ff5e2513ef2f3697dae2921e-merged.mount: Deactivated successfully.
Oct  1 12:48:28 np0005464891 podman[280177]: 2025-10-01 16:48:28.485417619 +0000 UTC m=+0.192514918 container remove 26eaab868d3364097e9d62dc7a951ef7107933d17bca1ab4c7fcfb9cb8cf0171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_bhaskara, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 12:48:28 np0005464891 systemd[1]: libpod-conmon-26eaab868d3364097e9d62dc7a951ef7107933d17bca1ab4c7fcfb9cb8cf0171.scope: Deactivated successfully.
Oct  1 12:48:28 np0005464891 podman[280216]: 2025-10-01 16:48:28.679826948 +0000 UTC m=+0.045216048 container create cca8e6881135bc088d81ebe2f4c497163b6c1e0389098bd871c6a43b90a4e842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:48:28 np0005464891 systemd[1]: Started libpod-conmon-cca8e6881135bc088d81ebe2f4c497163b6c1e0389098bd871c6a43b90a4e842.scope.
Oct  1 12:48:28 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:48:28 np0005464891 podman[280216]: 2025-10-01 16:48:28.663107938 +0000 UTC m=+0.028497068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:48:28 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60126b9dc5db4d234cd8af69ac4fdf519d1bd3fb9f8c9de91026c1ee3bd88b4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:48:28 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60126b9dc5db4d234cd8af69ac4fdf519d1bd3fb9f8c9de91026c1ee3bd88b4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:48:28 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60126b9dc5db4d234cd8af69ac4fdf519d1bd3fb9f8c9de91026c1ee3bd88b4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:48:28 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60126b9dc5db4d234cd8af69ac4fdf519d1bd3fb9f8c9de91026c1ee3bd88b4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:48:28 np0005464891 podman[280216]: 2025-10-01 16:48:28.779774573 +0000 UTC m=+0.145163733 container init cca8e6881135bc088d81ebe2f4c497163b6c1e0389098bd871c6a43b90a4e842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_noyce, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 12:48:28 np0005464891 podman[280216]: 2025-10-01 16:48:28.789026628 +0000 UTC m=+0.154415728 container start cca8e6881135bc088d81ebe2f4c497163b6c1e0389098bd871c6a43b90a4e842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_noyce, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:48:28 np0005464891 podman[280216]: 2025-10-01 16:48:28.792231607 +0000 UTC m=+0.157620717 container attach cca8e6881135bc088d81ebe2f4c497163b6c1e0389098bd871c6a43b90a4e842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct  1 12:48:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1219: 321 pgs: 321 active+clean; 295 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 513 KiB/s rd, 8.8 MiB/s wr, 184 op/s
Oct  1 12:48:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Oct  1 12:48:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Oct  1 12:48:29 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]: {
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:    "0": [
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:        {
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "devices": [
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "/dev/loop3"
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            ],
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "lv_name": "ceph_lv0",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "lv_size": "21470642176",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "name": "ceph_lv0",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "tags": {
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.cluster_name": "ceph",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.crush_device_class": "",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.encrypted": "0",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.osd_id": "0",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.type": "block",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.vdo": "0"
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            },
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "type": "block",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "vg_name": "ceph_vg0"
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:        }
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:    ],
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:    "1": [
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:        {
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "devices": [
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "/dev/loop4"
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            ],
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "lv_name": "ceph_lv1",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "lv_size": "21470642176",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "name": "ceph_lv1",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "tags": {
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.cluster_name": "ceph",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.crush_device_class": "",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.encrypted": "0",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.osd_id": "1",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.type": "block",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.vdo": "0"
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            },
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "type": "block",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "vg_name": "ceph_vg1"
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:        }
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:    ],
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:    "2": [
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:        {
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "devices": [
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "/dev/loop5"
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            ],
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "lv_name": "ceph_lv2",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "lv_size": "21470642176",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "name": "ceph_lv2",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "tags": {
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.cluster_name": "ceph",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.crush_device_class": "",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.encrypted": "0",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.osd_id": "2",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.type": "block",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:                "ceph.vdo": "0"
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            },
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "type": "block",
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:            "vg_name": "ceph_vg2"
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:        }
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]:    ]
Oct  1 12:48:29 np0005464891 wizardly_noyce[280232]: }
Oct  1 12:48:29 np0005464891 systemd[1]: libpod-cca8e6881135bc088d81ebe2f4c497163b6c1e0389098bd871c6a43b90a4e842.scope: Deactivated successfully.
Oct  1 12:48:29 np0005464891 podman[280216]: 2025-10-01 16:48:29.56477471 +0000 UTC m=+0.930163820 container died cca8e6881135bc088d81ebe2f4c497163b6c1e0389098bd871c6a43b90a4e842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_noyce, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:48:29 np0005464891 systemd[1]: var-lib-containers-storage-overlay-60126b9dc5db4d234cd8af69ac4fdf519d1bd3fb9f8c9de91026c1ee3bd88b4a-merged.mount: Deactivated successfully.
Oct  1 12:48:29 np0005464891 podman[280216]: 2025-10-01 16:48:29.652785977 +0000 UTC m=+1.018175077 container remove cca8e6881135bc088d81ebe2f4c497163b6c1e0389098bd871c6a43b90a4e842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_noyce, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:48:29 np0005464891 systemd[1]: libpod-conmon-cca8e6881135bc088d81ebe2f4c497163b6c1e0389098bd871c6a43b90a4e842.scope: Deactivated successfully.
Oct  1 12:48:29 np0005464891 podman[280241]: 2025-10-01 16:48:29.707380491 +0000 UTC m=+0.097734195 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.104 2 DEBUG oslo_concurrency.lockutils [None req-7788f015-514c-4245-973b-624c4da82bfd 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquiring lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.105 2 DEBUG oslo_concurrency.lockutils [None req-7788f015-514c-4245-973b-624c4da82bfd 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.118 2 DEBUG nova.objects.instance [None req-7788f015-514c-4245-973b-624c4da82bfd 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lazy-loading 'flavor' on Instance uuid 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.146 2 INFO nova.virt.libvirt.driver [None req-7788f015-514c-4245-973b-624c4da82bfd 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Ignoring supplied device name: /dev/vdb#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.164 2 DEBUG oslo_concurrency.lockutils [None req-7788f015-514c-4245-973b-624c4da82bfd 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:30 np0005464891 podman[280416]: 2025-10-01 16:48:30.292982613 +0000 UTC m=+0.045292499 container create 4791eac7f0f3093eeb26ba13a812399e290b37258641e8879b8d5949da3b5a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nightingale, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:48:30 np0005464891 systemd[1]: Started libpod-conmon-4791eac7f0f3093eeb26ba13a812399e290b37258641e8879b8d5949da3b5a56.scope.
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.353 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.355 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:48:30 np0005464891 podman[280416]: 2025-10-01 16:48:30.270770261 +0000 UTC m=+0.023080167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.370 2 DEBUG oslo_concurrency.lockutils [None req-7788f015-514c-4245-973b-624c4da82bfd 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquiring lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.371 2 DEBUG oslo_concurrency.lockutils [None req-7788f015-514c-4245-973b-624c4da82bfd 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.371 2 INFO nova.compute.manager [None req-7788f015-514c-4245-973b-624c4da82bfd 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Attaching volume ce80aa53-4d70-4d07-9413-64fd0af6dd95 to /dev/vdb#033[00m
Oct  1 12:48:30 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:48:30 np0005464891 podman[280416]: 2025-10-01 16:48:30.391738786 +0000 UTC m=+0.144048722 container init 4791eac7f0f3093eeb26ba13a812399e290b37258641e8879b8d5949da3b5a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nightingale, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 12:48:30 np0005464891 podman[280416]: 2025-10-01 16:48:30.400000473 +0000 UTC m=+0.152310359 container start 4791eac7f0f3093eeb26ba13a812399e290b37258641e8879b8d5949da3b5a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nightingale, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 12:48:30 np0005464891 podman[280416]: 2025-10-01 16:48:30.402991116 +0000 UTC m=+0.155301052 container attach 4791eac7f0f3093eeb26ba13a812399e290b37258641e8879b8d5949da3b5a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:48:30 np0005464891 agitated_nightingale[280433]: 167 167
Oct  1 12:48:30 np0005464891 systemd[1]: libpod-4791eac7f0f3093eeb26ba13a812399e290b37258641e8879b8d5949da3b5a56.scope: Deactivated successfully.
Oct  1 12:48:30 np0005464891 podman[280416]: 2025-10-01 16:48:30.408160407 +0000 UTC m=+0.160470303 container died 4791eac7f0f3093eeb26ba13a812399e290b37258641e8879b8d5949da3b5a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nightingale, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 12:48:30 np0005464891 systemd[1]: var-lib-containers-storage-overlay-337a7736e42fc7712b63f2a859e4aa62c8a4665fc184fff3a4ff8a442112d132-merged.mount: Deactivated successfully.
Oct  1 12:48:30 np0005464891 podman[280416]: 2025-10-01 16:48:30.444217421 +0000 UTC m=+0.196527307 container remove 4791eac7f0f3093eeb26ba13a812399e290b37258641e8879b8d5949da3b5a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:48:30 np0005464891 systemd[1]: libpod-conmon-4791eac7f0f3093eeb26ba13a812399e290b37258641e8879b8d5949da3b5a56.scope: Deactivated successfully.
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.496 2 DEBUG os_brick.utils [None req-7788f015-514c-4245-973b-624c4da82bfd 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.497 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.515 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.516 741 DEBUG oslo.privsep.daemon [-] privsep: reply[410d8a3c-8703-4dce-aeb3-2b3f093182af]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.518 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.530 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.531 741 DEBUG oslo.privsep.daemon [-] privsep: reply[f494bb25-e71e-4f41-b47a-2bc127ea7556]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.532 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.546 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.546 741 DEBUG oslo.privsep.daemon [-] privsep: reply[868dd82d-2dda-4e08-9195-429e512aedee]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.548 741 DEBUG oslo.privsep.daemon [-] privsep: reply[19fa310e-c09d-48ac-885a-1670ba45a02b]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.548 2 DEBUG oslo_concurrency.processutils [None req-7788f015-514c-4245-973b-624c4da82bfd 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.568 2 DEBUG oslo_concurrency.processutils [None req-7788f015-514c-4245-973b-624c4da82bfd 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.570 2 DEBUG os_brick.initiator.connectors.lightos [None req-7788f015-514c-4245-973b-624c4da82bfd 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.570 2 DEBUG os_brick.initiator.connectors.lightos [None req-7788f015-514c-4245-973b-624c4da82bfd 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.571 2 DEBUG os_brick.initiator.connectors.lightos [None req-7788f015-514c-4245-973b-624c4da82bfd 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.571 2 DEBUG os_brick.utils [None req-7788f015-514c-4245-973b-624c4da82bfd 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] <== get_connector_properties: return (74ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.571 2 DEBUG nova.virt.block_device [None req-7788f015-514c-4245-973b-624c4da82bfd 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Updating existing volume attachment record: a5a0ce8f-98d4-404b-bacb-f46a47c51674 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 12:48:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:48:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Oct  1 12:48:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Oct  1 12:48:30 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.659 2 DEBUG nova.compute.manager [None req-6e771eda-a46c-4d98-bbba-9729de4bdb5c 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:48:30 np0005464891 podman[280465]: 2025-10-01 16:48:30.687818546 +0000 UTC m=+0.061669321 container create e485291ca4931d37c8f605678715a0589bae3ecac03b7ad76e7da5085dc911e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:48:30 np0005464891 nova_compute[259907]: 2025-10-01 16:48:30.698 2 INFO nova.compute.manager [None req-6e771eda-a46c-4d98-bbba-9729de4bdb5c 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] instance snapshotting#033[00m
Oct  1 12:48:30 np0005464891 systemd[1]: Started libpod-conmon-e485291ca4931d37c8f605678715a0589bae3ecac03b7ad76e7da5085dc911e2.scope.
Oct  1 12:48:30 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:48:30 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda381abeabcb6283bfcb36e7605f6e9a0d70168e89cea8ef79041cb9d83c89a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:48:30 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda381abeabcb6283bfcb36e7605f6e9a0d70168e89cea8ef79041cb9d83c89a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:48:30 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda381abeabcb6283bfcb36e7605f6e9a0d70168e89cea8ef79041cb9d83c89a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:48:30 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda381abeabcb6283bfcb36e7605f6e9a0d70168e89cea8ef79041cb9d83c89a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:48:30 np0005464891 podman[280465]: 2025-10-01 16:48:30.670304023 +0000 UTC m=+0.044154828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:48:30 np0005464891 podman[280465]: 2025-10-01 16:48:30.77535791 +0000 UTC m=+0.149208705 container init e485291ca4931d37c8f605678715a0589bae3ecac03b7ad76e7da5085dc911e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 12:48:30 np0005464891 podman[280465]: 2025-10-01 16:48:30.782306871 +0000 UTC m=+0.156157646 container start e485291ca4931d37c8f605678715a0589bae3ecac03b7ad76e7da5085dc911e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:48:30 np0005464891 podman[280465]: 2025-10-01 16:48:30.786178527 +0000 UTC m=+0.160029352 container attach e485291ca4931d37c8f605678715a0589bae3ecac03b7ad76e7da5085dc911e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cannon, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:48:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1222: 321 pgs: 321 active+clean; 295 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 5.9 MiB/s wr, 116 op/s
Oct  1 12:48:31 np0005464891 nova_compute[259907]: 2025-10-01 16:48:31.087 2 INFO nova.virt.libvirt.driver [None req-6e771eda-a46c-4d98-bbba-9729de4bdb5c 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Beginning live snapshot process#033[00m
Oct  1 12:48:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:48:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1538631266' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:48:31 np0005464891 nova_compute[259907]: 2025-10-01 16:48:31.223 2 DEBUG nova.virt.libvirt.imagebackend [None req-6e771eda-a46c-4d98-bbba-9729de4bdb5c 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] No parent info for f01c1e7c-fea3-4433-a44a-d71153552c78; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  1 12:48:31 np0005464891 nova_compute[259907]: 2025-10-01 16:48:31.232 2 DEBUG nova.objects.instance [None req-7788f015-514c-4245-973b-624c4da82bfd 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lazy-loading 'flavor' on Instance uuid 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:48:31 np0005464891 nova_compute[259907]: 2025-10-01 16:48:31.257 2 DEBUG nova.virt.libvirt.driver [None req-7788f015-514c-4245-973b-624c4da82bfd 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Attempting to attach volume ce80aa53-4d70-4d07-9413-64fd0af6dd95 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  1 12:48:31 np0005464891 nova_compute[259907]: 2025-10-01 16:48:31.260 2 DEBUG nova.virt.libvirt.guest [None req-7788f015-514c-4245-973b-624c4da82bfd 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] attach device xml: <disk type="network" device="disk">
Oct  1 12:48:31 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:48:31 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-ce80aa53-4d70-4d07-9413-64fd0af6dd95">
Oct  1 12:48:31 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:48:31 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:48:31 np0005464891 nova_compute[259907]:  <auth username="openstack">
Oct  1 12:48:31 np0005464891 nova_compute[259907]:    <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:48:31 np0005464891 nova_compute[259907]:  </auth>
Oct  1 12:48:31 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:48:31 np0005464891 nova_compute[259907]:  <serial>ce80aa53-4d70-4d07-9413-64fd0af6dd95</serial>
Oct  1 12:48:31 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:48:31 np0005464891 nova_compute[259907]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  1 12:48:31 np0005464891 nova_compute[259907]: 2025-10-01 16:48:31.359 2 DEBUG nova.virt.libvirt.driver [None req-7788f015-514c-4245-973b-624c4da82bfd 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:48:31 np0005464891 nova_compute[259907]: 2025-10-01 16:48:31.359 2 DEBUG nova.virt.libvirt.driver [None req-7788f015-514c-4245-973b-624c4da82bfd 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:48:31 np0005464891 nova_compute[259907]: 2025-10-01 16:48:31.360 2 DEBUG nova.virt.libvirt.driver [None req-7788f015-514c-4245-973b-624c4da82bfd 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:48:31 np0005464891 nova_compute[259907]: 2025-10-01 16:48:31.360 2 DEBUG nova.virt.libvirt.driver [None req-7788f015-514c-4245-973b-624c4da82bfd 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] No VIF found with MAC fa:16:3e:d5:01:af, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:48:31 np0005464891 nova_compute[259907]: 2025-10-01 16:48:31.571 2 DEBUG nova.storage.rbd_utils [None req-6e771eda-a46c-4d98-bbba-9729de4bdb5c 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] creating snapshot(1c0b0bfdfe9f49c6ad1a07210e9cbba5) on rbd image(4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  1 12:48:31 np0005464891 nova_compute[259907]: 2025-10-01 16:48:31.654 2 DEBUG oslo_concurrency.lockutils [None req-7788f015-514c-4245-973b-624c4da82bfd 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.284s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:31 np0005464891 competent_cannon[280482]: {
Oct  1 12:48:31 np0005464891 competent_cannon[280482]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:48:31 np0005464891 competent_cannon[280482]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:48:31 np0005464891 competent_cannon[280482]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:48:31 np0005464891 competent_cannon[280482]:        "osd_id": 2,
Oct  1 12:48:31 np0005464891 competent_cannon[280482]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:48:31 np0005464891 competent_cannon[280482]:        "type": "bluestore"
Oct  1 12:48:31 np0005464891 competent_cannon[280482]:    },
Oct  1 12:48:31 np0005464891 competent_cannon[280482]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:48:31 np0005464891 competent_cannon[280482]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:48:31 np0005464891 competent_cannon[280482]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:48:31 np0005464891 competent_cannon[280482]:        "osd_id": 0,
Oct  1 12:48:31 np0005464891 competent_cannon[280482]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:48:31 np0005464891 competent_cannon[280482]:        "type": "bluestore"
Oct  1 12:48:31 np0005464891 competent_cannon[280482]:    },
Oct  1 12:48:31 np0005464891 competent_cannon[280482]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:48:31 np0005464891 competent_cannon[280482]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:48:31 np0005464891 competent_cannon[280482]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:48:31 np0005464891 competent_cannon[280482]:        "osd_id": 1,
Oct  1 12:48:31 np0005464891 competent_cannon[280482]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:48:31 np0005464891 competent_cannon[280482]:        "type": "bluestore"
Oct  1 12:48:31 np0005464891 competent_cannon[280482]:    }
Oct  1 12:48:31 np0005464891 competent_cannon[280482]: }
Oct  1 12:48:31 np0005464891 systemd[1]: libpod-e485291ca4931d37c8f605678715a0589bae3ecac03b7ad76e7da5085dc911e2.scope: Deactivated successfully.
Oct  1 12:48:31 np0005464891 systemd[1]: libpod-e485291ca4931d37c8f605678715a0589bae3ecac03b7ad76e7da5085dc911e2.scope: Consumed 1.076s CPU time.
Oct  1 12:48:31 np0005464891 podman[280465]: 2025-10-01 16:48:31.896843732 +0000 UTC m=+1.270694517 container died e485291ca4931d37c8f605678715a0589bae3ecac03b7ad76e7da5085dc911e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 12:48:31 np0005464891 systemd[1]: var-lib-containers-storage-overlay-dda381abeabcb6283bfcb36e7605f6e9a0d70168e89cea8ef79041cb9d83c89a-merged.mount: Deactivated successfully.
Oct  1 12:48:31 np0005464891 podman[280465]: 2025-10-01 16:48:31.951807396 +0000 UTC m=+1.325658191 container remove e485291ca4931d37c8f605678715a0589bae3ecac03b7ad76e7da5085dc911e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct  1 12:48:31 np0005464891 systemd[1]: libpod-conmon-e485291ca4931d37c8f605678715a0589bae3ecac03b7ad76e7da5085dc911e2.scope: Deactivated successfully.
Oct  1 12:48:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:48:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:48:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:48:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:48:32 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 520e46b0-7c6b-4cc1-8b1e-ee46473890e3 does not exist
Oct  1 12:48:32 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev fa5bfefc-c250-47a8-b98b-9f8aa006ecfd does not exist
Oct  1 12:48:32 np0005464891 nova_compute[259907]: 2025-10-01 16:48:32.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:32 np0005464891 podman[280650]: 2025-10-01 16:48:32.379982729 +0000 UTC m=+0.102232159 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct  1 12:48:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Oct  1 12:48:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Oct  1 12:48:32 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Oct  1 12:48:32 np0005464891 nova_compute[259907]: 2025-10-01 16:48:32.654 2 DEBUG nova.storage.rbd_utils [None req-6e771eda-a46c-4d98-bbba-9729de4bdb5c 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] cloning vms/4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83_disk@1c0b0bfdfe9f49c6ad1a07210e9cbba5 to images/e120b782-f4fe-48ea-9d54-439be8e800a6 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  1 12:48:32 np0005464891 nova_compute[259907]: 2025-10-01 16:48:32.755 2 DEBUG nova.storage.rbd_utils [None req-6e771eda-a46c-4d98-bbba-9729de4bdb5c 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] flattening images/e120b782-f4fe-48ea-9d54-439be8e800a6 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  1 12:48:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1224: 321 pgs: 321 active+clean; 295 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 412 KiB/s wr, 62 op/s
Oct  1 12:48:33 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:48:33 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:48:33 np0005464891 nova_compute[259907]: 2025-10-01 16:48:33.195 2 DEBUG nova.storage.rbd_utils [None req-6e771eda-a46c-4d98-bbba-9729de4bdb5c 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] removing snapshot(1c0b0bfdfe9f49c6ad1a07210e9cbba5) on rbd image(4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  1 12:48:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Oct  1 12:48:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Oct  1 12:48:34 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Oct  1 12:48:34 np0005464891 nova_compute[259907]: 2025-10-01 16:48:34.128 2 DEBUG nova.storage.rbd_utils [None req-6e771eda-a46c-4d98-bbba-9729de4bdb5c 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] creating snapshot(snap) on rbd image(e120b782-f4fe-48ea-9d54-439be8e800a6) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  1 12:48:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1226: 321 pgs: 321 active+clean; 313 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.3 MiB/s wr, 63 op/s
Oct  1 12:48:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Oct  1 12:48:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Oct  1 12:48:35 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Oct  1 12:48:35 np0005464891 nova_compute[259907]: 2025-10-01 16:48:35.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:48:35 np0005464891 nova_compute[259907]: 2025-10-01 16:48:35.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:35 np0005464891 podman[280764]: 2025-10-01 16:48:35.940765049 +0000 UTC m=+0.051199642 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.schema-version=1.0)
Oct  1 12:48:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Oct  1 12:48:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Oct  1 12:48:36 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Oct  1 12:48:36 np0005464891 nova_compute[259907]: 2025-10-01 16:48:36.497 2 INFO nova.virt.libvirt.driver [None req-6e771eda-a46c-4d98-bbba-9729de4bdb5c 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Snapshot image upload complete#033[00m
Oct  1 12:48:36 np0005464891 nova_compute[259907]: 2025-10-01 16:48:36.498 2 INFO nova.compute.manager [None req-6e771eda-a46c-4d98-bbba-9729de4bdb5c 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Took 5.80 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  1 12:48:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:48:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2920373259' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:48:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:48:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2920373259' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:48:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1229: 321 pgs: 321 active+clean; 350 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 9.2 MiB/s rd, 7.7 MiB/s wr, 179 op/s
Oct  1 12:48:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Oct  1 12:48:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Oct  1 12:48:37 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Oct  1 12:48:37 np0005464891 nova_compute[259907]: 2025-10-01 16:48:37.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:38 np0005464891 nova_compute[259907]: 2025-10-01 16:48:38.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1231: 321 pgs: 321 active+clean; 374 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 9.7 MiB/s rd, 9.6 MiB/s wr, 217 op/s
Oct  1 12:48:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Oct  1 12:48:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Oct  1 12:48:39 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Oct  1 12:48:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:48:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Oct  1 12:48:40 np0005464891 nova_compute[259907]: 2025-10-01 16:48:40.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Oct  1 12:48:40 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Oct  1 12:48:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1234: 321 pgs: 321 active+clean; 374 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.8 MiB/s wr, 178 op/s
Oct  1 12:48:40 np0005464891 nova_compute[259907]: 2025-10-01 16:48:40.924 2 DEBUG oslo_concurrency.lockutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquiring lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:40 np0005464891 nova_compute[259907]: 2025-10-01 16:48:40.925 2 DEBUG oslo_concurrency.lockutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:40 np0005464891 nova_compute[259907]: 2025-10-01 16:48:40.952 2 DEBUG nova.compute.manager [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 12:48:41 np0005464891 nova_compute[259907]: 2025-10-01 16:48:41.047 2 DEBUG oslo_concurrency.lockutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:41 np0005464891 nova_compute[259907]: 2025-10-01 16:48:41.048 2 DEBUG oslo_concurrency.lockutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:41 np0005464891 nova_compute[259907]: 2025-10-01 16:48:41.056 2 DEBUG nova.virt.hardware [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 12:48:41 np0005464891 nova_compute[259907]: 2025-10-01 16:48:41.057 2 INFO nova.compute.claims [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 12:48:41 np0005464891 nova_compute[259907]: 2025-10-01 16:48:41.213 2 DEBUG oslo_concurrency.processutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:48:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1241643152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:48:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Oct  1 12:48:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Oct  1 12:48:41 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Oct  1 12:48:41 np0005464891 nova_compute[259907]: 2025-10-01 16:48:41.729 2 DEBUG oslo_concurrency.processutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:41 np0005464891 nova_compute[259907]: 2025-10-01 16:48:41.740 2 DEBUG nova.compute.provider_tree [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:48:41 np0005464891 nova_compute[259907]: 2025-10-01 16:48:41.758 2 DEBUG nova.scheduler.client.report [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:48:41 np0005464891 nova_compute[259907]: 2025-10-01 16:48:41.779 2 DEBUG oslo_concurrency.lockutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:41 np0005464891 nova_compute[259907]: 2025-10-01 16:48:41.780 2 DEBUG nova.compute.manager [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 12:48:41 np0005464891 nova_compute[259907]: 2025-10-01 16:48:41.840 2 DEBUG nova.compute.manager [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 12:48:41 np0005464891 nova_compute[259907]: 2025-10-01 16:48:41.840 2 DEBUG nova.network.neutron [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 12:48:42 np0005464891 nova_compute[259907]: 2025-10-01 16:48:42.035 2 INFO nova.virt.libvirt.driver [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 12:48:42 np0005464891 nova_compute[259907]: 2025-10-01 16:48:42.053 2 DEBUG nova.compute.manager [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 12:48:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:48:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:48:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:48:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:48:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:48:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:48:42 np0005464891 nova_compute[259907]: 2025-10-01 16:48:42.148 2 DEBUG nova.policy [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0a821557545f49ad9c15eee1cf0bd82b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f395084b84f48d182c3be9d7961475e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 12:48:42 np0005464891 nova_compute[259907]: 2025-10-01 16:48:42.152 2 DEBUG nova.compute.manager [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 12:48:42 np0005464891 nova_compute[259907]: 2025-10-01 16:48:42.154 2 DEBUG nova.virt.libvirt.driver [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 12:48:42 np0005464891 nova_compute[259907]: 2025-10-01 16:48:42.154 2 INFO nova.virt.libvirt.driver [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Creating image(s)#033[00m
Oct  1 12:48:42 np0005464891 nova_compute[259907]: 2025-10-01 16:48:42.185 2 DEBUG nova.storage.rbd_utils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] rbd image b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:48:42 np0005464891 nova_compute[259907]: 2025-10-01 16:48:42.211 2 DEBUG nova.storage.rbd_utils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] rbd image b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:48:42 np0005464891 nova_compute[259907]: 2025-10-01 16:48:42.238 2 DEBUG nova.storage.rbd_utils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] rbd image b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:48:42 np0005464891 nova_compute[259907]: 2025-10-01 16:48:42.242 2 DEBUG oslo_concurrency.lockutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquiring lock "900d68f214b4c3bac1de95cf3c7cdce7bcb370f5" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:42 np0005464891 nova_compute[259907]: 2025-10-01 16:48:42.243 2 DEBUG oslo_concurrency.lockutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "900d68f214b4c3bac1de95cf3c7cdce7bcb370f5" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:42 np0005464891 nova_compute[259907]: 2025-10-01 16:48:42.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:42 np0005464891 nova_compute[259907]: 2025-10-01 16:48:42.463 2 DEBUG nova.virt.libvirt.imagebackend [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Image locations are: [{'url': 'rbd://6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5/images/e120b782-f4fe-48ea-9d54-439be8e800a6/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5/images/e120b782-f4fe-48ea-9d54-439be8e800a6/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  1 12:48:42 np0005464891 nova_compute[259907]: 2025-10-01 16:48:42.526 2 DEBUG nova.virt.libvirt.imagebackend [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Selected location: {'url': 'rbd://6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5/images/e120b782-f4fe-48ea-9d54-439be8e800a6/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct  1 12:48:42 np0005464891 nova_compute[259907]: 2025-10-01 16:48:42.527 2 DEBUG nova.storage.rbd_utils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] cloning images/e120b782-f4fe-48ea-9d54-439be8e800a6@snap to None/b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  1 12:48:42 np0005464891 nova_compute[259907]: 2025-10-01 16:48:42.626 2 DEBUG oslo_concurrency.lockutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "900d68f214b4c3bac1de95cf3c7cdce7bcb370f5" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.383s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:42 np0005464891 nova_compute[259907]: 2025-10-01 16:48:42.723 2 DEBUG nova.objects.instance [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lazy-loading 'migration_context' on Instance uuid b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:48:42 np0005464891 nova_compute[259907]: 2025-10-01 16:48:42.744 2 DEBUG nova.virt.libvirt.driver [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  1 12:48:42 np0005464891 nova_compute[259907]: 2025-10-01 16:48:42.745 2 DEBUG nova.virt.libvirt.driver [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Ensure instance console log exists: /var/lib/nova/instances/b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 12:48:42 np0005464891 nova_compute[259907]: 2025-10-01 16:48:42.745 2 DEBUG oslo_concurrency.lockutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:42 np0005464891 nova_compute[259907]: 2025-10-01 16:48:42.745 2 DEBUG oslo_concurrency.lockutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:42 np0005464891 nova_compute[259907]: 2025-10-01 16:48:42.746 2 DEBUG oslo_concurrency.lockutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:42 np0005464891 nova_compute[259907]: 2025-10-01 16:48:42.806 2 DEBUG nova.network.neutron [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Successfully created port: 5ef93ed9-65fa-4d0e-a510-20023ab7144f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 12:48:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1236: 321 pgs: 321 active+clean; 374 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.3 MiB/s wr, 191 op/s
Oct  1 12:48:43 np0005464891 nova_compute[259907]: 2025-10-01 16:48:43.432 2 DEBUG oslo_concurrency.lockutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquiring lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:43 np0005464891 nova_compute[259907]: 2025-10-01 16:48:43.432 2 DEBUG oslo_concurrency.lockutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:43 np0005464891 nova_compute[259907]: 2025-10-01 16:48:43.461 2 DEBUG nova.compute.manager [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 12:48:43 np0005464891 nova_compute[259907]: 2025-10-01 16:48:43.542 2 DEBUG oslo_concurrency.lockutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:43 np0005464891 nova_compute[259907]: 2025-10-01 16:48:43.543 2 DEBUG oslo_concurrency.lockutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:43 np0005464891 nova_compute[259907]: 2025-10-01 16:48:43.550 2 DEBUG nova.virt.hardware [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 12:48:43 np0005464891 nova_compute[259907]: 2025-10-01 16:48:43.551 2 INFO nova.compute.claims [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 12:48:43 np0005464891 nova_compute[259907]: 2025-10-01 16:48:43.570 2 DEBUG nova.network.neutron [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Successfully updated port: 5ef93ed9-65fa-4d0e-a510-20023ab7144f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 12:48:43 np0005464891 nova_compute[259907]: 2025-10-01 16:48:43.593 2 DEBUG oslo_concurrency.lockutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquiring lock "refresh_cache-b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:48:43 np0005464891 nova_compute[259907]: 2025-10-01 16:48:43.593 2 DEBUG oslo_concurrency.lockutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquired lock "refresh_cache-b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:48:43 np0005464891 nova_compute[259907]: 2025-10-01 16:48:43.594 2 DEBUG nova.network.neutron [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 12:48:43 np0005464891 nova_compute[259907]: 2025-10-01 16:48:43.692 2 DEBUG nova.compute.manager [req-42c8127b-536a-4514-af4a-82beff051031 req-7b8dd41a-807c-49c9-83ac-7aa38338055b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Received event network-changed-5ef93ed9-65fa-4d0e-a510-20023ab7144f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:48:43 np0005464891 nova_compute[259907]: 2025-10-01 16:48:43.693 2 DEBUG nova.compute.manager [req-42c8127b-536a-4514-af4a-82beff051031 req-7b8dd41a-807c-49c9-83ac-7aa38338055b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Refreshing instance network info cache due to event network-changed-5ef93ed9-65fa-4d0e-a510-20023ab7144f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:48:43 np0005464891 nova_compute[259907]: 2025-10-01 16:48:43.693 2 DEBUG oslo_concurrency.lockutils [req-42c8127b-536a-4514-af4a-82beff051031 req-7b8dd41a-807c-49c9-83ac-7aa38338055b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:48:43 np0005464891 nova_compute[259907]: 2025-10-01 16:48:43.704 2 DEBUG oslo_concurrency.processutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Oct  1 12:48:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Oct  1 12:48:43 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Oct  1 12:48:43 np0005464891 nova_compute[259907]: 2025-10-01 16:48:43.906 2 DEBUG nova.network.neutron [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 12:48:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:48:44 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/421106194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.234 2 DEBUG oslo_concurrency.processutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.241 2 DEBUG nova.compute.provider_tree [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.269 2 DEBUG nova.scheduler.client.report [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.288 2 DEBUG oslo_concurrency.lockutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.289 2 DEBUG nova.compute.manager [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.385 2 DEBUG nova.compute.manager [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.385 2 DEBUG nova.network.neutron [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.404 2 INFO nova.virt.libvirt.driver [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.424 2 DEBUG nova.compute.manager [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.505 2 DEBUG nova.compute.manager [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.506 2 DEBUG nova.virt.libvirt.driver [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.506 2 INFO nova.virt.libvirt.driver [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Creating image(s)
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.528 2 DEBUG nova.storage.rbd_utils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] rbd image 347eacbc-b9bd-4163-bc2e-a49a19a833c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.553 2 DEBUG nova.storage.rbd_utils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] rbd image 347eacbc-b9bd-4163-bc2e-a49a19a833c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.573 2 DEBUG nova.storage.rbd_utils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] rbd image 347eacbc-b9bd-4163-bc2e-a49a19a833c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.577 2 DEBUG oslo_concurrency.processutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.636 2 DEBUG oslo_concurrency.processutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.637 2 DEBUG oslo_concurrency.lockutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquiring lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.637 2 DEBUG oslo_concurrency.lockutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.637 2 DEBUG oslo_concurrency.lockutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.658 2 DEBUG nova.storage.rbd_utils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] rbd image 347eacbc-b9bd-4163-bc2e-a49a19a833c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.661 2 DEBUG oslo_concurrency.processutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa 347eacbc-b9bd-4163-bc2e-a49a19a833c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.686 2 DEBUG oslo_concurrency.lockutils [None req-c67d4279-43b1-495b-be2d-8b3b1cd328e7 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquiring lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.687 2 DEBUG oslo_concurrency.lockutils [None req-c67d4279-43b1-495b-be2d-8b3b1cd328e7 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.705 2 INFO nova.compute.manager [None req-c67d4279-43b1-495b-be2d-8b3b1cd328e7 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Detaching volume ce80aa53-4d70-4d07-9413-64fd0af6dd95
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.788 2 DEBUG nova.policy [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '825e1f460cae49ad9834c4d7d67e24fe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '19100b7dd5c9420db1d7f374559a9498', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.792 2 DEBUG oslo_concurrency.lockutils [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquiring lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.874 2 INFO nova.virt.block_device [None req-c67d4279-43b1-495b-be2d-8b3b1cd328e7 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Attempting to driver detach volume ce80aa53-4d70-4d07-9413-64fd0af6dd95 from mountpoint /dev/vdb
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.886 2 DEBUG nova.virt.libvirt.driver [None req-c67d4279-43b1-495b-be2d-8b3b1cd328e7 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Attempting to detach device vdb from instance 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.888 2 DEBUG nova.virt.libvirt.guest [None req-c67d4279-43b1-495b-be2d-8b3b1cd328e7 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:48:44 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:48:44 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-ce80aa53-4d70-4d07-9413-64fd0af6dd95">
Oct  1 12:48:44 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:48:44 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:48:44 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:48:44 np0005464891 nova_compute[259907]:  <serial>ce80aa53-4d70-4d07-9413-64fd0af6dd95</serial>
Oct  1 12:48:44 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:48:44 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:48:44 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct  1 12:48:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1238: 321 pgs: 321 active+clean; 374 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 11 KiB/s wr, 161 op/s
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.913 2 INFO nova.virt.libvirt.driver [None req-c67d4279-43b1-495b-be2d-8b3b1cd328e7 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Successfully detached device vdb from instance 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec from the persistent domain config.
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.914 2 DEBUG nova.virt.libvirt.driver [None req-c67d4279-43b1-495b-be2d-8b3b1cd328e7 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.915 2 DEBUG nova.virt.libvirt.guest [None req-c67d4279-43b1-495b-be2d-8b3b1cd328e7 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:48:44 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:48:44 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-ce80aa53-4d70-4d07-9413-64fd0af6dd95">
Oct  1 12:48:44 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:48:44 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:48:44 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:48:44 np0005464891 nova_compute[259907]:  <serial>ce80aa53-4d70-4d07-9413-64fd0af6dd95</serial>
Oct  1 12:48:44 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:48:44 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:48:44 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.941 2 DEBUG nova.network.neutron [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Updating instance_info_cache with network_info: [{"id": "5ef93ed9-65fa-4d0e-a510-20023ab7144f", "address": "fa:16:3e:73:57:c3", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ef93ed9-65", "ovs_interfaceid": "5ef93ed9-65fa-4d0e-a510-20023ab7144f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.958 2 DEBUG oslo_concurrency.lockutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Releasing lock "refresh_cache-b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.959 2 DEBUG nova.compute.manager [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Instance network_info: |[{"id": "5ef93ed9-65fa-4d0e-a510-20023ab7144f", "address": "fa:16:3e:73:57:c3", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ef93ed9-65", "ovs_interfaceid": "5ef93ed9-65fa-4d0e-a510-20023ab7144f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.959 2 DEBUG oslo_concurrency.lockutils [req-42c8127b-536a-4514-af4a-82beff051031 req-7b8dd41a-807c-49c9-83ac-7aa38338055b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.960 2 DEBUG nova.network.neutron [req-42c8127b-536a-4514-af4a-82beff051031 req-7b8dd41a-807c-49c9-83ac-7aa38338055b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Refreshing network info cache for port 5ef93ed9-65fa-4d0e-a510-20023ab7144f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.962 2 DEBUG nova.virt.libvirt.driver [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Start _get_guest_xml network_info=[{"id": "5ef93ed9-65fa-4d0e-a510-20023ab7144f", "address": "fa:16:3e:73:57:c3", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ef93ed9-65", "ovs_interfaceid": "5ef93ed9-65fa-4d0e-a510-20023ab7144f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-01T16:48:30Z,direct_url=<?>,disk_format='raw',id=e120b782-f4fe-48ea-9d54-439be8e800a6,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-1749777019',owner='1f395084b84f48d182c3be9d7961475e',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-01T16:48:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'image_id': 'e120b782-f4fe-48ea-9d54-439be8e800a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.967 2 WARNING nova.virt.libvirt.driver [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.971 2 DEBUG nova.virt.libvirt.host [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.972 2 DEBUG nova.virt.libvirt.host [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  1 12:48:44 np0005464891 nova_compute[259907]: 2025-10-01 16:48:44.974 2 DEBUG oslo_concurrency.processutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa 347eacbc-b9bd-4163-bc2e-a49a19a833c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.002 2 DEBUG nova.virt.libvirt.host [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.003 2 DEBUG nova.virt.libvirt.host [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.004 2 DEBUG nova.virt.libvirt.driver [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.004 2 DEBUG nova.virt.hardware [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-01T16:48:30Z,direct_url=<?>,disk_format='raw',id=e120b782-f4fe-48ea-9d54-439be8e800a6,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-1749777019',owner='1f395084b84f48d182c3be9d7961475e',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-01T16:48:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.005 2 DEBUG nova.virt.hardware [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.005 2 DEBUG nova.virt.hardware [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.005 2 DEBUG nova.virt.hardware [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.005 2 DEBUG nova.virt.hardware [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.006 2 DEBUG nova.virt.hardware [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.006 2 DEBUG nova.virt.hardware [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.006 2 DEBUG nova.virt.hardware [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.006 2 DEBUG nova.virt.hardware [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.007 2 DEBUG nova.virt.hardware [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.007 2 DEBUG nova.virt.hardware [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.012 2 DEBUG oslo_concurrency.processutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.040 2 DEBUG nova.virt.libvirt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Received event <DeviceRemovedEvent: 1759337325.0233855, 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.044 2 DEBUG nova.virt.libvirt.driver [None req-c67d4279-43b1-495b-be2d-8b3b1cd328e7 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.090 2 INFO nova.virt.libvirt.driver [None req-c67d4279-43b1-495b-be2d-8b3b1cd328e7 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Successfully detached device vdb from instance 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec from the live domain config.
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.096 2 DEBUG nova.storage.rbd_utils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] resizing rbd image 347eacbc-b9bd-4163-bc2e-a49a19a833c3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.211 2 DEBUG nova.objects.instance [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lazy-loading 'migration_context' on Instance uuid 347eacbc-b9bd-4163-bc2e-a49a19a833c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.228 2 DEBUG nova.virt.libvirt.driver [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.228 2 DEBUG nova.virt.libvirt.driver [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Ensure instance console log exists: /var/lib/nova/instances/347eacbc-b9bd-4163-bc2e-a49a19a833c3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.229 2 DEBUG oslo_concurrency.lockutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.229 2 DEBUG oslo_concurrency.lockutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.230 2 DEBUG oslo_concurrency.lockutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.280 2 DEBUG nova.objects.instance [None req-c67d4279-43b1-495b-be2d-8b3b1cd328e7 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lazy-loading 'flavor' on Instance uuid 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.316 2 DEBUG oslo_concurrency.lockutils [None req-c67d4279-43b1-495b-be2d-8b3b1cd328e7 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.318 2 DEBUG oslo_concurrency.lockutils [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.318 2 DEBUG oslo_concurrency.lockutils [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquiring lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.319 2 DEBUG oslo_concurrency.lockutils [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.319 2 DEBUG oslo_concurrency.lockutils [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.321 2 INFO nova.compute.manager [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Terminating instance#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.323 2 DEBUG nova.compute.manager [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 12:48:45 np0005464891 kernel: tap845fe902-04 (unregistering): left promiscuous mode
Oct  1 12:48:45 np0005464891 NetworkManager[44940]: <info>  [1759337325.3779] device (tap845fe902-04): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 12:48:45 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:45Z|00074|binding|INFO|Releasing lport 845fe902-041f-4c80-897c-0bc9525fbeaf from this chassis (sb_readonly=0)
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:45 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:45Z|00075|binding|INFO|Setting lport 845fe902-041f-4c80-897c-0bc9525fbeaf down in Southbound
Oct  1 12:48:45 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:45Z|00076|binding|INFO|Removing iface tap845fe902-04 ovn-installed in OVS
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:45 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:45.457 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:01:af 10.100.0.11'], port_security=['fa:16:3e:d5:01:af 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3401e30b-97c6-4012-a9d4-0114c56bacd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '69d5fb4f7a0b4337a1b8774e04c97b9a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f86ffdec-54c4-4f1d-8b56-111fa7d84206', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6048fd95-db94-4f1d-be7e-ff0b5269a1e3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=845fe902-041f-4c80-897c-0bc9525fbeaf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:48:45 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:45.458 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 845fe902-041f-4c80-897c-0bc9525fbeaf in datapath 3401e30b-97c6-4012-a9d4-0114c56bacd5 unbound from our chassis#033[00m
Oct  1 12:48:45 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:45.460 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3401e30b-97c6-4012-a9d4-0114c56bacd5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 12:48:45 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:45.462 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[40af58ac-c451-4f61-8b5b-1a6c2fb43aaf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:45 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:45.463 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5 namespace which is not needed anymore#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:48:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/322414138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:48:45 np0005464891 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct  1 12:48:45 np0005464891 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 14.727s CPU time.
Oct  1 12:48:45 np0005464891 systemd-machined[214891]: Machine qemu-7-instance-00000007 terminated.
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.529 2 DEBUG oslo_concurrency.processutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.564 2 DEBUG nova.storage.rbd_utils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] rbd image b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.569 2 DEBUG oslo_concurrency.processutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.599 2 DEBUG nova.network.neutron [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Successfully created port: c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.609 2 INFO nova.virt.libvirt.driver [-] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Instance destroyed successfully.#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.609 2 DEBUG nova.objects.instance [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lazy-loading 'resources' on Instance uuid 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:48:45 np0005464891 neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5[279557]: [NOTICE]   (279561) : haproxy version is 2.8.14-c23fe91
Oct  1 12:48:45 np0005464891 neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5[279557]: [NOTICE]   (279561) : path to executable is /usr/sbin/haproxy
Oct  1 12:48:45 np0005464891 neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5[279557]: [ALERT]    (279561) : Current worker (279563) exited with code 143 (Terminated)
Oct  1 12:48:45 np0005464891 neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5[279557]: [WARNING]  (279561) : All workers exited. Exiting... (0)
Oct  1 12:48:45 np0005464891 systemd[1]: libpod-64a57215c9274a26dfe28573cef36324d4859a5d349186ff9af5ff16877239ba.scope: Deactivated successfully.
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.625 2 DEBUG nova.virt.libvirt.vif [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T16:48:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-187997207',display_name='tempest-VolumesSnapshotTestJSON-instance-187997207',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-187997207',id=7,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIgJ5lnYrRG9Tvahx/0tSLtZSVgD2INhdzXfpcSBqcZL49XMXL/YBTjYN8RCIxGRhkQRdTJRtEGxUzu5k5Idy0y3T1S4/yeZcsyqD5M4i4aaygdIFOkEu6aldQewsVqs7w==',key_name='tempest-keypair-1059486929',keypairs=<?>,launch_index=0,launched_at=2025-10-01T16:48:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='69d5fb4f7a0b4337a1b8774e04c97b9a',ramdisk_id='',reservation_id='r-m9pan7uv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-1941074907',owner_user_name='tempest-VolumesSnapshotTestJSON-1941074907-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T16:48:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3517dc72472c436aaf2fe65b5ce2f240',uuid=7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "845fe902-041f-4c80-897c-0bc9525fbeaf", "address": "fa:16:3e:d5:01:af", "network": {"id": "3401e30b-97c6-4012-a9d4-0114c56bacd5", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1585052793-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69d5fb4f7a0b4337a1b8774e04c97b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap845fe902-04", "ovs_interfaceid": "845fe902-041f-4c80-897c-0bc9525fbeaf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.626 2 DEBUG nova.network.os_vif_util [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Converting VIF {"id": "845fe902-041f-4c80-897c-0bc9525fbeaf", "address": "fa:16:3e:d5:01:af", "network": {"id": "3401e30b-97c6-4012-a9d4-0114c56bacd5", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1585052793-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69d5fb4f7a0b4337a1b8774e04c97b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap845fe902-04", "ovs_interfaceid": "845fe902-041f-4c80-897c-0bc9525fbeaf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.626 2 DEBUG nova.network.os_vif_util [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d5:01:af,bridge_name='br-int',has_traffic_filtering=True,id=845fe902-041f-4c80-897c-0bc9525fbeaf,network=Network(3401e30b-97c6-4012-a9d4-0114c56bacd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap845fe902-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.627 2 DEBUG os_vif [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:01:af,bridge_name='br-int',has_traffic_filtering=True,id=845fe902-041f-4c80-897c-0bc9525fbeaf,network=Network(3401e30b-97c6-4012-a9d4-0114c56bacd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap845fe902-04') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.630 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap845fe902-04, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:45 np0005464891 podman[281248]: 2025-10-01 16:48:45.631009762 +0000 UTC m=+0.052954549 container died 64a57215c9274a26dfe28573cef36324d4859a5d349186ff9af5ff16877239ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.637 2 INFO os_vif [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:01:af,bridge_name='br-int',has_traffic_filtering=True,id=845fe902-041f-4c80-897c-0bc9525fbeaf,network=Network(3401e30b-97c6-4012-a9d4-0114c56bacd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap845fe902-04')#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:45 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-64a57215c9274a26dfe28573cef36324d4859a5d349186ff9af5ff16877239ba-userdata-shm.mount: Deactivated successfully.
Oct  1 12:48:45 np0005464891 systemd[1]: var-lib-containers-storage-overlay-cb678adcecee5bbc6402527ea1a8ebab3a2e00e3d33c8411b8876bc8501fd7fa-merged.mount: Deactivated successfully.
Oct  1 12:48:45 np0005464891 podman[281248]: 2025-10-01 16:48:45.683835429 +0000 UTC m=+0.105780216 container cleanup 64a57215c9274a26dfe28573cef36324d4859a5d349186ff9af5ff16877239ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:48:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:48:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Oct  1 12:48:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Oct  1 12:48:45 np0005464891 systemd[1]: libpod-conmon-64a57215c9274a26dfe28573cef36324d4859a5d349186ff9af5ff16877239ba.scope: Deactivated successfully.
Oct  1 12:48:45 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Oct  1 12:48:45 np0005464891 podman[281299]: 2025-10-01 16:48:45.76586096 +0000 UTC m=+0.059162062 container remove 64a57215c9274a26dfe28573cef36324d4859a5d349186ff9af5ff16877239ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.769 2 DEBUG nova.compute.manager [req-50fc0f58-d451-41cb-800e-785b49f5c3d4 req-9867c373-ce58-45b0-afcd-57dec329fba4 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Received event network-vif-unplugged-845fe902-041f-4c80-897c-0bc9525fbeaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.769 2 DEBUG oslo_concurrency.lockutils [req-50fc0f58-d451-41cb-800e-785b49f5c3d4 req-9867c373-ce58-45b0-afcd-57dec329fba4 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.770 2 DEBUG oslo_concurrency.lockutils [req-50fc0f58-d451-41cb-800e-785b49f5c3d4 req-9867c373-ce58-45b0-afcd-57dec329fba4 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.770 2 DEBUG oslo_concurrency.lockutils [req-50fc0f58-d451-41cb-800e-785b49f5c3d4 req-9867c373-ce58-45b0-afcd-57dec329fba4 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.770 2 DEBUG nova.compute.manager [req-50fc0f58-d451-41cb-800e-785b49f5c3d4 req-9867c373-ce58-45b0-afcd-57dec329fba4 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] No waiting events found dispatching network-vif-unplugged-845fe902-041f-4c80-897c-0bc9525fbeaf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.770 2 DEBUG nova.compute.manager [req-50fc0f58-d451-41cb-800e-785b49f5c3d4 req-9867c373-ce58-45b0-afcd-57dec329fba4 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Received event network-vif-unplugged-845fe902-041f-4c80-897c-0bc9525fbeaf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 12:48:45 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:45.771 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[97827022-7f86-4ea7-8fd7-5c4543f90e58]: (4, ('Wed Oct  1 04:48:45 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5 (64a57215c9274a26dfe28573cef36324d4859a5d349186ff9af5ff16877239ba)\n64a57215c9274a26dfe28573cef36324d4859a5d349186ff9af5ff16877239ba\nWed Oct  1 04:48:45 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5 (64a57215c9274a26dfe28573cef36324d4859a5d349186ff9af5ff16877239ba)\n64a57215c9274a26dfe28573cef36324d4859a5d349186ff9af5ff16877239ba\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:45 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:45.772 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5bc103-6393-4bfd-adc0-2fc0726ddec3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:45 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:45.773 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3401e30b-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:45 np0005464891 kernel: tap3401e30b-90: left promiscuous mode
Oct  1 12:48:45 np0005464891 nova_compute[259907]: 2025-10-01 16:48:45.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:45 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:45.800 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[8015bd50-3aab-4deb-889a-ece490152026]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:45 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:45.827 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[93f23d92-273d-4347-b8c4-77bffd68e80b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:45 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:45.830 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[ecbeec78-727c-4b6d-946b-588068cc431f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:45 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:45.855 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[b31a99ef-425b-42c3-a65b-65cb4162bc62]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423703, 'reachable_time': 27377, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281333, 'error': None, 'target': 'ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:45 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:45.859 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 12:48:45 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:45.859 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[9f65bc8c-328b-4df1-ae0f-6b4b476cc0e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:45 np0005464891 systemd[1]: run-netns-ovnmeta\x2d3401e30b\x2d97c6\x2d4012\x2da9d4\x2d0114c56bacd5.mount: Deactivated successfully.
Oct  1 12:48:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:48:46 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2048279091' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.039 2 DEBUG oslo_concurrency.processutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.041 2 DEBUG nova.virt.libvirt.vif [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:48:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-320628910',display_name='tempest-TestStampPattern-server-320628910',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-320628910',id=8,image_ref='e120b782-f4fe-48ea-9d54-439be8e800a6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCck7nxcoGk0qQMqmOkhPfker9ncjX3MedwZy1gvsVFGYBG7D5wvyJC+lFiT/6un7wQpds+bs1FRdVcdDnlHzQimOGzqeJBoWgRzI2+A/i117tgAu+tGkXiUBUgSD0X9yA==',key_name='tempest-TestStampPattern-1388282123',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f395084b84f48d182c3be9d7961475e',ramdisk_id='',reservation_id='r-ebcybhva',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83',image_min_disk='1',image_min_ram='0',image_owner_id='1f395084b84f48d182c3be9d7961475e',image_owner_project_name='tempest-TestStampPattern-305826503',image_owner_user_name='tempest-TestStampPattern-305826503-project-member',image_user_id='0a821557545f49ad9c15eee1cf0bd82b',network_allocated='True',owner_project_name='tempest-TestStampPattern-305826503',owner_user_name='tempest-TestStampPattern-305826503-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:48:42Z,user_data=None,user_id='0a821557545f49ad9c15eee1cf0bd82b',uuid=b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183,vcpu_mode
l=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ef93ed9-65fa-4d0e-a510-20023ab7144f", "address": "fa:16:3e:73:57:c3", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ef93ed9-65", "ovs_interfaceid": "5ef93ed9-65fa-4d0e-a510-20023ab7144f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.042 2 DEBUG nova.network.os_vif_util [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Converting VIF {"id": "5ef93ed9-65fa-4d0e-a510-20023ab7144f", "address": "fa:16:3e:73:57:c3", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ef93ed9-65", "ovs_interfaceid": "5ef93ed9-65fa-4d0e-a510-20023ab7144f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.043 2 DEBUG nova.network.os_vif_util [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:57:c3,bridge_name='br-int',has_traffic_filtering=True,id=5ef93ed9-65fa-4d0e-a510-20023ab7144f,network=Network(0b8d6144-4eec-41cd-aaa9-d3e718f03c5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ef93ed9-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.045 2 DEBUG nova.objects.instance [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lazy-loading 'pci_devices' on Instance uuid b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.073 2 DEBUG nova.virt.libvirt.driver [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] End _get_guest_xml xml=<domain type="kvm">
Oct  1 12:48:46 np0005464891 nova_compute[259907]:  <uuid>b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183</uuid>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:  <name>instance-00000008</name>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <nova:name>tempest-TestStampPattern-server-320628910</nova:name>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 16:48:44</nova:creationTime>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 12:48:46 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:        <nova:user uuid="0a821557545f49ad9c15eee1cf0bd82b">tempest-TestStampPattern-305826503-project-member</nova:user>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:        <nova:project uuid="1f395084b84f48d182c3be9d7961475e">tempest-TestStampPattern-305826503</nova:project>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <nova:root type="image" uuid="e120b782-f4fe-48ea-9d54-439be8e800a6"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:        <nova:port uuid="5ef93ed9-65fa-4d0e-a510-20023ab7144f">
Oct  1 12:48:46 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <system>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <entry name="serial">b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183</entry>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <entry name="uuid">b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183</entry>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    </system>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:  <os>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:  </clock>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183_disk">
Oct  1 12:48:46 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:48:46 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183_disk.config">
Oct  1 12:48:46 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:48:46 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:73:57:c3"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <target dev="tap5ef93ed9-65"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183/console.log" append="off"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    </serial>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <video>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <input type="keyboard" bus="usb"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 12:48:46 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 12:48:46 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:48:46 np0005464891 nova_compute[259907]: </domain>
Oct  1 12:48:46 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.074 2 DEBUG nova.compute.manager [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Preparing to wait for external event network-vif-plugged-5ef93ed9-65fa-4d0e-a510-20023ab7144f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.075 2 DEBUG oslo_concurrency.lockutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquiring lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.075 2 DEBUG oslo_concurrency.lockutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.075 2 DEBUG oslo_concurrency.lockutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.077 2 DEBUG nova.virt.libvirt.vif [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:48:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-320628910',display_name='tempest-TestStampPattern-server-320628910',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-320628910',id=8,image_ref='e120b782-f4fe-48ea-9d54-439be8e800a6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCck7nxcoGk0qQMqmOkhPfker9ncjX3MedwZy1gvsVFGYBG7D5wvyJC+lFiT/6un7wQpds+bs1FRdVcdDnlHzQimOGzqeJBoWgRzI2+A/i117tgAu+tGkXiUBUgSD0X9yA==',key_name='tempest-TestStampPattern-1388282123',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f395084b84f48d182c3be9d7961475e',ramdisk_id='',reservation_id='r-ebcybhva',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83',image_min_disk='1',image_min_ram='0',image_owner_id='1f395084b84f48d182c3be9d7961475e',image_owner_project_name='tempest-TestStampPattern-305826503',image_owner_user_name='tempest-TestStampPattern-305826503-project-member',image_user_id='0a821557545f49ad9c15eee1cf0bd82b',network_allocated='True',owner_project_name='tempest-TestStampPattern-305826503',owner_user_name='tempest-TestStampPattern-305826503-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:48:42Z,user_data=None,user_id='0a821557545f49ad9c15eee1cf0bd82b',uuid=b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183
,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ef93ed9-65fa-4d0e-a510-20023ab7144f", "address": "fa:16:3e:73:57:c3", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ef93ed9-65", "ovs_interfaceid": "5ef93ed9-65fa-4d0e-a510-20023ab7144f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.077 2 DEBUG nova.network.os_vif_util [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Converting VIF {"id": "5ef93ed9-65fa-4d0e-a510-20023ab7144f", "address": "fa:16:3e:73:57:c3", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ef93ed9-65", "ovs_interfaceid": "5ef93ed9-65fa-4d0e-a510-20023ab7144f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.078 2 DEBUG nova.network.os_vif_util [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:57:c3,bridge_name='br-int',has_traffic_filtering=True,id=5ef93ed9-65fa-4d0e-a510-20023ab7144f,network=Network(0b8d6144-4eec-41cd-aaa9-d3e718f03c5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ef93ed9-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.079 2 DEBUG os_vif [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:57:c3,bridge_name='br-int',has_traffic_filtering=True,id=5ef93ed9-65fa-4d0e-a510-20023ab7144f,network=Network(0b8d6144-4eec-41cd-aaa9-d3e718f03c5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ef93ed9-65') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.080 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.081 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.085 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ef93ed9-65, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.086 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5ef93ed9-65, col_values=(('external_ids', {'iface-id': '5ef93ed9-65fa-4d0e-a510-20023ab7144f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:57:c3', 'vm-uuid': 'b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:46 np0005464891 NetworkManager[44940]: <info>  [1759337326.0891] manager: (tap5ef93ed9-65): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.094 2 INFO os_vif [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:57:c3,bridge_name='br-int',has_traffic_filtering=True,id=5ef93ed9-65fa-4d0e-a510-20023ab7144f,network=Network(0b8d6144-4eec-41cd-aaa9-d3e718f03c5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ef93ed9-65')#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.233 2 DEBUG nova.virt.libvirt.driver [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.234 2 DEBUG nova.virt.libvirt.driver [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.234 2 DEBUG nova.virt.libvirt.driver [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] No VIF found with MAC fa:16:3e:73:57:c3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.235 2 INFO nova.virt.libvirt.driver [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Using config drive#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.261 2 DEBUG nova.storage.rbd_utils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] rbd image b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.360 2 INFO nova.virt.libvirt.driver [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Deleting instance files /var/lib/nova/instances/7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec_del#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.361 2 INFO nova.virt.libvirt.driver [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Deletion of /var/lib/nova/instances/7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec_del complete#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.425 2 INFO nova.compute.manager [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Took 1.10 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.426 2 DEBUG oslo.service.loopingcall [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.427 2 DEBUG nova.compute.manager [-] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.427 2 DEBUG nova.network.neutron [-] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.583 2 DEBUG nova.network.neutron [req-42c8127b-536a-4514-af4a-82beff051031 req-7b8dd41a-807c-49c9-83ac-7aa38338055b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Updated VIF entry in instance network info cache for port 5ef93ed9-65fa-4d0e-a510-20023ab7144f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.584 2 DEBUG nova.network.neutron [req-42c8127b-536a-4514-af4a-82beff051031 req-7b8dd41a-807c-49c9-83ac-7aa38338055b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Updating instance_info_cache with network_info: [{"id": "5ef93ed9-65fa-4d0e-a510-20023ab7144f", "address": "fa:16:3e:73:57:c3", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ef93ed9-65", "ovs_interfaceid": "5ef93ed9-65fa-4d0e-a510-20023ab7144f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.599 2 DEBUG oslo_concurrency.lockutils [req-42c8127b-536a-4514-af4a-82beff051031 req-7b8dd41a-807c-49c9-83ac-7aa38338055b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.666 2 INFO nova.virt.libvirt.driver [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Creating config drive at /var/lib/nova/instances/b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183/disk.config#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.670 2 DEBUG oslo_concurrency.processutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv9cmf2kk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.692 2 DEBUG nova.network.neutron [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Successfully updated port: c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.711 2 DEBUG oslo_concurrency.lockutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquiring lock "refresh_cache-347eacbc-b9bd-4163-bc2e-a49a19a833c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.711 2 DEBUG oslo_concurrency.lockutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquired lock "refresh_cache-347eacbc-b9bd-4163-bc2e-a49a19a833c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.711 2 DEBUG nova.network.neutron [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.798 2 DEBUG oslo_concurrency.processutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv9cmf2kk" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.818 2 DEBUG nova.storage.rbd_utils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] rbd image b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.822 2 DEBUG oslo_concurrency.processutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183/disk.config b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1240: 321 pgs: 321 active+clean; 387 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 1.0 MiB/s wr, 111 op/s
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.978 2 DEBUG oslo_concurrency.processutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183/disk.config b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.979 2 INFO nova.virt.libvirt.driver [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Deleting local config drive /var/lib/nova/instances/b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183/disk.config because it was imported into RBD.#033[00m
Oct  1 12:48:46 np0005464891 nova_compute[259907]: 2025-10-01 16:48:46.982 2 DEBUG nova.network.neutron [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 12:48:47 np0005464891 kernel: tap5ef93ed9-65: entered promiscuous mode
Oct  1 12:48:47 np0005464891 systemd-udevd[281196]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:48:47 np0005464891 NetworkManager[44940]: <info>  [1759337327.0392] manager: (tap5ef93ed9-65): new Tun device (/org/freedesktop/NetworkManager/Devices/54)
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:47 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:47Z|00077|binding|INFO|Claiming lport 5ef93ed9-65fa-4d0e-a510-20023ab7144f for this chassis.
Oct  1 12:48:47 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:47Z|00078|binding|INFO|5ef93ed9-65fa-4d0e-a510-20023ab7144f: Claiming fa:16:3e:73:57:c3 10.100.0.5
Oct  1 12:48:47 np0005464891 NetworkManager[44940]: <info>  [1759337327.0530] device (tap5ef93ed9-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:48:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:47.051 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:57:c3 10.100.0.5'], port_security=['fa:16:3e:73:57:c3 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f395084b84f48d182c3be9d7961475e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a473cde3-a378-4504-81c4-9d8fada1bc14', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a03153c4-51cb-49a4-a16a-ed6a97c8c003, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=5ef93ed9-65fa-4d0e-a510-20023ab7144f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:48:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:47.052 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 5ef93ed9-65fa-4d0e-a510-20023ab7144f in datapath 0b8d6144-4eec-41cd-aaa9-d3e718f03c5c bound to our chassis#033[00m
Oct  1 12:48:47 np0005464891 NetworkManager[44940]: <info>  [1759337327.0548] device (tap5ef93ed9-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 12:48:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:47.054 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0b8d6144-4eec-41cd-aaa9-d3e718f03c5c#033[00m
Oct  1 12:48:47 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:47Z|00079|binding|INFO|Setting lport 5ef93ed9-65fa-4d0e-a510-20023ab7144f ovn-installed in OVS
Oct  1 12:48:47 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:47Z|00080|binding|INFO|Setting lport 5ef93ed9-65fa-4d0e-a510-20023ab7144f up in Southbound
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:47.070 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[695155fc-7ae0-458f-8da9-2d4214737d28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:47 np0005464891 systemd-machined[214891]: New machine qemu-8-instance-00000008.
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:47 np0005464891 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Oct  1 12:48:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:47.105 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[7617c942-3d55-4184-a9ac-a008642295a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:47.108 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[44ff33ee-27ed-4734-855c-ed4973e1b37a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:47.138 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[ba21bd6d-81f5-437a-89c7-59cd9e912af8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:47.153 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[ce7e733a-f368-4e8a-af92-11f4a656a743]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0b8d6144-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:55:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423083, 'reachable_time': 37210, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281420, 'error': None, 'target': 'ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:47.167 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[83d5412d-dc8e-46a0-817a-ef9b38d93840]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0b8d6144-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 423100, 'tstamp': 423100}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281422, 'error': None, 'target': 'ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0b8d6144-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 423105, 'tstamp': 423105}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281422, 'error': None, 'target': 'ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:47.169 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b8d6144-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:47.171 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b8d6144-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:47.171 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:48:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:47.172 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0b8d6144-40, col_values=(('external_ids', {'iface-id': 'c2ef6608-b2db-40dc-8fde-a94b501b7f75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:47.172 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.360 2 DEBUG nova.compute.manager [req-34da1923-706c-47ba-a976-9fa2fa6dca91 req-b6f80e78-207b-4927-b4f8-953d121ada6e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Received event network-vif-plugged-5ef93ed9-65fa-4d0e-a510-20023ab7144f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.361 2 DEBUG oslo_concurrency.lockutils [req-34da1923-706c-47ba-a976-9fa2fa6dca91 req-b6f80e78-207b-4927-b4f8-953d121ada6e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.361 2 DEBUG oslo_concurrency.lockutils [req-34da1923-706c-47ba-a976-9fa2fa6dca91 req-b6f80e78-207b-4927-b4f8-953d121ada6e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.361 2 DEBUG oslo_concurrency.lockutils [req-34da1923-706c-47ba-a976-9fa2fa6dca91 req-b6f80e78-207b-4927-b4f8-953d121ada6e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.361 2 DEBUG nova.compute.manager [req-34da1923-706c-47ba-a976-9fa2fa6dca91 req-b6f80e78-207b-4927-b4f8-953d121ada6e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Processing event network-vif-plugged-5ef93ed9-65fa-4d0e-a510-20023ab7144f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.742 2 DEBUG nova.network.neutron [-] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.759 2 INFO nova.compute.manager [-] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Took 1.33 seconds to deallocate network for instance.#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.792 2 DEBUG nova.network.neutron [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Updating instance_info_cache with network_info: [{"id": "c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd", "address": "fa:16:3e:03:7f:de", "network": {"id": "9217a609-3f35-4647-87cd-e08d95dd1da1", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-994008652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19100b7dd5c9420db1d7f374559a9498", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8ba4648-dd", "ovs_interfaceid": "c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.815 2 DEBUG oslo_concurrency.lockutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Releasing lock "refresh_cache-347eacbc-b9bd-4163-bc2e-a49a19a833c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.816 2 DEBUG nova.compute.manager [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Instance network_info: |[{"id": "c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd", "address": "fa:16:3e:03:7f:de", "network": {"id": "9217a609-3f35-4647-87cd-e08d95dd1da1", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-994008652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19100b7dd5c9420db1d7f374559a9498", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8ba4648-dd", "ovs_interfaceid": "c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.818 2 DEBUG nova.virt.libvirt.driver [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Start _get_guest_xml network_info=[{"id": "c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd", "address": "fa:16:3e:03:7f:de", "network": {"id": "9217a609-3f35-4647-87cd-e08d95dd1da1", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-994008652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19100b7dd5c9420db1d7f374559a9498", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8ba4648-dd", "ovs_interfaceid": "c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'image_id': 'f01c1e7c-fea3-4433-a44a-d71153552c78'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.822 2 WARNING nova.virt.libvirt.driver [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.828 2 DEBUG nova.virt.libvirt.host [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.829 2 DEBUG nova.virt.libvirt.host [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.832 2 DEBUG nova.virt.libvirt.host [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.833 2 DEBUG nova.virt.libvirt.host [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.833 2 DEBUG nova.virt.libvirt.driver [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.833 2 DEBUG nova.virt.hardware [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.834 2 DEBUG nova.virt.hardware [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.834 2 DEBUG nova.virt.hardware [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.834 2 DEBUG nova.virt.hardware [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.834 2 DEBUG nova.virt.hardware [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.834 2 DEBUG nova.virt.hardware [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.835 2 DEBUG nova.virt.hardware [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.835 2 DEBUG nova.virt.hardware [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.835 2 DEBUG nova.virt.hardware [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.835 2 DEBUG nova.virt.hardware [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.835 2 DEBUG nova.virt.hardware [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.838 2 DEBUG oslo_concurrency.processutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.871 2 DEBUG nova.compute.manager [req-68dd1d3d-3a98-43db-bd5d-fb6644ba3f95 req-ff6ef5a3-9b91-4b1c-8b46-e4c307bed875 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Received event network-vif-plugged-845fe902-041f-4c80-897c-0bc9525fbeaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.872 2 DEBUG oslo_concurrency.lockutils [req-68dd1d3d-3a98-43db-bd5d-fb6644ba3f95 req-ff6ef5a3-9b91-4b1c-8b46-e4c307bed875 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.872 2 DEBUG oslo_concurrency.lockutils [req-68dd1d3d-3a98-43db-bd5d-fb6644ba3f95 req-ff6ef5a3-9b91-4b1c-8b46-e4c307bed875 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.873 2 DEBUG oslo_concurrency.lockutils [req-68dd1d3d-3a98-43db-bd5d-fb6644ba3f95 req-ff6ef5a3-9b91-4b1c-8b46-e4c307bed875 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.873 2 DEBUG nova.compute.manager [req-68dd1d3d-3a98-43db-bd5d-fb6644ba3f95 req-ff6ef5a3-9b91-4b1c-8b46-e4c307bed875 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] No waiting events found dispatching network-vif-plugged-845fe902-041f-4c80-897c-0bc9525fbeaf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.873 2 WARNING nova.compute.manager [req-68dd1d3d-3a98-43db-bd5d-fb6644ba3f95 req-ff6ef5a3-9b91-4b1c-8b46-e4c307bed875 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Received unexpected event network-vif-plugged-845fe902-041f-4c80-897c-0bc9525fbeaf for instance with vm_state active and task_state deleting.#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.873 2 DEBUG nova.compute.manager [req-68dd1d3d-3a98-43db-bd5d-fb6644ba3f95 req-ff6ef5a3-9b91-4b1c-8b46-e4c307bed875 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Received event network-changed-c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.873 2 DEBUG nova.compute.manager [req-68dd1d3d-3a98-43db-bd5d-fb6644ba3f95 req-ff6ef5a3-9b91-4b1c-8b46-e4c307bed875 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Refreshing instance network info cache due to event network-changed-c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.874 2 DEBUG oslo_concurrency.lockutils [req-68dd1d3d-3a98-43db-bd5d-fb6644ba3f95 req-ff6ef5a3-9b91-4b1c-8b46-e4c307bed875 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-347eacbc-b9bd-4163-bc2e-a49a19a833c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.874 2 DEBUG oslo_concurrency.lockutils [req-68dd1d3d-3a98-43db-bd5d-fb6644ba3f95 req-ff6ef5a3-9b91-4b1c-8b46-e4c307bed875 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-347eacbc-b9bd-4163-bc2e-a49a19a833c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.874 2 DEBUG nova.network.neutron [req-68dd1d3d-3a98-43db-bd5d-fb6644ba3f95 req-ff6ef5a3-9b91-4b1c-8b46-e4c307bed875 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Refreshing network info cache for port c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.897 2 WARNING nova.volume.cinder [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Attachment a5a0ce8f-98d4-404b-bacb-f46a47c51674 does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = a5a0ce8f-98d4-404b-bacb-f46a47c51674. (HTTP 404) (Request-ID: req-f86de7bb-4b17-466a-8ccd-17271ba61eea)#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.897 2 INFO nova.compute.manager [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Took 0.14 seconds to detach 1 volumes for instance.#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.945 2 DEBUG oslo_concurrency.lockutils [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:47 np0005464891 nova_compute[259907]: 2025-10-01 16:48:47.945 2 DEBUG oslo_concurrency.lockutils [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.028 2 DEBUG oslo_concurrency.processutils [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:48:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/913925652' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.323 2 DEBUG oslo_concurrency.processutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.348 2 DEBUG nova.storage.rbd_utils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] rbd image 347eacbc-b9bd-4163-bc2e-a49a19a833c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.353 2 DEBUG oslo_concurrency.processutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:48:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2035024932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.461 2 DEBUG oslo_concurrency.processutils [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.469 2 DEBUG nova.compute.provider_tree [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.484 2 DEBUG nova.scheduler.client.report [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.505 2 DEBUG oslo_concurrency.lockutils [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.527 2 INFO nova.scheduler.client.report [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Deleted allocations for instance 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.582 2 DEBUG oslo_concurrency.lockutils [None req-d9562480-7568-42e9-9d69-1d213a43f1df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.264s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.584 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337328.583791, b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.584 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] VM Started (Lifecycle Event)#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.585 2 DEBUG nova.compute.manager [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.591 2 DEBUG nova.virt.libvirt.driver [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.607 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.609 2 INFO nova.virt.libvirt.driver [-] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Instance spawned successfully.#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.609 2 INFO nova.compute.manager [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Took 6.46 seconds to spawn the instance on the hypervisor.#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.610 2 DEBUG nova.compute.manager [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.614 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.644 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.645 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337328.584598, b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.645 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] VM Paused (Lifecycle Event)#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.668 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.672 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337328.589107, b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.673 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] VM Resumed (Lifecycle Event)#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.681 2 INFO nova.compute.manager [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Took 7.66 seconds to build instance.#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.694 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.698 2 DEBUG oslo_concurrency.lockutils [None req-2ce24b75-60ff-4869-adff-0e4d408a52f9 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.699 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:48:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:48:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/803994911' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.780 2 DEBUG nova.network.neutron [req-68dd1d3d-3a98-43db-bd5d-fb6644ba3f95 req-ff6ef5a3-9b91-4b1c-8b46-e4c307bed875 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Updated VIF entry in instance network info cache for port c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.781 2 DEBUG nova.network.neutron [req-68dd1d3d-3a98-43db-bd5d-fb6644ba3f95 req-ff6ef5a3-9b91-4b1c-8b46-e4c307bed875 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Updating instance_info_cache with network_info: [{"id": "c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd", "address": "fa:16:3e:03:7f:de", "network": {"id": "9217a609-3f35-4647-87cd-e08d95dd1da1", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-994008652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19100b7dd5c9420db1d7f374559a9498", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8ba4648-dd", "ovs_interfaceid": "c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.791 2 DEBUG oslo_concurrency.processutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.794 2 DEBUG nova.virt.libvirt.vif [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:48:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1331818960',display_name='tempest-VolumesBackupsTest-instance-1331818960',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1331818960',id=9,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAPQQVojhJmPeHreHOpE1NwY/4UtWWRv4SAlPgzy1F3CWqazhpKL2xf3seWaZRvPBIAZ0VzPUhxEc+sHYdkt+pa1HEVHjjWGMkeJSuJsMOCEUFX/xndVtuOCLh6+Rpgh6w==',key_name='tempest-keypair-405709585',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='19100b7dd5c9420db1d7f374559a9498',ramdisk_id='',reservation_id='r-sijc9bji',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1599024574',owner_user_name='tempest-VolumesBackupsTest-1599024574-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:48:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='825e1f460cae49ad9834c4d7d67e24fe',uuid=347eacbc-b9bd-4163-bc2e-a49a19a833c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd", "address": "fa:16:3e:03:7f:de", "network": {"id": "9217a609-3f35-4647-87cd-e08d95dd1da1", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-994008652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19100b7dd5c9420db1d7f374559a9498", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8ba4648-dd", "ovs_interfaceid": "c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.794 2 DEBUG nova.network.os_vif_util [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Converting VIF {"id": "c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd", "address": "fa:16:3e:03:7f:de", "network": {"id": "9217a609-3f35-4647-87cd-e08d95dd1da1", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-994008652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19100b7dd5c9420db1d7f374559a9498", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8ba4648-dd", "ovs_interfaceid": "c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.795 2 DEBUG nova.network.os_vif_util [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:7f:de,bridge_name='br-int',has_traffic_filtering=True,id=c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd,network=Network(9217a609-3f35-4647-87cd-e08d95dd1da1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8ba4648-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.796 2 DEBUG nova.objects.instance [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lazy-loading 'pci_devices' on Instance uuid 347eacbc-b9bd-4163-bc2e-a49a19a833c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.799 2 DEBUG oslo_concurrency.lockutils [req-68dd1d3d-3a98-43db-bd5d-fb6644ba3f95 req-ff6ef5a3-9b91-4b1c-8b46-e4c307bed875 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-347eacbc-b9bd-4163-bc2e-a49a19a833c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.814 2 DEBUG nova.virt.libvirt.driver [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] End _get_guest_xml xml=<domain type="kvm">
Oct  1 12:48:48 np0005464891 nova_compute[259907]:  <uuid>347eacbc-b9bd-4163-bc2e-a49a19a833c3</uuid>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:  <name>instance-00000009</name>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <nova:name>tempest-VolumesBackupsTest-instance-1331818960</nova:name>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 16:48:47</nova:creationTime>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 12:48:48 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:        <nova:user uuid="825e1f460cae49ad9834c4d7d67e24fe">tempest-VolumesBackupsTest-1599024574-project-member</nova:user>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:        <nova:project uuid="19100b7dd5c9420db1d7f374559a9498">tempest-VolumesBackupsTest-1599024574</nova:project>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <nova:root type="image" uuid="f01c1e7c-fea3-4433-a44a-d71153552c78"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:        <nova:port uuid="c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd">
Oct  1 12:48:48 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <system>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <entry name="serial">347eacbc-b9bd-4163-bc2e-a49a19a833c3</entry>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <entry name="uuid">347eacbc-b9bd-4163-bc2e-a49a19a833c3</entry>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    </system>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:  <os>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:  </clock>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/347eacbc-b9bd-4163-bc2e-a49a19a833c3_disk">
Oct  1 12:48:48 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:48:48 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/347eacbc-b9bd-4163-bc2e-a49a19a833c3_disk.config">
Oct  1 12:48:48 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:48:48 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:03:7f:de"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <target dev="tapc8ba4648-dd"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/347eacbc-b9bd-4163-bc2e-a49a19a833c3/console.log" append="off"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    </serial>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <video>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 12:48:48 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 12:48:48 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:48:48 np0005464891 nova_compute[259907]: </domain>
Oct  1 12:48:48 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.820 2 DEBUG nova.compute.manager [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Preparing to wait for external event network-vif-plugged-c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.820 2 DEBUG oslo_concurrency.lockutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquiring lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.821 2 DEBUG oslo_concurrency.lockutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.821 2 DEBUG oslo_concurrency.lockutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.822 2 DEBUG nova.virt.libvirt.vif [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:48:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1331818960',display_name='tempest-VolumesBackupsTest-instance-1331818960',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1331818960',id=9,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAPQQVojhJmPeHreHOpE1NwY/4UtWWRv4SAlPgzy1F3CWqazhpKL2xf3seWaZRvPBIAZ0VzPUhxEc+sHYdkt+pa1HEVHjjWGMkeJSuJsMOCEUFX/xndVtuOCLh6+Rpgh6w==',key_name='tempest-keypair-405709585',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='19100b7dd5c9420db1d7f374559a9498',ramdisk_id='',reservation_id='r-sijc9bji',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1599024574',owner_user_name='tempest-VolumesBackupsTest-1599024574-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:48:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='825e1f460cae49ad9834c4d7d67e24fe',uuid=347eacbc-b9bd-4163-bc2e-a49a19a833c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd", "address": "fa:16:3e:03:7f:de", "network": {"id": "9217a609-3f35-4647-87cd-e08d95dd1da1", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-994008652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19100b7dd5c9420db1d7f374559a9498", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8ba4648-dd", "ovs_interfaceid": "c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.822 2 DEBUG nova.network.os_vif_util [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Converting VIF {"id": "c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd", "address": "fa:16:3e:03:7f:de", "network": {"id": "9217a609-3f35-4647-87cd-e08d95dd1da1", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-994008652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19100b7dd5c9420db1d7f374559a9498", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8ba4648-dd", "ovs_interfaceid": "c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.823 2 DEBUG nova.network.os_vif_util [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:7f:de,bridge_name='br-int',has_traffic_filtering=True,id=c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd,network=Network(9217a609-3f35-4647-87cd-e08d95dd1da1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8ba4648-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.824 2 DEBUG os_vif [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:7f:de,bridge_name='br-int',has_traffic_filtering=True,id=c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd,network=Network(9217a609-3f35-4647-87cd-e08d95dd1da1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8ba4648-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.825 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.826 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.828 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8ba4648-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.829 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc8ba4648-dd, col_values=(('external_ids', {'iface-id': 'c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:7f:de', 'vm-uuid': '347eacbc-b9bd-4163-bc2e-a49a19a833c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:48 np0005464891 NetworkManager[44940]: <info>  [1759337328.8792] manager: (tapc8ba4648-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.885 2 INFO os_vif [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:7f:de,bridge_name='br-int',has_traffic_filtering=True,id=c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd,network=Network(9217a609-3f35-4647-87cd-e08d95dd1da1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8ba4648-dd')#033[00m
Oct  1 12:48:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 387 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 1.8 MiB/s wr, 145 op/s
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.930 2 DEBUG nova.virt.libvirt.driver [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.930 2 DEBUG nova.virt.libvirt.driver [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.930 2 DEBUG nova.virt.libvirt.driver [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] No VIF found with MAC fa:16:3e:03:7f:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.931 2 INFO nova.virt.libvirt.driver [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Using config drive#033[00m
Oct  1 12:48:48 np0005464891 nova_compute[259907]: 2025-10-01 16:48:48.952 2 DEBUG nova.storage.rbd_utils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] rbd image 347eacbc-b9bd-4163-bc2e-a49a19a833c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.211 2 INFO nova.virt.libvirt.driver [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Creating config drive at /var/lib/nova/instances/347eacbc-b9bd-4163-bc2e-a49a19a833c3/disk.config#033[00m
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.222 2 DEBUG oslo_concurrency.processutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/347eacbc-b9bd-4163-bc2e-a49a19a833c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpru0zgjdq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.355 2 DEBUG oslo_concurrency.processutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/347eacbc-b9bd-4163-bc2e-a49a19a833c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpru0zgjdq" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.372 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.373 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.388 2 DEBUG nova.storage.rbd_utils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] rbd image 347eacbc-b9bd-4163-bc2e-a49a19a833c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.392 2 DEBUG oslo_concurrency.processutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/347eacbc-b9bd-4163-bc2e-a49a19a833c3/disk.config 347eacbc-b9bd-4163-bc2e-a49a19a833c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.450 2 DEBUG nova.compute.manager [req-37daecae-c432-48c2-baa1-f6a9dc9e7e3a req-70bff4ca-6df5-42a7-a884-ce597cdbd3d8 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Received event network-vif-plugged-5ef93ed9-65fa-4d0e-a510-20023ab7144f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.451 2 DEBUG oslo_concurrency.lockutils [req-37daecae-c432-48c2-baa1-f6a9dc9e7e3a req-70bff4ca-6df5-42a7-a884-ce597cdbd3d8 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.451 2 DEBUG oslo_concurrency.lockutils [req-37daecae-c432-48c2-baa1-f6a9dc9e7e3a req-70bff4ca-6df5-42a7-a884-ce597cdbd3d8 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.451 2 DEBUG oslo_concurrency.lockutils [req-37daecae-c432-48c2-baa1-f6a9dc9e7e3a req-70bff4ca-6df5-42a7-a884-ce597cdbd3d8 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.452 2 DEBUG nova.compute.manager [req-37daecae-c432-48c2-baa1-f6a9dc9e7e3a req-70bff4ca-6df5-42a7-a884-ce597cdbd3d8 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] No waiting events found dispatching network-vif-plugged-5ef93ed9-65fa-4d0e-a510-20023ab7144f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.452 2 WARNING nova.compute.manager [req-37daecae-c432-48c2-baa1-f6a9dc9e7e3a req-70bff4ca-6df5-42a7-a884-ce597cdbd3d8 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Received unexpected event network-vif-plugged-5ef93ed9-65fa-4d0e-a510-20023ab7144f for instance with vm_state active and task_state None.#033[00m
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.452 2 DEBUG nova.compute.manager [req-37daecae-c432-48c2-baa1-f6a9dc9e7e3a req-70bff4ca-6df5-42a7-a884-ce597cdbd3d8 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Received event network-vif-deleted-845fe902-041f-4c80-897c-0bc9525fbeaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.535 2 DEBUG oslo_concurrency.processutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/347eacbc-b9bd-4163-bc2e-a49a19a833c3/disk.config 347eacbc-b9bd-4163-bc2e-a49a19a833c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.536 2 INFO nova.virt.libvirt.driver [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Deleting local config drive /var/lib/nova/instances/347eacbc-b9bd-4163-bc2e-a49a19a833c3/disk.config because it was imported into RBD.#033[00m
Oct  1 12:48:49 np0005464891 kernel: tapc8ba4648-dd: entered promiscuous mode
Oct  1 12:48:49 np0005464891 NetworkManager[44940]: <info>  [1759337329.5773] manager: (tapc8ba4648-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:49 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:49Z|00081|binding|INFO|Claiming lport c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd for this chassis.
Oct  1 12:48:49 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:49Z|00082|binding|INFO|c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd: Claiming fa:16:3e:03:7f:de 10.100.0.13
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.587 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:7f:de 10.100.0.13'], port_security=['fa:16:3e:03:7f:de 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '347eacbc-b9bd-4163-bc2e-a49a19a833c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9217a609-3f35-4647-87cd-e08d95dd1da1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19100b7dd5c9420db1d7f374559a9498', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2031271b-1002-4f48-9596-46016bfe5629', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3460a047-44ee-4ad2-938a-c15de55876d0, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.589 162546 INFO neutron.agent.ovn.metadata.agent [-] Port c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd in datapath 9217a609-3f35-4647-87cd-e08d95dd1da1 bound to our chassis#033[00m
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.590 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9217a609-3f35-4647-87cd-e08d95dd1da1#033[00m
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.601 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[f7e9cf82-e026-4bdf-b30f-d4b52eeb90a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.603 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9217a609-31 in ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 12:48:49 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:49Z|00083|binding|INFO|Setting lport c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd ovn-installed in OVS
Oct  1 12:48:49 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:49Z|00084|binding|INFO|Setting lport c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd up in Southbound
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.606 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9217a609-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.606 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[8fbc0984-75c9-4fc9-b033-ace01b7f91a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.609 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[6d066306-71e4-47c6-aee0-5bef681873bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:49 np0005464891 systemd-machined[214891]: New machine qemu-9-instance-00000009.
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.622 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[90da1245-24c9-4f07-ac8a-b226a61029c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:49 np0005464891 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Oct  1 12:48:49 np0005464891 systemd-udevd[281633]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.647 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[6cdfae30-43ef-4c9a-92de-e6b6187a1946]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:49 np0005464891 NetworkManager[44940]: <info>  [1759337329.6543] device (tapc8ba4648-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:48:49 np0005464891 NetworkManager[44940]: <info>  [1759337329.6556] device (tapc8ba4648-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.685 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[053d7532-625b-43db-9342-6b20192090dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:49 np0005464891 podman[281619]: 2025-10-01 16:48:49.688249627 +0000 UTC m=+0.080760608 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  1 12:48:49 np0005464891 systemd-udevd[281640]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:48:49 np0005464891 NetworkManager[44940]: <info>  [1759337329.6950] manager: (tap9217a609-30): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.693 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[97e6c79b-252a-4768-85a1-10a3664d8430]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.727 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[b81da208-5388-4c22-a84e-90001a51809c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.732 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[73b3d6da-e413-4ccd-afc7-ce090cabc992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:49 np0005464891 NetworkManager[44940]: <info>  [1759337329.7558] device (tap9217a609-30): carrier: link connected
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.763 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[7498520c-5bb9-44f0-b0ff-9ee60e771a43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.783 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[b5789434-4c16-4146-abd7-abc0ee5299ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9217a609-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:b8:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427832, 'reachable_time': 24633, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281674, 'error': None, 'target': 'ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.798 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[904efef5-95ee-41e1-a3b6-1c412acad516]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe54:b815'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 427832, 'tstamp': 427832}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281675, 'error': None, 'target': 'ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.816 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[4a936051-b047-492c-9538-2bcc089d98dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9217a609-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:b8:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427832, 'reachable_time': 24633, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281676, 'error': None, 'target': 'ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.845 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[efaf9874-efbb-492a-93ef-f2fe38226eba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.927 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[cc417be9-c857-42b0-bb78-cdf87d902331]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.929 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9217a609-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.929 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.930 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9217a609-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.948 2 DEBUG nova.compute.manager [req-511644ac-b88a-4cfb-a5df-6f650e7606cf req-66487402-79af-422a-8113-a241a6a420dd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Received event network-vif-plugged-c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.949 2 DEBUG oslo_concurrency.lockutils [req-511644ac-b88a-4cfb-a5df-6f650e7606cf req-66487402-79af-422a-8113-a241a6a420dd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.949 2 DEBUG oslo_concurrency.lockutils [req-511644ac-b88a-4cfb-a5df-6f650e7606cf req-66487402-79af-422a-8113-a241a6a420dd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.949 2 DEBUG oslo_concurrency.lockutils [req-511644ac-b88a-4cfb-a5df-6f650e7606cf req-66487402-79af-422a-8113-a241a6a420dd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.949 2 DEBUG nova.compute.manager [req-511644ac-b88a-4cfb-a5df-6f650e7606cf req-66487402-79af-422a-8113-a241a6a420dd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Processing event network-vif-plugged-c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 12:48:49 np0005464891 NetworkManager[44940]: <info>  [1759337329.9880] manager: (tap9217a609-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Oct  1 12:48:49 np0005464891 kernel: tap9217a609-30: entered promiscuous mode
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.991 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9217a609-30, col_values=(('external_ids', {'iface-id': '5558844a-e29a-46f0-b86d-8940a2f4c4de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:49 np0005464891 nova_compute[259907]: 2025-10-01 16:48:49.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.995 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9217a609-3f35-4647-87cd-e08d95dd1da1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9217a609-3f35-4647-87cd-e08d95dd1da1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 12:48:49 np0005464891 ovn_controller[152409]: 2025-10-01T16:48:49Z|00085|binding|INFO|Releasing lport 5558844a-e29a-46f0-b86d-8940a2f4c4de from this chassis (sb_readonly=0)
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.996 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[7f7c7eb2-4b0b-4b20-9203-cb2657b8573f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.997 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-9217a609-3f35-4647-87cd-e08d95dd1da1
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/9217a609-3f35-4647-87cd-e08d95dd1da1.pid.haproxy
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID 9217a609-3f35-4647-87cd-e08d95dd1da1
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 12:48:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:49.998 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1', 'env', 'PROCESS_TAG=haproxy-9217a609-3f35-4647-87cd-e08d95dd1da1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9217a609-3f35-4647-87cd-e08d95dd1da1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:50 np0005464891 podman[281750]: 2025-10-01 16:48:50.436336027 +0000 UTC m=+0.086650660 container create cc73d1c707790453448e56de06040ce78bc8a00d1b11caefa1530bfbdc67f96b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:48:50 np0005464891 podman[281750]: 2025-10-01 16:48:50.374187084 +0000 UTC m=+0.024501727 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 12:48:50 np0005464891 systemd[1]: Started libpod-conmon-cc73d1c707790453448e56de06040ce78bc8a00d1b11caefa1530bfbdc67f96b.scope.
Oct  1 12:48:50 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:48:50 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50b821fa82a06ad31d65d2884076ea5eb6bff36936adaf2daf6cf87f087b3e30/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 12:48:50 np0005464891 podman[281750]: 2025-10-01 16:48:50.520127646 +0000 UTC m=+0.170442299 container init cc73d1c707790453448e56de06040ce78bc8a00d1b11caefa1530bfbdc67f96b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  1 12:48:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:48:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1566520119' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:48:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:48:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1566520119' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:48:50 np0005464891 podman[281750]: 2025-10-01 16:48:50.529464834 +0000 UTC m=+0.179779467 container start cc73d1c707790453448e56de06040ce78bc8a00d1b11caefa1530bfbdc67f96b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  1 12:48:50 np0005464891 neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1[281765]: [NOTICE]   (281769) : New worker (281771) forked
Oct  1 12:48:50 np0005464891 neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1[281765]: [NOTICE]   (281769) : Loading success.
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.630 2 DEBUG nova.compute.manager [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.631 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337330.630083, 347eacbc-b9bd-4163-bc2e-a49a19a833c3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.631 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] VM Started (Lifecycle Event)#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.636 2 DEBUG nova.virt.libvirt.driver [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.640 2 INFO nova.virt.libvirt.driver [-] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Instance spawned successfully.#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.640 2 DEBUG nova.virt.libvirt.driver [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.656 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.664 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.667 2 DEBUG nova.virt.libvirt.driver [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.668 2 DEBUG nova.virt.libvirt.driver [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.668 2 DEBUG nova.virt.libvirt.driver [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.669 2 DEBUG nova.virt.libvirt.driver [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.669 2 DEBUG nova.virt.libvirt.driver [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.670 2 DEBUG nova.virt.libvirt.driver [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:48:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:48:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Oct  1 12:48:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Oct  1 12:48:50 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.712 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.712 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337330.6310081, 347eacbc-b9bd-4163-bc2e-a49a19a833c3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.712 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] VM Paused (Lifecycle Event)#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.749 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.752 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337330.6352618, 347eacbc-b9bd-4163-bc2e-a49a19a833c3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.753 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] VM Resumed (Lifecycle Event)#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.756 2 INFO nova.compute.manager [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Took 6.25 seconds to spawn the instance on the hypervisor.#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.757 2 DEBUG nova.compute.manager [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.771 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.773 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.798 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.887 2 INFO nova.compute.manager [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Took 7.36 seconds to build instance.#033[00m
Oct  1 12:48:50 np0005464891 nova_compute[259907]: 2025-10-01 16:48:50.911 2 DEBUG oslo_concurrency.lockutils [None req-bb6c63c7-c0bd-4880-82b3-d81e5690e86a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.478s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1243: 321 pgs: 321 active+clean; 341 MiB data, 429 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.0 MiB/s wr, 217 op/s
Oct  1 12:48:51 np0005464891 nova_compute[259907]: 2025-10-01 16:48:51.525 2 DEBUG nova.compute.manager [req-9f314542-52e0-4a68-83aa-81b7fba1ceeb req-24a16b8f-e67b-4d1d-85ea-dd2a8cbed842 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Received event network-changed-5ef93ed9-65fa-4d0e-a510-20023ab7144f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:48:51 np0005464891 nova_compute[259907]: 2025-10-01 16:48:51.525 2 DEBUG nova.compute.manager [req-9f314542-52e0-4a68-83aa-81b7fba1ceeb req-24a16b8f-e67b-4d1d-85ea-dd2a8cbed842 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Refreshing instance network info cache due to event network-changed-5ef93ed9-65fa-4d0e-a510-20023ab7144f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:48:51 np0005464891 nova_compute[259907]: 2025-10-01 16:48:51.525 2 DEBUG oslo_concurrency.lockutils [req-9f314542-52e0-4a68-83aa-81b7fba1ceeb req-24a16b8f-e67b-4d1d-85ea-dd2a8cbed842 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:48:51 np0005464891 nova_compute[259907]: 2025-10-01 16:48:51.525 2 DEBUG oslo_concurrency.lockutils [req-9f314542-52e0-4a68-83aa-81b7fba1ceeb req-24a16b8f-e67b-4d1d-85ea-dd2a8cbed842 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:48:51 np0005464891 nova_compute[259907]: 2025-10-01 16:48:51.526 2 DEBUG nova.network.neutron [req-9f314542-52e0-4a68-83aa-81b7fba1ceeb req-24a16b8f-e67b-4d1d-85ea-dd2a8cbed842 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Refreshing network info cache for port 5ef93ed9-65fa-4d0e-a510-20023ab7144f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:48:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Oct  1 12:48:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Oct  1 12:48:51 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Oct  1 12:48:52 np0005464891 nova_compute[259907]: 2025-10-01 16:48:52.036 2 DEBUG nova.compute.manager [req-4f7016a8-4fda-4e16-9fa6-0818970f6873 req-bd891756-9a3a-47b4-9f12-df16ecc7ab28 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Received event network-vif-plugged-c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:48:52 np0005464891 nova_compute[259907]: 2025-10-01 16:48:52.037 2 DEBUG oslo_concurrency.lockutils [req-4f7016a8-4fda-4e16-9fa6-0818970f6873 req-bd891756-9a3a-47b4-9f12-df16ecc7ab28 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:48:52 np0005464891 nova_compute[259907]: 2025-10-01 16:48:52.037 2 DEBUG oslo_concurrency.lockutils [req-4f7016a8-4fda-4e16-9fa6-0818970f6873 req-bd891756-9a3a-47b4-9f12-df16ecc7ab28 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:48:52 np0005464891 nova_compute[259907]: 2025-10-01 16:48:52.037 2 DEBUG oslo_concurrency.lockutils [req-4f7016a8-4fda-4e16-9fa6-0818970f6873 req-bd891756-9a3a-47b4-9f12-df16ecc7ab28 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:48:52 np0005464891 nova_compute[259907]: 2025-10-01 16:48:52.038 2 DEBUG nova.compute.manager [req-4f7016a8-4fda-4e16-9fa6-0818970f6873 req-bd891756-9a3a-47b4-9f12-df16ecc7ab28 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] No waiting events found dispatching network-vif-plugged-c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:48:52 np0005464891 nova_compute[259907]: 2025-10-01 16:48:52.038 2 WARNING nova.compute.manager [req-4f7016a8-4fda-4e16-9fa6-0818970f6873 req-bd891756-9a3a-47b4-9f12-df16ecc7ab28 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Received unexpected event network-vif-plugged-c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd for instance with vm_state active and task_state None.#033[00m
Oct  1 12:48:52 np0005464891 nova_compute[259907]: 2025-10-01 16:48:52.909 2 DEBUG nova.network.neutron [req-9f314542-52e0-4a68-83aa-81b7fba1ceeb req-24a16b8f-e67b-4d1d-85ea-dd2a8cbed842 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Updated VIF entry in instance network info cache for port 5ef93ed9-65fa-4d0e-a510-20023ab7144f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:48:52 np0005464891 nova_compute[259907]: 2025-10-01 16:48:52.909 2 DEBUG nova.network.neutron [req-9f314542-52e0-4a68-83aa-81b7fba1ceeb req-24a16b8f-e67b-4d1d-85ea-dd2a8cbed842 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Updating instance_info_cache with network_info: [{"id": "5ef93ed9-65fa-4d0e-a510-20023ab7144f", "address": "fa:16:3e:73:57:c3", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ef93ed9-65", "ovs_interfaceid": "5ef93ed9-65fa-4d0e-a510-20023ab7144f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:48:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 322 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 233 op/s
Oct  1 12:48:52 np0005464891 nova_compute[259907]: 2025-10-01 16:48:52.939 2 DEBUG oslo_concurrency.lockutils [req-9f314542-52e0-4a68-83aa-81b7fba1ceeb req-24a16b8f-e67b-4d1d-85ea-dd2a8cbed842 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:48:53 np0005464891 nova_compute[259907]: 2025-10-01 16:48:53.582 2 DEBUG nova.compute.manager [req-431ac63f-926c-4c92-892d-0d04deb49acf req-c7d81bea-e28b-4b2f-8901-c537ee9b14b3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Received event network-changed-c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:48:53 np0005464891 nova_compute[259907]: 2025-10-01 16:48:53.582 2 DEBUG nova.compute.manager [req-431ac63f-926c-4c92-892d-0d04deb49acf req-c7d81bea-e28b-4b2f-8901-c537ee9b14b3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Refreshing instance network info cache due to event network-changed-c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:48:53 np0005464891 nova_compute[259907]: 2025-10-01 16:48:53.582 2 DEBUG oslo_concurrency.lockutils [req-431ac63f-926c-4c92-892d-0d04deb49acf req-c7d81bea-e28b-4b2f-8901-c537ee9b14b3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-347eacbc-b9bd-4163-bc2e-a49a19a833c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:48:53 np0005464891 nova_compute[259907]: 2025-10-01 16:48:53.582 2 DEBUG oslo_concurrency.lockutils [req-431ac63f-926c-4c92-892d-0d04deb49acf req-c7d81bea-e28b-4b2f-8901-c537ee9b14b3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-347eacbc-b9bd-4163-bc2e-a49a19a833c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:48:53 np0005464891 nova_compute[259907]: 2025-10-01 16:48:53.582 2 DEBUG nova.network.neutron [req-431ac63f-926c-4c92-892d-0d04deb49acf req-c7d81bea-e28b-4b2f-8901-c537ee9b14b3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Refreshing network info cache for port c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:48:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Oct  1 12:48:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Oct  1 12:48:53 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Oct  1 12:48:53 np0005464891 nova_compute[259907]: 2025-10-01 16:48:53.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:48:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Oct  1 12:48:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Oct  1 12:48:54 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Oct  1 12:48:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 310 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 43 KiB/s wr, 243 op/s
Oct  1 12:48:55 np0005464891 nova_compute[259907]: 2025-10-01 16:48:55.031 2 DEBUG nova.network.neutron [req-431ac63f-926c-4c92-892d-0d04deb49acf req-c7d81bea-e28b-4b2f-8901-c537ee9b14b3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Updated VIF entry in instance network info cache for port c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  1 12:48:55 np0005464891 nova_compute[259907]: 2025-10-01 16:48:55.032 2 DEBUG nova.network.neutron [req-431ac63f-926c-4c92-892d-0d04deb49acf req-c7d81bea-e28b-4b2f-8901-c537ee9b14b3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Updating instance_info_cache with network_info: [{"id": "c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd", "address": "fa:16:3e:03:7f:de", "network": {"id": "9217a609-3f35-4647-87cd-e08d95dd1da1", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-994008652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19100b7dd5c9420db1d7f374559a9498", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8ba4648-dd", "ovs_interfaceid": "c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  1 12:48:55 np0005464891 nova_compute[259907]: 2025-10-01 16:48:55.048 2 DEBUG oslo_concurrency.lockutils [req-431ac63f-926c-4c92-892d-0d04deb49acf req-c7d81bea-e28b-4b2f-8901-c537ee9b14b3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-347eacbc-b9bd-4163-bc2e-a49a19a833c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  1 12:48:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:48:55.382 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  1 12:48:55 np0005464891 nova_compute[259907]: 2025-10-01 16:48:55.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:48:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:48:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1249: 321 pgs: 321 active+clean; 295 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 33 KiB/s wr, 298 op/s
Oct  1 12:48:58 np0005464891 nova_compute[259907]: 2025-10-01 16:48:58.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:48:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1250: 321 pgs: 321 active+clean; 295 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 28 KiB/s wr, 255 op/s
Oct  1 12:49:00 np0005464891 podman[281780]: 2025-10-01 16:49:00.01162424 +0000 UTC m=+0.118766685 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:49:00 np0005464891 nova_compute[259907]: 2025-10-01 16:49:00.604 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759337325.5559, 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  1 12:49:00 np0005464891 nova_compute[259907]: 2025-10-01 16:49:00.604 2 INFO nova.compute.manager [-] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] VM Stopped (Lifecycle Event)
Oct  1 12:49:00 np0005464891 nova_compute[259907]: 2025-10-01 16:49:00.632 2 DEBUG nova.compute.manager [None req-d08aae66-7305-423a-bcbb-1b8918c87879 - - - - - -] [instance: 7b69453f-fbc0-43ba-bf0b-07c11bd3f9ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 12:49:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:49:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Oct  1 12:49:00 np0005464891 nova_compute[259907]: 2025-10-01 16:49:00.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:49:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Oct  1 12:49:00 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Oct  1 12:49:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 299 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 262 KiB/s wr, 187 op/s
Oct  1 12:49:01 np0005464891 nova_compute[259907]: 2025-10-01 16:49:01.017 2 DEBUG oslo_concurrency.lockutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquiring lock "01833916-f84a-425e-b28f-d214922d3126" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:49:01 np0005464891 nova_compute[259907]: 2025-10-01 16:49:01.018 2 DEBUG oslo_concurrency.lockutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "01833916-f84a-425e-b28f-d214922d3126" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:49:01 np0005464891 nova_compute[259907]: 2025-10-01 16:49:01.039 2 DEBUG nova.compute.manager [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  1 12:49:01 np0005464891 nova_compute[259907]: 2025-10-01 16:49:01.127 2 DEBUG oslo_concurrency.lockutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:49:01 np0005464891 nova_compute[259907]: 2025-10-01 16:49:01.128 2 DEBUG oslo_concurrency.lockutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:49:01 np0005464891 nova_compute[259907]: 2025-10-01 16:49:01.144 2 DEBUG nova.virt.hardware [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  1 12:49:01 np0005464891 nova_compute[259907]: 2025-10-01 16:49:01.145 2 INFO nova.compute.claims [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Claim successful on node compute-0.ctlplane.example.com
Oct  1 12:49:01 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:01Z|00010|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.6 does not match offer 10.100.0.5
Oct  1 12:49:01 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:01Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:73:57:c3 10.100.0.5
Oct  1 12:49:01 np0005464891 nova_compute[259907]: 2025-10-01 16:49:01.288 2 DEBUG oslo_concurrency.processutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 12:49:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:49:01 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1040563771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:49:01 np0005464891 nova_compute[259907]: 2025-10-01 16:49:01.759 2 DEBUG oslo_concurrency.processutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 12:49:01 np0005464891 nova_compute[259907]: 2025-10-01 16:49:01.768 2 DEBUG nova.compute.provider_tree [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  1 12:49:01 np0005464891 nova_compute[259907]: 2025-10-01 16:49:01.786 2 DEBUG nova.scheduler.client.report [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  1 12:49:01 np0005464891 nova_compute[259907]: 2025-10-01 16:49:01.814 2 DEBUG oslo_concurrency.lockutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:49:01 np0005464891 nova_compute[259907]: 2025-10-01 16:49:01.815 2 DEBUG nova.compute.manager [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  1 12:49:01 np0005464891 nova_compute[259907]: 2025-10-01 16:49:01.867 2 DEBUG nova.compute.manager [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  1 12:49:01 np0005464891 nova_compute[259907]: 2025-10-01 16:49:01.868 2 DEBUG nova.network.neutron [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  1 12:49:01 np0005464891 nova_compute[259907]: 2025-10-01 16:49:01.922 2 INFO nova.virt.libvirt.driver [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  1 12:49:01 np0005464891 nova_compute[259907]: 2025-10-01 16:49:01.965 2 DEBUG nova.compute.manager [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  1 12:49:02 np0005464891 nova_compute[259907]: 2025-10-01 16:49:02.049 2 DEBUG nova.compute.manager [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  1 12:49:02 np0005464891 nova_compute[259907]: 2025-10-01 16:49:02.050 2 DEBUG nova.virt.libvirt.driver [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  1 12:49:02 np0005464891 nova_compute[259907]: 2025-10-01 16:49:02.050 2 INFO nova.virt.libvirt.driver [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Creating image(s)
Oct  1 12:49:02 np0005464891 nova_compute[259907]: 2025-10-01 16:49:02.072 2 DEBUG nova.storage.rbd_utils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] rbd image 01833916-f84a-425e-b28f-d214922d3126_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  1 12:49:02 np0005464891 nova_compute[259907]: 2025-10-01 16:49:02.101 2 DEBUG nova.storage.rbd_utils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] rbd image 01833916-f84a-425e-b28f-d214922d3126_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  1 12:49:02 np0005464891 nova_compute[259907]: 2025-10-01 16:49:02.129 2 DEBUG nova.storage.rbd_utils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] rbd image 01833916-f84a-425e-b28f-d214922d3126_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  1 12:49:02 np0005464891 nova_compute[259907]: 2025-10-01 16:49:02.135 2 DEBUG oslo_concurrency.processutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 12:49:02 np0005464891 nova_compute[259907]: 2025-10-01 16:49:02.167 2 DEBUG nova.policy [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3517dc72472c436aaf2fe65b5ce2f240', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '69d5fb4f7a0b4337a1b8774e04c97b9a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  1 12:49:02 np0005464891 nova_compute[259907]: 2025-10-01 16:49:02.228 2 DEBUG oslo_concurrency.processutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 12:49:02 np0005464891 nova_compute[259907]: 2025-10-01 16:49:02.229 2 DEBUG oslo_concurrency.lockutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquiring lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:49:02 np0005464891 nova_compute[259907]: 2025-10-01 16:49:02.229 2 DEBUG oslo_concurrency.lockutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:49:02 np0005464891 nova_compute[259907]: 2025-10-01 16:49:02.230 2 DEBUG oslo_concurrency.lockutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:49:02 np0005464891 nova_compute[259907]: 2025-10-01 16:49:02.258 2 DEBUG nova.storage.rbd_utils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] rbd image 01833916-f84a-425e-b28f-d214922d3126_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  1 12:49:02 np0005464891 nova_compute[259907]: 2025-10-01 16:49:02.264 2 DEBUG oslo_concurrency.processutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa 01833916-f84a-425e-b28f-d214922d3126_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 12:49:02 np0005464891 nova_compute[259907]: 2025-10-01 16:49:02.588 2 DEBUG oslo_concurrency.processutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa 01833916-f84a-425e-b28f-d214922d3126_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 12:49:02 np0005464891 nova_compute[259907]: 2025-10-01 16:49:02.656 2 DEBUG nova.storage.rbd_utils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] resizing rbd image 01833916-f84a-425e-b28f-d214922d3126_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  1 12:49:02 np0005464891 nova_compute[259907]: 2025-10-01 16:49:02.786 2 DEBUG nova.objects.instance [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lazy-loading 'migration_context' on Instance uuid 01833916-f84a-425e-b28f-d214922d3126 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  1 12:49:02 np0005464891 nova_compute[259907]: 2025-10-01 16:49:02.805 2 DEBUG nova.virt.libvirt.driver [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  1 12:49:02 np0005464891 nova_compute[259907]: 2025-10-01 16:49:02.806 2 DEBUG nova.virt.libvirt.driver [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Ensure instance console log exists: /var/lib/nova/instances/01833916-f84a-425e-b28f-d214922d3126/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  1 12:49:02 np0005464891 nova_compute[259907]: 2025-10-01 16:49:02.806 2 DEBUG oslo_concurrency.lockutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:49:02 np0005464891 nova_compute[259907]: 2025-10-01 16:49:02.807 2 DEBUG oslo_concurrency.lockutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:49:02 np0005464891 nova_compute[259907]: 2025-10-01 16:49:02.807 2 DEBUG oslo_concurrency.lockutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:49:02 np0005464891 nova_compute[259907]: 2025-10-01 16:49:02.836 2 DEBUG nova.network.neutron [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Successfully created port: 31dd65ea-0bf2-4c61-a641-bff75a96926d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  1 12:49:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1253: 321 pgs: 321 active+clean; 333 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.1 MiB/s wr, 193 op/s
Oct  1 12:49:02 np0005464891 podman[281995]: 2025-10-01 16:49:02.981777169 +0000 UTC m=+0.087804721 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:49:03 np0005464891 nova_compute[259907]: 2025-10-01 16:49:03.740 2 DEBUG nova.network.neutron [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Successfully updated port: 31dd65ea-0bf2-4c61-a641-bff75a96926d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  1 12:49:03 np0005464891 nova_compute[259907]: 2025-10-01 16:49:03.761 2 DEBUG oslo_concurrency.lockutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquiring lock "refresh_cache-01833916-f84a-425e-b28f-d214922d3126" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  1 12:49:03 np0005464891 nova_compute[259907]: 2025-10-01 16:49:03.762 2 DEBUG oslo_concurrency.lockutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquired lock "refresh_cache-01833916-f84a-425e-b28f-d214922d3126" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  1 12:49:03 np0005464891 nova_compute[259907]: 2025-10-01 16:49:03.762 2 DEBUG nova.network.neutron [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  1 12:49:03 np0005464891 nova_compute[259907]: 2025-10-01 16:49:03.855 2 DEBUG nova.compute.manager [req-ef663938-7682-4ef0-9c15-5e9b858d9a24 req-47a3e494-863b-4f65-ad0d-628e1b7ec03b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Received event network-changed-31dd65ea-0bf2-4c61-a641-bff75a96926d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  1 12:49:03 np0005464891 nova_compute[259907]: 2025-10-01 16:49:03.855 2 DEBUG nova.compute.manager [req-ef663938-7682-4ef0-9c15-5e9b858d9a24 req-47a3e494-863b-4f65-ad0d-628e1b7ec03b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Refreshing instance network info cache due to event network-changed-31dd65ea-0bf2-4c61-a641-bff75a96926d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  1 12:49:03 np0005464891 nova_compute[259907]: 2025-10-01 16:49:03.855 2 DEBUG oslo_concurrency.lockutils [req-ef663938-7682-4ef0-9c15-5e9b858d9a24 req-47a3e494-863b-4f65-ad0d-628e1b7ec03b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-01833916-f84a-425e-b28f-d214922d3126" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  1 12:49:03 np0005464891 nova_compute[259907]: 2025-10-01 16:49:03.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:49:03 np0005464891 nova_compute[259907]: 2025-10-01 16:49:03.911 2 DEBUG nova.network.neutron [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  1 12:49:04 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:04Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:03:7f:de 10.100.0.13
Oct  1 12:49:04 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:04Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:03:7f:de 10.100.0.13
Oct  1 12:49:04 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:04Z|00014|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.6 does not match offer 10.100.0.5
Oct  1 12:49:04 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:04Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:73:57:c3 10.100.0.5
Oct  1 12:49:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1254: 321 pgs: 321 active+clean; 371 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.7 MiB/s wr, 200 op/s
Oct  1 12:49:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:49:05 np0005464891 nova_compute[259907]: 2025-10-01 16:49:05.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:06 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:06Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:73:57:c3 10.100.0.5
Oct  1 12:49:06 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:06Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:73:57:c3 10.100.0.5
Oct  1 12:49:06 np0005464891 nova_compute[259907]: 2025-10-01 16:49:06.610 2 DEBUG nova.network.neutron [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Updating instance_info_cache with network_info: [{"id": "31dd65ea-0bf2-4c61-a641-bff75a96926d", "address": "fa:16:3e:61:8b:e9", "network": {"id": "3401e30b-97c6-4012-a9d4-0114c56bacd5", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1585052793-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69d5fb4f7a0b4337a1b8774e04c97b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31dd65ea-0b", "ovs_interfaceid": "31dd65ea-0bf2-4c61-a641-bff75a96926d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:49:06 np0005464891 nova_compute[259907]: 2025-10-01 16:49:06.626 2 DEBUG oslo_concurrency.lockutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Releasing lock "refresh_cache-01833916-f84a-425e-b28f-d214922d3126" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:49:06 np0005464891 nova_compute[259907]: 2025-10-01 16:49:06.627 2 DEBUG nova.compute.manager [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Instance network_info: |[{"id": "31dd65ea-0bf2-4c61-a641-bff75a96926d", "address": "fa:16:3e:61:8b:e9", "network": {"id": "3401e30b-97c6-4012-a9d4-0114c56bacd5", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1585052793-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69d5fb4f7a0b4337a1b8774e04c97b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31dd65ea-0b", "ovs_interfaceid": "31dd65ea-0bf2-4c61-a641-bff75a96926d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 12:49:06 np0005464891 nova_compute[259907]: 2025-10-01 16:49:06.627 2 DEBUG oslo_concurrency.lockutils [req-ef663938-7682-4ef0-9c15-5e9b858d9a24 req-47a3e494-863b-4f65-ad0d-628e1b7ec03b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-01833916-f84a-425e-b28f-d214922d3126" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:49:06 np0005464891 nova_compute[259907]: 2025-10-01 16:49:06.628 2 DEBUG nova.network.neutron [req-ef663938-7682-4ef0-9c15-5e9b858d9a24 req-47a3e494-863b-4f65-ad0d-628e1b7ec03b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Refreshing network info cache for port 31dd65ea-0bf2-4c61-a641-bff75a96926d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:49:06 np0005464891 nova_compute[259907]: 2025-10-01 16:49:06.633 2 DEBUG nova.virt.libvirt.driver [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Start _get_guest_xml network_info=[{"id": "31dd65ea-0bf2-4c61-a641-bff75a96926d", "address": "fa:16:3e:61:8b:e9", "network": {"id": "3401e30b-97c6-4012-a9d4-0114c56bacd5", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1585052793-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69d5fb4f7a0b4337a1b8774e04c97b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31dd65ea-0b", "ovs_interfaceid": "31dd65ea-0bf2-4c61-a641-bff75a96926d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'image_id': 'f01c1e7c-fea3-4433-a44a-d71153552c78'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 12:49:06 np0005464891 nova_compute[259907]: 2025-10-01 16:49:06.640 2 WARNING nova.virt.libvirt.driver [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:49:06 np0005464891 nova_compute[259907]: 2025-10-01 16:49:06.646 2 DEBUG nova.virt.libvirt.host [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 12:49:06 np0005464891 nova_compute[259907]: 2025-10-01 16:49:06.646 2 DEBUG nova.virt.libvirt.host [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 12:49:06 np0005464891 nova_compute[259907]: 2025-10-01 16:49:06.650 2 DEBUG nova.virt.libvirt.host [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 12:49:06 np0005464891 nova_compute[259907]: 2025-10-01 16:49:06.651 2 DEBUG nova.virt.libvirt.host [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 12:49:06 np0005464891 nova_compute[259907]: 2025-10-01 16:49:06.652 2 DEBUG nova.virt.libvirt.driver [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 12:49:06 np0005464891 nova_compute[259907]: 2025-10-01 16:49:06.652 2 DEBUG nova.virt.hardware [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 12:49:06 np0005464891 nova_compute[259907]: 2025-10-01 16:49:06.653 2 DEBUG nova.virt.hardware [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 12:49:06 np0005464891 nova_compute[259907]: 2025-10-01 16:49:06.653 2 DEBUG nova.virt.hardware [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 12:49:06 np0005464891 nova_compute[259907]: 2025-10-01 16:49:06.654 2 DEBUG nova.virt.hardware [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 12:49:06 np0005464891 nova_compute[259907]: 2025-10-01 16:49:06.654 2 DEBUG nova.virt.hardware [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 12:49:06 np0005464891 nova_compute[259907]: 2025-10-01 16:49:06.654 2 DEBUG nova.virt.hardware [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 12:49:06 np0005464891 nova_compute[259907]: 2025-10-01 16:49:06.655 2 DEBUG nova.virt.hardware [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 12:49:06 np0005464891 nova_compute[259907]: 2025-10-01 16:49:06.655 2 DEBUG nova.virt.hardware [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 12:49:06 np0005464891 nova_compute[259907]: 2025-10-01 16:49:06.656 2 DEBUG nova.virt.hardware [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 12:49:06 np0005464891 nova_compute[259907]: 2025-10-01 16:49:06.656 2 DEBUG nova.virt.hardware [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 12:49:06 np0005464891 nova_compute[259907]: 2025-10-01 16:49:06.657 2 DEBUG nova.virt.hardware [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 12:49:06 np0005464891 nova_compute[259907]: 2025-10-01 16:49:06.662 2 DEBUG oslo_concurrency.processutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:49:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1255: 321 pgs: 321 active+clean; 386 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.3 MiB/s wr, 164 op/s
Oct  1 12:49:06 np0005464891 podman[282035]: 2025-10-01 16:49:06.943255684 +0000 UTC m=+0.056986321 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  1 12:49:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:49:07 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/490432511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.149 2 DEBUG oslo_concurrency.processutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.186 2 DEBUG nova.storage.rbd_utils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] rbd image 01833916-f84a-425e-b28f-d214922d3126_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.194 2 DEBUG oslo_concurrency.processutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:49:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:49:07 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1262911341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.676 2 DEBUG oslo_concurrency.processutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.678 2 DEBUG nova.virt.libvirt.vif [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:49:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-633721332',display_name='tempest-VolumesSnapshotTestJSON-instance-633721332',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-633721332',id=10,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHQXtiJojKSjlX+L4+UcMOUZxDjM6YHarO/WRI6PZsXzV57BI1NGaQ5utimUiS/B2m/z/6TZx53P1GuknwcJ4JxYbnNCo1sgJq2vAVD/0YOb5f+MRSQ3HDMnQdqctYUuJw==',key_name='tempest-keypair-120189569',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='69d5fb4f7a0b4337a1b8774e04c97b9a',ramdisk_id='',reservation_id='r-opvs9h41',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-1941074907',owner_user_name='tempest-VolumesSnapshotTestJSON-1941074907-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:49:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3517dc72472c436aaf2fe65b5ce2f240',uuid=01833916-f84a-425e-b28f-d214922d3126,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31dd65ea-0bf2-4c61-a641-bff75a96926d", "address": "fa:16:3e:61:8b:e9", "network": {"id": "3401e30b-97c6-4012-a9d4-0114c56bacd5", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1585052793-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69d5fb4f7a0b4337a1b8774e04c97b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31dd65ea-0b", "ovs_interfaceid": "31dd65ea-0bf2-4c61-a641-bff75a96926d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.678 2 DEBUG nova.network.os_vif_util [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Converting VIF {"id": "31dd65ea-0bf2-4c61-a641-bff75a96926d", "address": "fa:16:3e:61:8b:e9", "network": {"id": "3401e30b-97c6-4012-a9d4-0114c56bacd5", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1585052793-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69d5fb4f7a0b4337a1b8774e04c97b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31dd65ea-0b", "ovs_interfaceid": "31dd65ea-0bf2-4c61-a641-bff75a96926d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.679 2 DEBUG nova.network.os_vif_util [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:8b:e9,bridge_name='br-int',has_traffic_filtering=True,id=31dd65ea-0bf2-4c61-a641-bff75a96926d,network=Network(3401e30b-97c6-4012-a9d4-0114c56bacd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31dd65ea-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.680 2 DEBUG nova.objects.instance [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lazy-loading 'pci_devices' on Instance uuid 01833916-f84a-425e-b28f-d214922d3126 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.692 2 DEBUG nova.virt.libvirt.driver [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] End _get_guest_xml xml=<domain type="kvm">
Oct  1 12:49:07 np0005464891 nova_compute[259907]:  <uuid>01833916-f84a-425e-b28f-d214922d3126</uuid>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:  <name>instance-0000000a</name>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <nova:name>tempest-VolumesSnapshotTestJSON-instance-633721332</nova:name>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 16:49:06</nova:creationTime>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 12:49:07 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:        <nova:user uuid="3517dc72472c436aaf2fe65b5ce2f240">tempest-VolumesSnapshotTestJSON-1941074907-project-member</nova:user>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:        <nova:project uuid="69d5fb4f7a0b4337a1b8774e04c97b9a">tempest-VolumesSnapshotTestJSON-1941074907</nova:project>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <nova:root type="image" uuid="f01c1e7c-fea3-4433-a44a-d71153552c78"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:        <nova:port uuid="31dd65ea-0bf2-4c61-a641-bff75a96926d">
Oct  1 12:49:07 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <system>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <entry name="serial">01833916-f84a-425e-b28f-d214922d3126</entry>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <entry name="uuid">01833916-f84a-425e-b28f-d214922d3126</entry>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    </system>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:  <os>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:  </clock>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/01833916-f84a-425e-b28f-d214922d3126_disk">
Oct  1 12:49:07 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:49:07 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/01833916-f84a-425e-b28f-d214922d3126_disk.config">
Oct  1 12:49:07 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:49:07 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:61:8b:e9"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <target dev="tap31dd65ea-0b"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/01833916-f84a-425e-b28f-d214922d3126/console.log" append="off"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    </serial>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <video>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 12:49:07 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 12:49:07 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:49:07 np0005464891 nova_compute[259907]: </domain>
Oct  1 12:49:07 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.694 2 DEBUG nova.compute.manager [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Preparing to wait for external event network-vif-plugged-31dd65ea-0bf2-4c61-a641-bff75a96926d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.694 2 DEBUG oslo_concurrency.lockutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquiring lock "01833916-f84a-425e-b28f-d214922d3126-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.694 2 DEBUG oslo_concurrency.lockutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "01833916-f84a-425e-b28f-d214922d3126-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.694 2 DEBUG oslo_concurrency.lockutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "01833916-f84a-425e-b28f-d214922d3126-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.695 2 DEBUG nova.virt.libvirt.vif [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:49:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-633721332',display_name='tempest-VolumesSnapshotTestJSON-instance-633721332',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-633721332',id=10,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHQXtiJojKSjlX+L4+UcMOUZxDjM6YHarO/WRI6PZsXzV57BI1NGaQ5utimUiS/B2m/z/6TZx53P1GuknwcJ4JxYbnNCo1sgJq2vAVD/0YOb5f+MRSQ3HDMnQdqctYUuJw==',key_name='tempest-keypair-120189569',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='69d5fb4f7a0b4337a1b8774e04c97b9a',ramdisk_id='',reservation_id='r-opvs9h41',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-1941074907',owner_user_name='tempest-VolumesSnapshotTestJSON-1941074907-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:49:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3517dc72472c436aaf2fe65b5ce2f240',uuid=01833916-f84a-425e-b28f-d214922d3126,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31dd65ea-0bf2-4c61-a641-bff75a96926d", "address": "fa:16:3e:61:8b:e9", "network": {"id": "3401e30b-97c6-4012-a9d4-0114c56bacd5", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1585052793-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69d5fb4f7a0b4337a1b8774e04c97b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31dd65ea-0b", "ovs_interfaceid": "31dd65ea-0bf2-4c61-a641-bff75a96926d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.695 2 DEBUG nova.network.os_vif_util [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Converting VIF {"id": "31dd65ea-0bf2-4c61-a641-bff75a96926d", "address": "fa:16:3e:61:8b:e9", "network": {"id": "3401e30b-97c6-4012-a9d4-0114c56bacd5", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1585052793-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69d5fb4f7a0b4337a1b8774e04c97b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31dd65ea-0b", "ovs_interfaceid": "31dd65ea-0bf2-4c61-a641-bff75a96926d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.696 2 DEBUG nova.network.os_vif_util [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:8b:e9,bridge_name='br-int',has_traffic_filtering=True,id=31dd65ea-0bf2-4c61-a641-bff75a96926d,network=Network(3401e30b-97c6-4012-a9d4-0114c56bacd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31dd65ea-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.696 2 DEBUG os_vif [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:8b:e9,bridge_name='br-int',has_traffic_filtering=True,id=31dd65ea-0bf2-4c61-a641-bff75a96926d,network=Network(3401e30b-97c6-4012-a9d4-0114c56bacd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31dd65ea-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.697 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.697 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.702 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31dd65ea-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.702 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31dd65ea-0b, col_values=(('external_ids', {'iface-id': '31dd65ea-0bf2-4c61-a641-bff75a96926d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:8b:e9', 'vm-uuid': '01833916-f84a-425e-b28f-d214922d3126'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:49:07 np0005464891 NetworkManager[44940]: <info>  [1759337347.7055] manager: (tap31dd65ea-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.711 2 INFO os_vif [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:8b:e9,bridge_name='br-int',has_traffic_filtering=True,id=31dd65ea-0bf2-4c61-a641-bff75a96926d,network=Network(3401e30b-97c6-4012-a9d4-0114c56bacd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31dd65ea-0b')#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.723 2 DEBUG nova.network.neutron [req-ef663938-7682-4ef0-9c15-5e9b858d9a24 req-47a3e494-863b-4f65-ad0d-628e1b7ec03b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Updated VIF entry in instance network info cache for port 31dd65ea-0bf2-4c61-a641-bff75a96926d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.723 2 DEBUG nova.network.neutron [req-ef663938-7682-4ef0-9c15-5e9b858d9a24 req-47a3e494-863b-4f65-ad0d-628e1b7ec03b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Updating instance_info_cache with network_info: [{"id": "31dd65ea-0bf2-4c61-a641-bff75a96926d", "address": "fa:16:3e:61:8b:e9", "network": {"id": "3401e30b-97c6-4012-a9d4-0114c56bacd5", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1585052793-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69d5fb4f7a0b4337a1b8774e04c97b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31dd65ea-0b", "ovs_interfaceid": "31dd65ea-0bf2-4c61-a641-bff75a96926d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.737 2 DEBUG oslo_concurrency.lockutils [req-ef663938-7682-4ef0-9c15-5e9b858d9a24 req-47a3e494-863b-4f65-ad0d-628e1b7ec03b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-01833916-f84a-425e-b28f-d214922d3126" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.763 2 DEBUG nova.virt.libvirt.driver [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.764 2 DEBUG nova.virt.libvirt.driver [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.764 2 DEBUG nova.virt.libvirt.driver [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] No VIF found with MAC fa:16:3e:61:8b:e9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.764 2 INFO nova.virt.libvirt.driver [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Using config drive#033[00m
Oct  1 12:49:07 np0005464891 nova_compute[259907]: 2025-10-01 16:49:07.787 2 DEBUG nova.storage.rbd_utils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] rbd image 01833916-f84a-425e-b28f-d214922d3126_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:49:08 np0005464891 nova_compute[259907]: 2025-10-01 16:49:08.075 2 INFO nova.virt.libvirt.driver [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Creating config drive at /var/lib/nova/instances/01833916-f84a-425e-b28f-d214922d3126/disk.config#033[00m
Oct  1 12:49:08 np0005464891 nova_compute[259907]: 2025-10-01 16:49:08.080 2 DEBUG oslo_concurrency.processutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/01833916-f84a-425e-b28f-d214922d3126/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbq2y4eyj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:49:08 np0005464891 nova_compute[259907]: 2025-10-01 16:49:08.221 2 DEBUG oslo_concurrency.processutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/01833916-f84a-425e-b28f-d214922d3126/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbq2y4eyj" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:49:08 np0005464891 nova_compute[259907]: 2025-10-01 16:49:08.253 2 DEBUG nova.storage.rbd_utils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] rbd image 01833916-f84a-425e-b28f-d214922d3126_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:49:08 np0005464891 nova_compute[259907]: 2025-10-01 16:49:08.258 2 DEBUG oslo_concurrency.processutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/01833916-f84a-425e-b28f-d214922d3126/disk.config 01833916-f84a-425e-b28f-d214922d3126_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:49:08 np0005464891 nova_compute[259907]: 2025-10-01 16:49:08.444 2 DEBUG oslo_concurrency.processutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/01833916-f84a-425e-b28f-d214922d3126/disk.config 01833916-f84a-425e-b28f-d214922d3126_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:49:08 np0005464891 nova_compute[259907]: 2025-10-01 16:49:08.446 2 INFO nova.virt.libvirt.driver [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Deleting local config drive /var/lib/nova/instances/01833916-f84a-425e-b28f-d214922d3126/disk.config because it was imported into RBD.#033[00m
Oct  1 12:49:08 np0005464891 kernel: tap31dd65ea-0b: entered promiscuous mode
Oct  1 12:49:08 np0005464891 NetworkManager[44940]: <info>  [1759337348.5162] manager: (tap31dd65ea-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Oct  1 12:49:08 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:08Z|00086|binding|INFO|Claiming lport 31dd65ea-0bf2-4c61-a641-bff75a96926d for this chassis.
Oct  1 12:49:08 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:08Z|00087|binding|INFO|31dd65ea-0bf2-4c61-a641-bff75a96926d: Claiming fa:16:3e:61:8b:e9 10.100.0.10
Oct  1 12:49:08 np0005464891 nova_compute[259907]: 2025-10-01 16:49:08.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.527 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:8b:e9 10.100.0.10'], port_security=['fa:16:3e:61:8b:e9 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '01833916-f84a-425e-b28f-d214922d3126', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3401e30b-97c6-4012-a9d4-0114c56bacd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '69d5fb4f7a0b4337a1b8774e04c97b9a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '337d1ee8-b54a-42da-a113-4004bc12381c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6048fd95-db94-4f1d-be7e-ff0b5269a1e3, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=31dd65ea-0bf2-4c61-a641-bff75a96926d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.528 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 31dd65ea-0bf2-4c61-a641-bff75a96926d in datapath 3401e30b-97c6-4012-a9d4-0114c56bacd5 bound to our chassis#033[00m
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.530 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3401e30b-97c6-4012-a9d4-0114c56bacd5#033[00m
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.549 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[19ff143e-7799-4428-b80c-f58847cabce9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.551 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3401e30b-91 in ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 12:49:08 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:08Z|00088|binding|INFO|Setting lport 31dd65ea-0bf2-4c61-a641-bff75a96926d ovn-installed in OVS
Oct  1 12:49:08 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:08Z|00089|binding|INFO|Setting lport 31dd65ea-0bf2-4c61-a641-bff75a96926d up in Southbound
Oct  1 12:49:08 np0005464891 nova_compute[259907]: 2025-10-01 16:49:08.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.557 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3401e30b-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.557 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[cd1e4bc0-e4cf-45f4-b1a5-13fee8ce9408]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.559 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[96632d3c-6a60-475d-9e7d-feb3c5d3cfd7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:08 np0005464891 nova_compute[259907]: 2025-10-01 16:49:08.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:08 np0005464891 systemd-udevd[282170]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:49:08 np0005464891 systemd-machined[214891]: New machine qemu-10-instance-0000000a.
Oct  1 12:49:08 np0005464891 NetworkManager[44940]: <info>  [1759337348.5902] device (tap31dd65ea-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.587 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[99951817-23d5-47b5-a54b-35cc73daaa8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:08 np0005464891 NetworkManager[44940]: <info>  [1759337348.5913] device (tap31dd65ea-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 12:49:08 np0005464891 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.610 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[9300f9cb-9b28-40a6-b14e-c274edbdf4c5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.649 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[cce22218-0102-4158-af6b-01abc90379af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.654 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[8024e887-47d2-4132-ae17-662ad2b4e0e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:08 np0005464891 NetworkManager[44940]: <info>  [1759337348.6560] manager: (tap3401e30b-90): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.693 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[5baa7fab-9d52-4909-9717-f188d291a7f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.697 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[5c7cd124-3cb7-412c-9888-c37be36d3ede]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:08 np0005464891 NetworkManager[44940]: <info>  [1759337348.7235] device (tap3401e30b-90): carrier: link connected
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.729 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[a44808bb-6614-4d12-9d74-264ef5e61a9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.751 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[801a2f33-3455-48a5-a016-a821d03627e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3401e30b-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:b8:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429729, 'reachable_time': 29420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282203, 'error': None, 'target': 'ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.778 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf365fc-c760-465b-8a73-75c7b8d7a673]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:b811'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 429729, 'tstamp': 429729}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282204, 'error': None, 'target': 'ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.798 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[0151d173-0f3b-4b21-8ceb-b9f6e1edfc63]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3401e30b-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:b8:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429729, 'reachable_time': 29420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282205, 'error': None, 'target': 'ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.850 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[f734acbb-b338-4325-8563-b92efd877256]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1256: 321 pgs: 321 active+clean; 388 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.3 MiB/s wr, 172 op/s
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.934 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[749b0489-9f1b-4768-9232-816d7909de94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.935 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3401e30b-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.935 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.936 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3401e30b-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:49:08 np0005464891 nova_compute[259907]: 2025-10-01 16:49:08.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:08 np0005464891 NetworkManager[44940]: <info>  [1759337348.9399] manager: (tap3401e30b-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Oct  1 12:49:08 np0005464891 kernel: tap3401e30b-90: entered promiscuous mode
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.945 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3401e30b-90, col_values=(('external_ids', {'iface-id': '72585314-0d9f-4f28-bd98-a3592b2b3241'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:49:08 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:08Z|00090|binding|INFO|Releasing lport 72585314-0d9f-4f28-bd98-a3592b2b3241 from this chassis (sb_readonly=0)
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.984 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3401e30b-97c6-4012-a9d4-0114c56bacd5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3401e30b-97c6-4012-a9d4-0114c56bacd5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 12:49:08 np0005464891 nova_compute[259907]: 2025-10-01 16:49:08.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.986 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[eb632f72-de1c-4454-891f-d8ae65335f7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.987 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-3401e30b-97c6-4012-a9d4-0114c56bacd5
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/3401e30b-97c6-4012-a9d4-0114c56bacd5.pid.haproxy
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID 3401e30b-97c6-4012-a9d4-0114c56bacd5
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 12:49:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:08.988 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5', 'env', 'PROCESS_TAG=haproxy-3401e30b-97c6-4012-a9d4-0114c56bacd5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3401e30b-97c6-4012-a9d4-0114c56bacd5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.205 2 DEBUG nova.compute.manager [req-91e4176d-f069-455b-9c8c-7d812f66e04e req-f540d759-23c2-4293-a873-55e9f8680429 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Received event network-vif-plugged-31dd65ea-0bf2-4c61-a641-bff75a96926d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.207 2 DEBUG oslo_concurrency.lockutils [req-91e4176d-f069-455b-9c8c-7d812f66e04e req-f540d759-23c2-4293-a873-55e9f8680429 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "01833916-f84a-425e-b28f-d214922d3126-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.207 2 DEBUG oslo_concurrency.lockutils [req-91e4176d-f069-455b-9c8c-7d812f66e04e req-f540d759-23c2-4293-a873-55e9f8680429 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "01833916-f84a-425e-b28f-d214922d3126-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.208 2 DEBUG oslo_concurrency.lockutils [req-91e4176d-f069-455b-9c8c-7d812f66e04e req-f540d759-23c2-4293-a873-55e9f8680429 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "01833916-f84a-425e-b28f-d214922d3126-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.208 2 DEBUG nova.compute.manager [req-91e4176d-f069-455b-9c8c-7d812f66e04e req-f540d759-23c2-4293-a873-55e9f8680429 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Processing event network-vif-plugged-31dd65ea-0bf2-4c61-a641-bff75a96926d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 12:49:09 np0005464891 podman[282279]: 2025-10-01 16:49:09.501729816 +0000 UTC m=+0.139165387 container create 3d6d3484c7d2317d0ca610af34594a1d3acf261b4854016fd9ff82236ed51339 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:49:09 np0005464891 podman[282279]: 2025-10-01 16:49:09.40934684 +0000 UTC m=+0.046782411 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 12:49:09 np0005464891 systemd[1]: Started libpod-conmon-3d6d3484c7d2317d0ca610af34594a1d3acf261b4854016fd9ff82236ed51339.scope.
Oct  1 12:49:09 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:49:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86dcdadaf3ab277f3fa50b8d50f8b37e268df14c1557a500dfbd0f320497110e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 12:49:09 np0005464891 podman[282279]: 2025-10-01 16:49:09.625236201 +0000 UTC m=+0.262671762 container init 3d6d3484c7d2317d0ca610af34594a1d3acf261b4854016fd9ff82236ed51339 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:49:09 np0005464891 podman[282279]: 2025-10-01 16:49:09.631819782 +0000 UTC m=+0.269255343 container start 3d6d3484c7d2317d0ca610af34594a1d3acf261b4854016fd9ff82236ed51339 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:49:09 np0005464891 neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5[282294]: [NOTICE]   (282298) : New worker (282300) forked
Oct  1 12:49:09 np0005464891 neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5[282294]: [NOTICE]   (282298) : Loading success.
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.698 2 DEBUG nova.compute.manager [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.699 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337349.6983728, 01833916-f84a-425e-b28f-d214922d3126 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.700 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 01833916-f84a-425e-b28f-d214922d3126] VM Started (Lifecycle Event)#033[00m
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.704 2 DEBUG nova.virt.libvirt.driver [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.708 2 INFO nova.virt.libvirt.driver [-] [instance: 01833916-f84a-425e-b28f-d214922d3126] Instance spawned successfully.#033[00m
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.709 2 DEBUG nova.virt.libvirt.driver [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.727 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 01833916-f84a-425e-b28f-d214922d3126] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.734 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 01833916-f84a-425e-b28f-d214922d3126] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.738 2 DEBUG nova.virt.libvirt.driver [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.738 2 DEBUG nova.virt.libvirt.driver [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.739 2 DEBUG nova.virt.libvirt.driver [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.739 2 DEBUG nova.virt.libvirt.driver [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.740 2 DEBUG nova.virt.libvirt.driver [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.740 2 DEBUG nova.virt.libvirt.driver [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.762 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 01833916-f84a-425e-b28f-d214922d3126] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.763 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337349.6985312, 01833916-f84a-425e-b28f-d214922d3126 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.763 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 01833916-f84a-425e-b28f-d214922d3126] VM Paused (Lifecycle Event)
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.786 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 01833916-f84a-425e-b28f-d214922d3126] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.790 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337349.7039156, 01833916-f84a-425e-b28f-d214922d3126 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.790 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 01833916-f84a-425e-b28f-d214922d3126] VM Resumed (Lifecycle Event)
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.795 2 INFO nova.compute.manager [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Took 7.75 seconds to spawn the instance on the hypervisor.
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.796 2 DEBUG nova.compute.manager [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.807 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 01833916-f84a-425e-b28f-d214922d3126] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.810 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 01833916-f84a-425e-b28f-d214922d3126] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.838 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 01833916-f84a-425e-b28f-d214922d3126] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.853 2 INFO nova.compute.manager [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Took 8.76 seconds to build instance.
Oct  1 12:49:09 np0005464891 nova_compute[259907]: 2025-10-01 16:49:09.870 2 DEBUG oslo_concurrency.lockutils [None req-4fd17ad8-d61e-44a0-b50f-11800307fdff 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "01833916-f84a-425e-b28f-d214922d3126" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:49:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:49:10 np0005464891 nova_compute[259907]: 2025-10-01 16:49:10.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:49:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1257: 321 pgs: 321 active+clean; 392 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.1 MiB/s wr, 171 op/s
Oct  1 12:49:11 np0005464891 nova_compute[259907]: 2025-10-01 16:49:11.298 2 DEBUG nova.compute.manager [req-d1e616fa-e40b-456d-94ae-26bbbe79d3aa req-e392694f-72fe-4cb5-9d43-f74c96026fdd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Received event network-vif-plugged-31dd65ea-0bf2-4c61-a641-bff75a96926d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  1 12:49:11 np0005464891 nova_compute[259907]: 2025-10-01 16:49:11.298 2 DEBUG oslo_concurrency.lockutils [req-d1e616fa-e40b-456d-94ae-26bbbe79d3aa req-e392694f-72fe-4cb5-9d43-f74c96026fdd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "01833916-f84a-425e-b28f-d214922d3126-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:49:11 np0005464891 nova_compute[259907]: 2025-10-01 16:49:11.299 2 DEBUG oslo_concurrency.lockutils [req-d1e616fa-e40b-456d-94ae-26bbbe79d3aa req-e392694f-72fe-4cb5-9d43-f74c96026fdd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "01833916-f84a-425e-b28f-d214922d3126-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:49:11 np0005464891 nova_compute[259907]: 2025-10-01 16:49:11.299 2 DEBUG oslo_concurrency.lockutils [req-d1e616fa-e40b-456d-94ae-26bbbe79d3aa req-e392694f-72fe-4cb5-9d43-f74c96026fdd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "01833916-f84a-425e-b28f-d214922d3126-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:49:11 np0005464891 nova_compute[259907]: 2025-10-01 16:49:11.299 2 DEBUG nova.compute.manager [req-d1e616fa-e40b-456d-94ae-26bbbe79d3aa req-e392694f-72fe-4cb5-9d43-f74c96026fdd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] No waiting events found dispatching network-vif-plugged-31dd65ea-0bf2-4c61-a641-bff75a96926d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  1 12:49:11 np0005464891 nova_compute[259907]: 2025-10-01 16:49:11.299 2 WARNING nova.compute.manager [req-d1e616fa-e40b-456d-94ae-26bbbe79d3aa req-e392694f-72fe-4cb5-9d43-f74c96026fdd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Received unexpected event network-vif-plugged-31dd65ea-0bf2-4c61-a641-bff75a96926d for instance with vm_state active and task_state None.
Oct  1 12:49:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:49:12
Oct  1 12:49:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:49:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:49:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['volumes', '.mgr', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log']
Oct  1 12:49:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:49:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:49:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:49:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:49:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:49:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:49:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:49:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:49:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:49:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:49:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:49:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:49:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:49:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:49:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:49:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:49:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:49:12 np0005464891 nova_compute[259907]: 2025-10-01 16:49:12.402 2 DEBUG nova.compute.manager [req-2056c5b3-f1cc-4535-94da-6ac25ebe2223 req-a6f43c85-9bd1-4ae3-9946-24eb1ede7e47 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Received event network-changed-31dd65ea-0bf2-4c61-a641-bff75a96926d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  1 12:49:12 np0005464891 nova_compute[259907]: 2025-10-01 16:49:12.402 2 DEBUG nova.compute.manager [req-2056c5b3-f1cc-4535-94da-6ac25ebe2223 req-a6f43c85-9bd1-4ae3-9946-24eb1ede7e47 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Refreshing instance network info cache due to event network-changed-31dd65ea-0bf2-4c61-a641-bff75a96926d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  1 12:49:12 np0005464891 nova_compute[259907]: 2025-10-01 16:49:12.403 2 DEBUG oslo_concurrency.lockutils [req-2056c5b3-f1cc-4535-94da-6ac25ebe2223 req-a6f43c85-9bd1-4ae3-9946-24eb1ede7e47 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-01833916-f84a-425e-b28f-d214922d3126" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  1 12:49:12 np0005464891 nova_compute[259907]: 2025-10-01 16:49:12.403 2 DEBUG oslo_concurrency.lockutils [req-2056c5b3-f1cc-4535-94da-6ac25ebe2223 req-a6f43c85-9bd1-4ae3-9946-24eb1ede7e47 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-01833916-f84a-425e-b28f-d214922d3126" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  1 12:49:12 np0005464891 nova_compute[259907]: 2025-10-01 16:49:12.403 2 DEBUG nova.network.neutron [req-2056c5b3-f1cc-4535-94da-6ac25ebe2223 req-a6f43c85-9bd1-4ae3-9946-24eb1ede7e47 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Refreshing network info cache for port 31dd65ea-0bf2-4c61-a641-bff75a96926d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  1 12:49:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:12.451 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:49:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:12.452 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:49:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:12.453 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:49:12 np0005464891 nova_compute[259907]: 2025-10-01 16:49:12.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:49:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1258: 321 pgs: 321 active+clean; 392 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.3 MiB/s wr, 170 op/s
Oct  1 12:49:14 np0005464891 nova_compute[259907]: 2025-10-01 16:49:14.115 2 DEBUG nova.network.neutron [req-2056c5b3-f1cc-4535-94da-6ac25ebe2223 req-a6f43c85-9bd1-4ae3-9946-24eb1ede7e47 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Updated VIF entry in instance network info cache for port 31dd65ea-0bf2-4c61-a641-bff75a96926d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  1 12:49:14 np0005464891 nova_compute[259907]: 2025-10-01 16:49:14.116 2 DEBUG nova.network.neutron [req-2056c5b3-f1cc-4535-94da-6ac25ebe2223 req-a6f43c85-9bd1-4ae3-9946-24eb1ede7e47 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Updating instance_info_cache with network_info: [{"id": "31dd65ea-0bf2-4c61-a641-bff75a96926d", "address": "fa:16:3e:61:8b:e9", "network": {"id": "3401e30b-97c6-4012-a9d4-0114c56bacd5", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1585052793-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69d5fb4f7a0b4337a1b8774e04c97b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31dd65ea-0b", "ovs_interfaceid": "31dd65ea-0bf2-4c61-a641-bff75a96926d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  1 12:49:14 np0005464891 nova_compute[259907]: 2025-10-01 16:49:14.139 2 DEBUG oslo_concurrency.lockutils [req-2056c5b3-f1cc-4535-94da-6ac25ebe2223 req-a6f43c85-9bd1-4ae3-9946-24eb1ede7e47 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-01833916-f84a-425e-b28f-d214922d3126" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  1 12:49:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 392 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.4 MiB/s wr, 150 op/s
Oct  1 12:49:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:49:15 np0005464891 nova_compute[259907]: 2025-10-01 16:49:15.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:49:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1260: 321 pgs: 321 active+clean; 392 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 597 KiB/s wr, 118 op/s
Oct  1 12:49:17 np0005464891 nova_compute[259907]: 2025-10-01 16:49:17.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:49:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 392 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 80 KiB/s wr, 84 op/s
Oct  1 12:49:20 np0005464891 podman[282312]: 2025-10-01 16:49:20.001316598 +0000 UTC m=+0.094118386 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  1 12:49:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:49:20 np0005464891 nova_compute[259907]: 2025-10-01 16:49:20.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:49:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 392 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 67 KiB/s wr, 75 op/s
Oct  1 12:49:21 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:21Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:61:8b:e9 10.100.0.10
Oct  1 12:49:21 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:21Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:61:8b:e9 10.100.0.10
Oct  1 12:49:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:49:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:49:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:49:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:49:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001973346845086931 of space, bias 1.0, pg target 0.5920040535260793 quantized to 32 (current 32)
Oct  1 12:49:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:49:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003799544792277137 of space, bias 1.0, pg target 0.11398634376831411 quantized to 32 (current 32)
Oct  1 12:49:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:49:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 12:49:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:49:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014239231295174839 of space, bias 1.0, pg target 0.42717693885524516 quantized to 32 (current 32)
Oct  1 12:49:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:49:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:49:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:49:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:49:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:49:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:49:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:49:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:49:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:49:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:49:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:49:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:49:22.090717) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337362090765, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1683, "num_deletes": 508, "total_data_size": 1940155, "memory_usage": 1972352, "flush_reason": "Manual Compaction"}
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337362107908, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1664551, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24526, "largest_seqno": 26208, "table_properties": {"data_size": 1657655, "index_size": 3394, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19366, "raw_average_key_size": 20, "raw_value_size": 1641274, "raw_average_value_size": 1718, "num_data_blocks": 150, "num_entries": 955, "num_filter_entries": 955, "num_deletions": 508, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759337256, "oldest_key_time": 1759337256, "file_creation_time": 1759337362, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 17245 microseconds, and 8231 cpu microseconds.
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:49:22.107962) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1664551 bytes OK
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:49:22.107988) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:49:22.112790) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:49:22.112813) EVENT_LOG_v1 {"time_micros": 1759337362112806, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:49:22.112834) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1931646, prev total WAL file size 1931646, number of live WAL files 2.
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:49:22.114252) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1625KB)], [56(10MB)]
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337362114388, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12188819, "oldest_snapshot_seqno": -1}
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4981 keys, 7384995 bytes, temperature: kUnknown
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337362194835, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7384995, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7350932, "index_size": 20513, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 125256, "raw_average_key_size": 25, "raw_value_size": 7260169, "raw_average_value_size": 1457, "num_data_blocks": 836, "num_entries": 4981, "num_filter_entries": 4981, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759337362, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:49:22.195168) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7384995 bytes
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:49:22.201108) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.3 rd, 91.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 10.0 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(11.8) write-amplify(4.4) OK, records in: 5991, records dropped: 1010 output_compression: NoCompression
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:49:22.201152) EVENT_LOG_v1 {"time_micros": 1759337362201130, "job": 30, "event": "compaction_finished", "compaction_time_micros": 80548, "compaction_time_cpu_micros": 38997, "output_level": 6, "num_output_files": 1, "total_output_size": 7384995, "num_input_records": 5991, "num_output_records": 4981, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337362201915, "job": 30, "event": "table_file_deletion", "file_number": 58}
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337362205378, "job": 30, "event": "table_file_deletion", "file_number": 56}
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:49:22.113965) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:49:22.205504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:49:22.205512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:49:22.205515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:49:22.205519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:49:22 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:49:22.205521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:49:22 np0005464891 nova_compute[259907]: 2025-10-01 16:49:22.829 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:49:22 np0005464891 nova_compute[259907]: 2025-10-01 16:49:22.829 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 12:49:22 np0005464891 nova_compute[259907]: 2025-10-01 16:49:22.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1263: 321 pgs: 321 active+clean; 406 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 782 KiB/s wr, 92 op/s
Oct  1 12:49:23 np0005464891 nova_compute[259907]: 2025-10-01 16:49:23.568 2 DEBUG oslo_concurrency.lockutils [None req-3677f9c4-cad6-4979-9db1-728f19785bdd 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquiring lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:23 np0005464891 nova_compute[259907]: 2025-10-01 16:49:23.568 2 DEBUG oslo_concurrency.lockutils [None req-3677f9c4-cad6-4979-9db1-728f19785bdd 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:23 np0005464891 nova_compute[259907]: 2025-10-01 16:49:23.587 2 DEBUG nova.objects.instance [None req-3677f9c4-cad6-4979-9db1-728f19785bdd 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lazy-loading 'flavor' on Instance uuid b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:49:23 np0005464891 nova_compute[259907]: 2025-10-01 16:49:23.620 2 DEBUG oslo_concurrency.lockutils [None req-3677f9c4-cad6-4979-9db1-728f19785bdd 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:23 np0005464891 nova_compute[259907]: 2025-10-01 16:49:23.800 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:49:23 np0005464891 nova_compute[259907]: 2025-10-01 16:49:23.846 2 DEBUG oslo_concurrency.lockutils [None req-3677f9c4-cad6-4979-9db1-728f19785bdd 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquiring lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:23 np0005464891 nova_compute[259907]: 2025-10-01 16:49:23.847 2 DEBUG oslo_concurrency.lockutils [None req-3677f9c4-cad6-4979-9db1-728f19785bdd 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:23 np0005464891 nova_compute[259907]: 2025-10-01 16:49:23.847 2 INFO nova.compute.manager [None req-3677f9c4-cad6-4979-9db1-728f19785bdd 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Attaching volume b6488e97-078a-41d3-bed0-e2d4b3f0997e to /dev/vdb#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.002 2 DEBUG os_brick.utils [None req-3677f9c4-cad6-4979-9db1-728f19785bdd 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.004 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.018 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.018 741 DEBUG oslo.privsep.daemon [-] privsep: reply[2853adcf-67d5-4bc7-b082-8d6e00e5c88a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.020 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.027 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.028 741 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e2c3a7-668c-4081-8469-d2255aaada78]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.029 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.038 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.039 741 DEBUG oslo.privsep.daemon [-] privsep: reply[12dff133-3882-445f-babc-f6855fa76175]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.040 741 DEBUG oslo.privsep.daemon [-] privsep: reply[1a45f294-6deb-443c-afa8-2f4c4ff204f9]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.041 2 DEBUG oslo_concurrency.processutils [None req-3677f9c4-cad6-4979-9db1-728f19785bdd 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.072 2 DEBUG oslo_concurrency.processutils [None req-3677f9c4-cad6-4979-9db1-728f19785bdd 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.075 2 DEBUG os_brick.initiator.connectors.lightos [None req-3677f9c4-cad6-4979-9db1-728f19785bdd 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.076 2 DEBUG os_brick.initiator.connectors.lightos [None req-3677f9c4-cad6-4979-9db1-728f19785bdd 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.076 2 DEBUG os_brick.initiator.connectors.lightos [None req-3677f9c4-cad6-4979-9db1-728f19785bdd 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.077 2 DEBUG os_brick.utils [None req-3677f9c4-cad6-4979-9db1-728f19785bdd 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] <== get_connector_properties: return (74ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.077 2 DEBUG nova.virt.block_device [None req-3677f9c4-cad6-4979-9db1-728f19785bdd 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Updating existing volume attachment record: f5f36dd0-fb57-4572-ad41-bca7c4b9580c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.284 2 DEBUG oslo_concurrency.lockutils [None req-0581e475-fe27-4580-96ae-3bc2645233a2 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquiring lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.284 2 DEBUG oslo_concurrency.lockutils [None req-0581e475-fe27-4580-96ae-3bc2645233a2 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.306 2 DEBUG nova.objects.instance [None req-0581e475-fe27-4580-96ae-3bc2645233a2 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lazy-loading 'flavor' on Instance uuid 347eacbc-b9bd-4163-bc2e-a49a19a833c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.331 2 INFO nova.virt.libvirt.driver [None req-0581e475-fe27-4580-96ae-3bc2645233a2 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Ignoring supplied device name: /dev/vdb#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.347 2 DEBUG oslo_concurrency.lockutils [None req-0581e475-fe27-4580-96ae-3bc2645233a2 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.544 2 DEBUG oslo_concurrency.lockutils [None req-0581e475-fe27-4580-96ae-3bc2645233a2 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquiring lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.544 2 DEBUG oslo_concurrency.lockutils [None req-0581e475-fe27-4580-96ae-3bc2645233a2 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.544 2 INFO nova.compute.manager [None req-0581e475-fe27-4580-96ae-3bc2645233a2 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Attaching volume 66560206-6c70-4a2c-8504-a5aebf3ee561 to /dev/vdb#033[00m
Oct  1 12:49:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:49:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/772668130' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.684 2 DEBUG os_brick.utils [None req-0581e475-fe27-4580-96ae-3bc2645233a2 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.686 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.697 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.697 741 DEBUG oslo.privsep.daemon [-] privsep: reply[4abec567-09da-42a5-a71e-c114ff3a4376]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.698 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.706 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.707 741 DEBUG oslo.privsep.daemon [-] privsep: reply[ae7c9c49-f897-44a9-8fbd-e57f15a7d5c2]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.708 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.717 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.718 741 DEBUG oslo.privsep.daemon [-] privsep: reply[44532adb-d554-4bff-851b-a0bde1f8b130]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.718 741 DEBUG oslo.privsep.daemon [-] privsep: reply[480e0876-cc60-48f8-9425-8c591f49f26d]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.719 2 DEBUG oslo_concurrency.processutils [None req-0581e475-fe27-4580-96ae-3bc2645233a2 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.750 2 DEBUG oslo_concurrency.processutils [None req-0581e475-fe27-4580-96ae-3bc2645233a2 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.754 2 DEBUG os_brick.initiator.connectors.lightos [None req-0581e475-fe27-4580-96ae-3bc2645233a2 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.755 2 DEBUG os_brick.initiator.connectors.lightos [None req-0581e475-fe27-4580-96ae-3bc2645233a2 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.755 2 DEBUG os_brick.initiator.connectors.lightos [None req-0581e475-fe27-4580-96ae-3bc2645233a2 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.756 2 DEBUG os_brick.utils [None req-0581e475-fe27-4580-96ae-3bc2645233a2 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.757 2 DEBUG nova.virt.block_device [None req-0581e475-fe27-4580-96ae-3bc2645233a2 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Updating existing volume attachment record: 2242e5e7-dcf7-4fa1-a1b6-d474ad3d2329 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.771 2 DEBUG nova.objects.instance [None req-3677f9c4-cad6-4979-9db1-728f19785bdd 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lazy-loading 'flavor' on Instance uuid b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.793 2 DEBUG nova.virt.libvirt.driver [None req-3677f9c4-cad6-4979-9db1-728f19785bdd 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Attempting to attach volume b6488e97-078a-41d3-bed0-e2d4b3f0997e with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.796 2 DEBUG nova.virt.libvirt.guest [None req-3677f9c4-cad6-4979-9db1-728f19785bdd 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] attach device xml: <disk type="network" device="disk">
Oct  1 12:49:24 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:49:24 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-b6488e97-078a-41d3-bed0-e2d4b3f0997e">
Oct  1 12:49:24 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:49:24 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:49:24 np0005464891 nova_compute[259907]:  <auth username="openstack">
Oct  1 12:49:24 np0005464891 nova_compute[259907]:    <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:49:24 np0005464891 nova_compute[259907]:  </auth>
Oct  1 12:49:24 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:49:24 np0005464891 nova_compute[259907]:  <serial>b6488e97-078a-41d3-bed0-e2d4b3f0997e</serial>
Oct  1 12:49:24 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:49:24 np0005464891 nova_compute[259907]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.806 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.922 2 DEBUG nova.virt.libvirt.driver [None req-3677f9c4-cad6-4979-9db1-728f19785bdd 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.923 2 DEBUG nova.virt.libvirt.driver [None req-3677f9c4-cad6-4979-9db1-728f19785bdd 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.923 2 DEBUG nova.virt.libvirt.driver [None req-3677f9c4-cad6-4979-9db1-728f19785bdd 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:49:24 np0005464891 nova_compute[259907]: 2025-10-01 16:49:24.923 2 DEBUG nova.virt.libvirt.driver [None req-3677f9c4-cad6-4979-9db1-728f19785bdd 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] No VIF found with MAC fa:16:3e:73:57:c3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:49:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 412 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.3 MiB/s wr, 92 op/s
Oct  1 12:49:25 np0005464891 nova_compute[259907]: 2025-10-01 16:49:25.120 2 DEBUG oslo_concurrency.lockutils [None req-3677f9c4-cad6-4979-9db1-728f19785bdd 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:49:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4096285716' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:49:25 np0005464891 nova_compute[259907]: 2025-10-01 16:49:25.460 2 DEBUG nova.objects.instance [None req-0581e475-fe27-4580-96ae-3bc2645233a2 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lazy-loading 'flavor' on Instance uuid 347eacbc-b9bd-4163-bc2e-a49a19a833c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:49:25 np0005464891 nova_compute[259907]: 2025-10-01 16:49:25.479 2 DEBUG nova.virt.libvirt.driver [None req-0581e475-fe27-4580-96ae-3bc2645233a2 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Attempting to attach volume 66560206-6c70-4a2c-8504-a5aebf3ee561 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  1 12:49:25 np0005464891 nova_compute[259907]: 2025-10-01 16:49:25.481 2 DEBUG nova.virt.libvirt.guest [None req-0581e475-fe27-4580-96ae-3bc2645233a2 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] attach device xml: <disk type="network" device="disk">
Oct  1 12:49:25 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:49:25 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-66560206-6c70-4a2c-8504-a5aebf3ee561">
Oct  1 12:49:25 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:49:25 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:49:25 np0005464891 nova_compute[259907]:  <auth username="openstack">
Oct  1 12:49:25 np0005464891 nova_compute[259907]:    <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:49:25 np0005464891 nova_compute[259907]:  </auth>
Oct  1 12:49:25 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:49:25 np0005464891 nova_compute[259907]:  <serial>66560206-6c70-4a2c-8504-a5aebf3ee561</serial>
Oct  1 12:49:25 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:49:25 np0005464891 nova_compute[259907]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  1 12:49:25 np0005464891 nova_compute[259907]: 2025-10-01 16:49:25.598 2 DEBUG nova.virt.libvirt.driver [None req-0581e475-fe27-4580-96ae-3bc2645233a2 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:49:25 np0005464891 nova_compute[259907]: 2025-10-01 16:49:25.598 2 DEBUG nova.virt.libvirt.driver [None req-0581e475-fe27-4580-96ae-3bc2645233a2 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:49:25 np0005464891 nova_compute[259907]: 2025-10-01 16:49:25.598 2 DEBUG nova.virt.libvirt.driver [None req-0581e475-fe27-4580-96ae-3bc2645233a2 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:49:25 np0005464891 nova_compute[259907]: 2025-10-01 16:49:25.598 2 DEBUG nova.virt.libvirt.driver [None req-0581e475-fe27-4580-96ae-3bc2645233a2 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] No VIF found with MAC fa:16:3e:03:7f:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:49:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:49:25 np0005464891 nova_compute[259907]: 2025-10-01 16:49:25.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:25 np0005464891 nova_compute[259907]: 2025-10-01 16:49:25.785 2 DEBUG oslo_concurrency.lockutils [None req-0581e475-fe27-4580-96ae-3bc2645233a2 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:25 np0005464891 nova_compute[259907]: 2025-10-01 16:49:25.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:49:25 np0005464891 nova_compute[259907]: 2025-10-01 16:49:25.804 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 12:49:25 np0005464891 nova_compute[259907]: 2025-10-01 16:49:25.837 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 12:49:25 np0005464891 nova_compute[259907]: 2025-10-01 16:49:25.838 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:49:25 np0005464891 nova_compute[259907]: 2025-10-01 16:49:25.859 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:25 np0005464891 nova_compute[259907]: 2025-10-01 16:49:25.860 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:25 np0005464891 nova_compute[259907]: 2025-10-01 16:49:25.861 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:25 np0005464891 nova_compute[259907]: 2025-10-01 16:49:25.861 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 12:49:25 np0005464891 nova_compute[259907]: 2025-10-01 16:49:25.862 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:49:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:49:26 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3064662387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:49:26 np0005464891 nova_compute[259907]: 2025-10-01 16:49:26.343 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:49:26 np0005464891 nova_compute[259907]: 2025-10-01 16:49:26.436 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:49:26 np0005464891 nova_compute[259907]: 2025-10-01 16:49:26.436 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:49:26 np0005464891 nova_compute[259907]: 2025-10-01 16:49:26.442 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:49:26 np0005464891 nova_compute[259907]: 2025-10-01 16:49:26.442 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:49:26 np0005464891 nova_compute[259907]: 2025-10-01 16:49:26.448 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:49:26 np0005464891 nova_compute[259907]: 2025-10-01 16:49:26.448 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:49:26 np0005464891 nova_compute[259907]: 2025-10-01 16:49:26.449 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:49:26 np0005464891 nova_compute[259907]: 2025-10-01 16:49:26.455 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:49:26 np0005464891 nova_compute[259907]: 2025-10-01 16:49:26.456 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:49:26 np0005464891 nova_compute[259907]: 2025-10-01 16:49:26.456 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:49:26 np0005464891 nova_compute[259907]: 2025-10-01 16:49:26.665 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:49:26 np0005464891 nova_compute[259907]: 2025-10-01 16:49:26.666 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3735MB free_disk=59.85560607910156GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 12:49:26 np0005464891 nova_compute[259907]: 2025-10-01 16:49:26.666 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:26 np0005464891 nova_compute[259907]: 2025-10-01 16:49:26.667 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:26 np0005464891 nova_compute[259907]: 2025-10-01 16:49:26.742 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 12:49:26 np0005464891 nova_compute[259907]: 2025-10-01 16:49:26.743 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 12:49:26 np0005464891 nova_compute[259907]: 2025-10-01 16:49:26.743 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance 347eacbc-b9bd-4163-bc2e-a49a19a833c3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 12:49:26 np0005464891 nova_compute[259907]: 2025-10-01 16:49:26.744 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance 01833916-f84a-425e-b28f-d214922d3126 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 12:49:26 np0005464891 nova_compute[259907]: 2025-10-01 16:49:26.745 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 12:49:26 np0005464891 nova_compute[259907]: 2025-10-01 16:49:26.745 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 12:49:26 np0005464891 nova_compute[259907]: 2025-10-01 16:49:26.821 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:49:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1265: 321 pgs: 321 active+clean; 425 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 441 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Oct  1 12:49:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:49:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1892449552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:49:27 np0005464891 nova_compute[259907]: 2025-10-01 16:49:27.347 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:49:27 np0005464891 nova_compute[259907]: 2025-10-01 16:49:27.356 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:49:27 np0005464891 nova_compute[259907]: 2025-10-01 16:49:27.372 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:49:27 np0005464891 nova_compute[259907]: 2025-10-01 16:49:27.416 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 12:49:27 np0005464891 nova_compute[259907]: 2025-10-01 16:49:27.416 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:49:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/227444105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:49:27 np0005464891 nova_compute[259907]: 2025-10-01 16:49:27.770 2 DEBUG oslo_concurrency.lockutils [None req-1f63db85-a7e7-463e-bc72-b6e55220c54b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquiring lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:27 np0005464891 nova_compute[259907]: 2025-10-01 16:49:27.770 2 DEBUG oslo_concurrency.lockutils [None req-1f63db85-a7e7-463e-bc72-b6e55220c54b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:27 np0005464891 nova_compute[259907]: 2025-10-01 16:49:27.814 2 INFO nova.compute.manager [None req-1f63db85-a7e7-463e-bc72-b6e55220c54b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Detaching volume b6488e97-078a-41d3-bed0-e2d4b3f0997e#033[00m
Oct  1 12:49:27 np0005464891 nova_compute[259907]: 2025-10-01 16:49:27.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:27 np0005464891 nova_compute[259907]: 2025-10-01 16:49:27.881 2 DEBUG oslo_concurrency.lockutils [None req-7274ddb8-7cde-4704-b615-c418af586f65 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquiring lock "01833916-f84a-425e-b28f-d214922d3126" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:27 np0005464891 nova_compute[259907]: 2025-10-01 16:49:27.882 2 DEBUG oslo_concurrency.lockutils [None req-7274ddb8-7cde-4704-b615-c418af586f65 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "01833916-f84a-425e-b28f-d214922d3126" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:27 np0005464891 nova_compute[259907]: 2025-10-01 16:49:27.902 2 DEBUG nova.objects.instance [None req-7274ddb8-7cde-4704-b615-c418af586f65 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lazy-loading 'flavor' on Instance uuid 01833916-f84a-425e-b28f-d214922d3126 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:49:27 np0005464891 nova_compute[259907]: 2025-10-01 16:49:27.929 2 INFO nova.virt.libvirt.driver [None req-7274ddb8-7cde-4704-b615-c418af586f65 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Ignoring supplied device name: /dev/vdb#033[00m
Oct  1 12:49:27 np0005464891 nova_compute[259907]: 2025-10-01 16:49:27.946 2 INFO nova.virt.block_device [None req-1f63db85-a7e7-463e-bc72-b6e55220c54b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Attempting to driver detach volume b6488e97-078a-41d3-bed0-e2d4b3f0997e from mountpoint /dev/vdb#033[00m
Oct  1 12:49:27 np0005464891 nova_compute[259907]: 2025-10-01 16:49:27.948 2 DEBUG oslo_concurrency.lockutils [None req-7274ddb8-7cde-4704-b615-c418af586f65 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "01833916-f84a-425e-b28f-d214922d3126" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:27 np0005464891 nova_compute[259907]: 2025-10-01 16:49:27.956 2 DEBUG nova.virt.libvirt.driver [None req-1f63db85-a7e7-463e-bc72-b6e55220c54b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Attempting to detach device vdb from instance b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  1 12:49:27 np0005464891 nova_compute[259907]: 2025-10-01 16:49:27.956 2 DEBUG nova.virt.libvirt.guest [None req-1f63db85-a7e7-463e-bc72-b6e55220c54b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:49:27 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:49:27 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-b6488e97-078a-41d3-bed0-e2d4b3f0997e">
Oct  1 12:49:27 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:49:27 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:49:27 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:49:27 np0005464891 nova_compute[259907]:  <serial>b6488e97-078a-41d3-bed0-e2d4b3f0997e</serial>
Oct  1 12:49:27 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:49:27 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:49:27 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:49:27 np0005464891 nova_compute[259907]: 2025-10-01 16:49:27.962 2 INFO nova.virt.libvirt.driver [None req-1f63db85-a7e7-463e-bc72-b6e55220c54b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Successfully detached device vdb from instance b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183 from the persistent domain config.#033[00m
Oct  1 12:49:27 np0005464891 nova_compute[259907]: 2025-10-01 16:49:27.962 2 DEBUG nova.virt.libvirt.driver [None req-1f63db85-a7e7-463e-bc72-b6e55220c54b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  1 12:49:27 np0005464891 nova_compute[259907]: 2025-10-01 16:49:27.962 2 DEBUG nova.virt.libvirt.guest [None req-1f63db85-a7e7-463e-bc72-b6e55220c54b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:49:27 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:49:27 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-b6488e97-078a-41d3-bed0-e2d4b3f0997e">
Oct  1 12:49:27 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:49:27 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:49:27 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:49:27 np0005464891 nova_compute[259907]:  <serial>b6488e97-078a-41d3-bed0-e2d4b3f0997e</serial>
Oct  1 12:49:27 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:49:27 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:49:27 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.074 2 DEBUG nova.virt.libvirt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Received event <DeviceRemovedEvent: 1759337368.0738182, b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.076 2 DEBUG nova.virt.libvirt.driver [None req-1f63db85-a7e7-463e-bc72-b6e55220c54b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.078 2 INFO nova.virt.libvirt.driver [None req-1f63db85-a7e7-463e-bc72-b6e55220c54b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Successfully detached device vdb from instance b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183 from the live domain config.#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.132 2 DEBUG oslo_concurrency.lockutils [None req-7274ddb8-7cde-4704-b615-c418af586f65 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquiring lock "01833916-f84a-425e-b28f-d214922d3126" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.133 2 DEBUG oslo_concurrency.lockutils [None req-7274ddb8-7cde-4704-b615-c418af586f65 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "01833916-f84a-425e-b28f-d214922d3126" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.134 2 INFO nova.compute.manager [None req-7274ddb8-7cde-4704-b615-c418af586f65 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Attaching volume 42fb3cce-4d4c-44d5-b9f6-4892ebb7b5a4 to /dev/vdb#033[00m
Oct  1 12:49:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Oct  1 12:49:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Oct  1 12:49:28 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.252 2 DEBUG os_brick.utils [None req-7274ddb8-7cde-4704-b615-c418af586f65 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.253 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.255 2 DEBUG nova.objects.instance [None req-1f63db85-a7e7-463e-bc72-b6e55220c54b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lazy-loading 'flavor' on Instance uuid b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.267 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.267 741 DEBUG oslo.privsep.daemon [-] privsep: reply[dad41060-b992-4baa-b733-138b63e1fad6]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.268 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.276 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.276 741 DEBUG oslo.privsep.daemon [-] privsep: reply[bd5e52b8-9100-4fdb-a27d-782450ebf62b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.278 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.287 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.287 741 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb25bdf-31d0-4dba-b7f6-1b6311ee49a9]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.288 741 DEBUG oslo.privsep.daemon [-] privsep: reply[2f545eff-0f21-4aa5-83fb-4dd2020d7f48]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.289 2 DEBUG oslo_concurrency.processutils [None req-7274ddb8-7cde-4704-b615-c418af586f65 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.312 2 DEBUG oslo_concurrency.lockutils [None req-1f63db85-a7e7-463e-bc72-b6e55220c54b 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.315 2 DEBUG oslo_concurrency.processutils [None req-7274ddb8-7cde-4704-b615-c418af586f65 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.317 2 DEBUG os_brick.initiator.connectors.lightos [None req-7274ddb8-7cde-4704-b615-c418af586f65 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.317 2 DEBUG os_brick.initiator.connectors.lightos [None req-7274ddb8-7cde-4704-b615-c418af586f65 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.318 2 DEBUG os_brick.initiator.connectors.lightos [None req-7274ddb8-7cde-4704-b615-c418af586f65 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.318 2 DEBUG os_brick.utils [None req-7274ddb8-7cde-4704-b615-c418af586f65 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] <== get_connector_properties: return (64ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.318 2 DEBUG nova.virt.block_device [None req-7274ddb8-7cde-4704-b615-c418af586f65 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Updating existing volume attachment record: 476ac6e2-50e3-41b4-8b89-1339f0d9052d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.383 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.384 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.800 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:49:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:49:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3677154773' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:49:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1267: 321 pgs: 321 active+clean; 427 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 689 KiB/s rd, 2.8 MiB/s wr, 87 op/s
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.960 2 DEBUG nova.objects.instance [None req-7274ddb8-7cde-4704-b615-c418af586f65 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lazy-loading 'flavor' on Instance uuid 01833916-f84a-425e-b28f-d214922d3126 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.983 2 DEBUG nova.virt.libvirt.driver [None req-7274ddb8-7cde-4704-b615-c418af586f65 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Attempting to attach volume 42fb3cce-4d4c-44d5-b9f6-4892ebb7b5a4 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  1 12:49:28 np0005464891 nova_compute[259907]: 2025-10-01 16:49:28.986 2 DEBUG nova.virt.libvirt.guest [None req-7274ddb8-7cde-4704-b615-c418af586f65 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] attach device xml: <disk type="network" device="disk">
Oct  1 12:49:28 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:49:28 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-42fb3cce-4d4c-44d5-b9f6-4892ebb7b5a4">
Oct  1 12:49:28 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:49:28 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:49:28 np0005464891 nova_compute[259907]:  <auth username="openstack">
Oct  1 12:49:28 np0005464891 nova_compute[259907]:    <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:49:28 np0005464891 nova_compute[259907]:  </auth>
Oct  1 12:49:28 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:49:28 np0005464891 nova_compute[259907]:  <serial>42fb3cce-4d4c-44d5-b9f6-4892ebb7b5a4</serial>
Oct  1 12:49:28 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:49:28 np0005464891 nova_compute[259907]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  1 12:49:29 np0005464891 nova_compute[259907]: 2025-10-01 16:49:29.086 2 DEBUG nova.virt.libvirt.driver [None req-7274ddb8-7cde-4704-b615-c418af586f65 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:49:29 np0005464891 nova_compute[259907]: 2025-10-01 16:49:29.087 2 DEBUG nova.virt.libvirt.driver [None req-7274ddb8-7cde-4704-b615-c418af586f65 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:49:29 np0005464891 nova_compute[259907]: 2025-10-01 16:49:29.087 2 DEBUG nova.virt.libvirt.driver [None req-7274ddb8-7cde-4704-b615-c418af586f65 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:49:29 np0005464891 nova_compute[259907]: 2025-10-01 16:49:29.087 2 DEBUG nova.virt.libvirt.driver [None req-7274ddb8-7cde-4704-b615-c418af586f65 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] No VIF found with MAC fa:16:3e:61:8b:e9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:49:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Oct  1 12:49:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Oct  1 12:49:29 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Oct  1 12:49:29 np0005464891 nova_compute[259907]: 2025-10-01 16:49:29.441 2 DEBUG oslo_concurrency.lockutils [None req-7274ddb8-7cde-4704-b615-c418af586f65 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "01833916-f84a-425e-b28f-d214922d3126" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:29 np0005464891 nova_compute[259907]: 2025-10-01 16:49:29.743 2 DEBUG nova.compute.manager [req-a036d4a7-92ab-4f6b-b51c-14b7f049c1bc req-064f5acd-0704-4e96-8027-20ce7d355c0e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Received event network-changed-5ef93ed9-65fa-4d0e-a510-20023ab7144f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:49:29 np0005464891 nova_compute[259907]: 2025-10-01 16:49:29.743 2 DEBUG nova.compute.manager [req-a036d4a7-92ab-4f6b-b51c-14b7f049c1bc req-064f5acd-0704-4e96-8027-20ce7d355c0e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Refreshing instance network info cache due to event network-changed-5ef93ed9-65fa-4d0e-a510-20023ab7144f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:49:29 np0005464891 nova_compute[259907]: 2025-10-01 16:49:29.744 2 DEBUG oslo_concurrency.lockutils [req-a036d4a7-92ab-4f6b-b51c-14b7f049c1bc req-064f5acd-0704-4e96-8027-20ce7d355c0e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:49:29 np0005464891 nova_compute[259907]: 2025-10-01 16:49:29.744 2 DEBUG oslo_concurrency.lockutils [req-a036d4a7-92ab-4f6b-b51c-14b7f049c1bc req-064f5acd-0704-4e96-8027-20ce7d355c0e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:49:29 np0005464891 nova_compute[259907]: 2025-10-01 16:49:29.745 2 DEBUG nova.network.neutron [req-a036d4a7-92ab-4f6b-b51c-14b7f049c1bc req-064f5acd-0704-4e96-8027-20ce7d355c0e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Refreshing network info cache for port 5ef93ed9-65fa-4d0e-a510-20023ab7144f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:49:29 np0005464891 nova_compute[259907]: 2025-10-01 16:49:29.803 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:49:29 np0005464891 nova_compute[259907]: 2025-10-01 16:49:29.808 2 DEBUG oslo_concurrency.lockutils [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquiring lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:29 np0005464891 nova_compute[259907]: 2025-10-01 16:49:29.809 2 DEBUG oslo_concurrency.lockutils [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:29 np0005464891 nova_compute[259907]: 2025-10-01 16:49:29.809 2 DEBUG oslo_concurrency.lockutils [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquiring lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:29 np0005464891 nova_compute[259907]: 2025-10-01 16:49:29.810 2 DEBUG oslo_concurrency.lockutils [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:29 np0005464891 nova_compute[259907]: 2025-10-01 16:49:29.810 2 DEBUG oslo_concurrency.lockutils [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:29 np0005464891 nova_compute[259907]: 2025-10-01 16:49:29.812 2 INFO nova.compute.manager [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Terminating instance#033[00m
Oct  1 12:49:29 np0005464891 nova_compute[259907]: 2025-10-01 16:49:29.813 2 DEBUG nova.compute.manager [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 12:49:29 np0005464891 kernel: tap5ef93ed9-65 (unregistering): left promiscuous mode
Oct  1 12:49:29 np0005464891 NetworkManager[44940]: <info>  [1759337369.8690] device (tap5ef93ed9-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 12:49:29 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:29Z|00091|binding|INFO|Releasing lport 5ef93ed9-65fa-4d0e-a510-20023ab7144f from this chassis (sb_readonly=0)
Oct  1 12:49:29 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:29Z|00092|binding|INFO|Setting lport 5ef93ed9-65fa-4d0e-a510-20023ab7144f down in Southbound
Oct  1 12:49:29 np0005464891 nova_compute[259907]: 2025-10-01 16:49:29.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:29 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:29Z|00093|binding|INFO|Removing iface tap5ef93ed9-65 ovn-installed in OVS
Oct  1 12:49:29 np0005464891 nova_compute[259907]: 2025-10-01 16:49:29.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:29 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:29.887 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:57:c3 10.100.0.5'], port_security=['fa:16:3e:73:57:c3 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f395084b84f48d182c3be9d7961475e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a473cde3-a378-4504-81c4-9d8fada1bc14', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a03153c4-51cb-49a4-a16a-ed6a97c8c003, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=5ef93ed9-65fa-4d0e-a510-20023ab7144f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:49:29 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:29.889 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 5ef93ed9-65fa-4d0e-a510-20023ab7144f in datapath 0b8d6144-4eec-41cd-aaa9-d3e718f03c5c unbound from our chassis#033[00m
Oct  1 12:49:29 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:29.892 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0b8d6144-4eec-41cd-aaa9-d3e718f03c5c#033[00m
Oct  1 12:49:29 np0005464891 nova_compute[259907]: 2025-10-01 16:49:29.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:29 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:29.914 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[631a5de8-5379-4b89-98af-aac460ebbf94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:29 np0005464891 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Oct  1 12:49:29 np0005464891 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 16.011s CPU time.
Oct  1 12:49:29 np0005464891 systemd-machined[214891]: Machine qemu-8-instance-00000008 terminated.
Oct  1 12:49:29 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:29.954 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[f40a00b7-d1c9-47a5-9489-1ff1e896043d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:29 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:29.957 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[b52ad306-e886-473b-916f-a4efb5afb2f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:29 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:29.986 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[3f43ad92-628f-49f9-8fea-382dfa078b32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:30 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:30.000 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[b2bb820d-e78f-490b-b6d3-6f42db9fe419]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0b8d6144-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:55:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423083, 'reachable_time': 37210, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282477, 'error': None, 'target': 'ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:30 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:30.020 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[db43d655-3c47-4cda-a14e-78124bfb1884]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0b8d6144-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 423100, 'tstamp': 423100}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282478, 'error': None, 'target': 'ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0b8d6144-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 423105, 'tstamp': 423105}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282478, 'error': None, 'target': 'ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:30 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:30.023 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b8d6144-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:49:30 np0005464891 nova_compute[259907]: 2025-10-01 16:49:30.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:30 np0005464891 nova_compute[259907]: 2025-10-01 16:49:30.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:30 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:30.033 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b8d6144-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:49:30 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:30.036 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:49:30 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:30.037 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0b8d6144-40, col_values=(('external_ids', {'iface-id': 'c2ef6608-b2db-40dc-8fde-a94b501b7f75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:49:30 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:30.038 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:49:30 np0005464891 nova_compute[259907]: 2025-10-01 16:49:30.052 2 INFO nova.virt.libvirt.driver [-] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Instance destroyed successfully.#033[00m
Oct  1 12:49:30 np0005464891 nova_compute[259907]: 2025-10-01 16:49:30.053 2 DEBUG nova.objects.instance [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lazy-loading 'resources' on Instance uuid b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:49:30 np0005464891 nova_compute[259907]: 2025-10-01 16:49:30.173 2 DEBUG nova.virt.libvirt.vif [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T16:48:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-320628910',display_name='tempest-TestStampPattern-server-320628910',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-320628910',id=8,image_ref='e120b782-f4fe-48ea-9d54-439be8e800a6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCck7nxcoGk0qQMqmOkhPfker9ncjX3MedwZy1gvsVFGYBG7D5wvyJC+lFiT/6un7wQpds+bs1FRdVcdDnlHzQimOGzqeJBoWgRzI2+A/i117tgAu+tGkXiUBUgSD0X9yA==',key_name='tempest-TestStampPattern-1388282123',keypairs=<?>,launch_index=0,launched_at=2025-10-01T16:48:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f395084b84f48d182c3be9d7961475e',ramdisk_id='',reservation_id='r-ebcybhva',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83',image_min_disk='1',image_min_ram='0',image_owner_id='1f395084b84f48d182c3be9d7961475e',image_owner_project_name='tempest-TestStampPattern-305826503',image_owner_user_name='tempest-TestStampPattern-305826503-project-member',image_user_id='0a821557545f49ad9c15eee1cf0bd82b',owner_project_name='tempest-TestStampPattern-305826503',owner_user_name='tempest-TestStampPattern-305826503-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T16:48:48Z,user_data=None,user_id='0a821557545f49ad9c15eee1cf0bd82b',uuid=b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='ac
tive') vif={"id": "5ef93ed9-65fa-4d0e-a510-20023ab7144f", "address": "fa:16:3e:73:57:c3", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ef93ed9-65", "ovs_interfaceid": "5ef93ed9-65fa-4d0e-a510-20023ab7144f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 12:49:30 np0005464891 nova_compute[259907]: 2025-10-01 16:49:30.175 2 DEBUG nova.network.os_vif_util [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Converting VIF {"id": "5ef93ed9-65fa-4d0e-a510-20023ab7144f", "address": "fa:16:3e:73:57:c3", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ef93ed9-65", "ovs_interfaceid": "5ef93ed9-65fa-4d0e-a510-20023ab7144f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:49:30 np0005464891 nova_compute[259907]: 2025-10-01 16:49:30.176 2 DEBUG nova.network.os_vif_util [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:73:57:c3,bridge_name='br-int',has_traffic_filtering=True,id=5ef93ed9-65fa-4d0e-a510-20023ab7144f,network=Network(0b8d6144-4eec-41cd-aaa9-d3e718f03c5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ef93ed9-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:49:30 np0005464891 nova_compute[259907]: 2025-10-01 16:49:30.177 2 DEBUG os_vif [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:73:57:c3,bridge_name='br-int',has_traffic_filtering=True,id=5ef93ed9-65fa-4d0e-a510-20023ab7144f,network=Network(0b8d6144-4eec-41cd-aaa9-d3e718f03c5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ef93ed9-65') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 12:49:30 np0005464891 nova_compute[259907]: 2025-10-01 16:49:30.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:30 np0005464891 nova_compute[259907]: 2025-10-01 16:49:30.182 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ef93ed9-65, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:49:30 np0005464891 nova_compute[259907]: 2025-10-01 16:49:30.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:30 np0005464891 nova_compute[259907]: 2025-10-01 16:49:30.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:49:30 np0005464891 nova_compute[259907]: 2025-10-01 16:49:30.192 2 INFO os_vif [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:73:57:c3,bridge_name='br-int',has_traffic_filtering=True,id=5ef93ed9-65fa-4d0e-a510-20023ab7144f,network=Network(0b8d6144-4eec-41cd-aaa9-d3e718f03c5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ef93ed9-65')#033[00m
Oct  1 12:49:30 np0005464891 podman[282491]: 2025-10-01 16:49:30.215087993 +0000 UTC m=+0.114973380 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:49:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:49:30 np0005464891 nova_compute[259907]: 2025-10-01 16:49:30.709 2 INFO nova.virt.libvirt.driver [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Deleting instance files /var/lib/nova/instances/b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183_del#033[00m
Oct  1 12:49:30 np0005464891 nova_compute[259907]: 2025-10-01 16:49:30.710 2 INFO nova.virt.libvirt.driver [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Deletion of /var/lib/nova/instances/b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183_del complete#033[00m
Oct  1 12:49:30 np0005464891 nova_compute[259907]: 2025-10-01 16:49:30.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:30 np0005464891 nova_compute[259907]: 2025-10-01 16:49:30.766 2 INFO nova.compute.manager [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Took 0.95 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 12:49:30 np0005464891 nova_compute[259907]: 2025-10-01 16:49:30.767 2 DEBUG oslo.service.loopingcall [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 12:49:30 np0005464891 nova_compute[259907]: 2025-10-01 16:49:30.767 2 DEBUG nova.compute.manager [-] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 12:49:30 np0005464891 nova_compute[259907]: 2025-10-01 16:49:30.767 2 DEBUG nova.network.neutron [-] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 12:49:30 np0005464891 nova_compute[259907]: 2025-10-01 16:49:30.897 2 DEBUG nova.network.neutron [req-a036d4a7-92ab-4f6b-b51c-14b7f049c1bc req-064f5acd-0704-4e96-8027-20ce7d355c0e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Updated VIF entry in instance network info cache for port 5ef93ed9-65fa-4d0e-a510-20023ab7144f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:49:30 np0005464891 nova_compute[259907]: 2025-10-01 16:49:30.898 2 DEBUG nova.network.neutron [req-a036d4a7-92ab-4f6b-b51c-14b7f049c1bc req-064f5acd-0704-4e96-8027-20ce7d355c0e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Updating instance_info_cache with network_info: [{"id": "5ef93ed9-65fa-4d0e-a510-20023ab7144f", "address": "fa:16:3e:73:57:c3", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ef93ed9-65", "ovs_interfaceid": "5ef93ed9-65fa-4d0e-a510-20023ab7144f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:49:30 np0005464891 nova_compute[259907]: 2025-10-01 16:49:30.922 2 DEBUG oslo_concurrency.lockutils [req-a036d4a7-92ab-4f6b-b51c-14b7f049c1bc req-064f5acd-0704-4e96-8027-20ce7d355c0e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:49:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 321 active+clean; 427 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 807 KiB/s rd, 2.3 MiB/s wr, 113 op/s
Oct  1 12:49:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Oct  1 12:49:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Oct  1 12:49:31 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Oct  1 12:49:31 np0005464891 nova_compute[259907]: 2025-10-01 16:49:31.464 2 DEBUG nova.network.neutron [-] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:49:31 np0005464891 nova_compute[259907]: 2025-10-01 16:49:31.480 2 INFO nova.compute.manager [-] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Took 0.71 seconds to deallocate network for instance.#033[00m
Oct  1 12:49:31 np0005464891 nova_compute[259907]: 2025-10-01 16:49:31.534 2 DEBUG oslo_concurrency.lockutils [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:31 np0005464891 nova_compute[259907]: 2025-10-01 16:49:31.535 2 DEBUG oslo_concurrency.lockutils [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:31 np0005464891 nova_compute[259907]: 2025-10-01 16:49:31.660 2 DEBUG oslo_concurrency.processutils [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:49:31 np0005464891 nova_compute[259907]: 2025-10-01 16:49:31.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:49:31 np0005464891 nova_compute[259907]: 2025-10-01 16:49:31.822 2 DEBUG nova.compute.manager [req-b0ce42e3-021f-4b72-b6a2-6af4a71ddadc req-df764efd-b40f-4ab3-b045-a8f56e7c27cd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Received event network-vif-unplugged-5ef93ed9-65fa-4d0e-a510-20023ab7144f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:49:31 np0005464891 nova_compute[259907]: 2025-10-01 16:49:31.823 2 DEBUG oslo_concurrency.lockutils [req-b0ce42e3-021f-4b72-b6a2-6af4a71ddadc req-df764efd-b40f-4ab3-b045-a8f56e7c27cd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:31 np0005464891 nova_compute[259907]: 2025-10-01 16:49:31.824 2 DEBUG oslo_concurrency.lockutils [req-b0ce42e3-021f-4b72-b6a2-6af4a71ddadc req-df764efd-b40f-4ab3-b045-a8f56e7c27cd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:31 np0005464891 nova_compute[259907]: 2025-10-01 16:49:31.824 2 DEBUG oslo_concurrency.lockutils [req-b0ce42e3-021f-4b72-b6a2-6af4a71ddadc req-df764efd-b40f-4ab3-b045-a8f56e7c27cd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:31 np0005464891 nova_compute[259907]: 2025-10-01 16:49:31.825 2 DEBUG nova.compute.manager [req-b0ce42e3-021f-4b72-b6a2-6af4a71ddadc req-df764efd-b40f-4ab3-b045-a8f56e7c27cd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] No waiting events found dispatching network-vif-unplugged-5ef93ed9-65fa-4d0e-a510-20023ab7144f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:49:31 np0005464891 nova_compute[259907]: 2025-10-01 16:49:31.825 2 WARNING nova.compute.manager [req-b0ce42e3-021f-4b72-b6a2-6af4a71ddadc req-df764efd-b40f-4ab3-b045-a8f56e7c27cd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Received unexpected event network-vif-unplugged-5ef93ed9-65fa-4d0e-a510-20023ab7144f for instance with vm_state deleted and task_state None.#033[00m
Oct  1 12:49:31 np0005464891 nova_compute[259907]: 2025-10-01 16:49:31.825 2 DEBUG nova.compute.manager [req-b0ce42e3-021f-4b72-b6a2-6af4a71ddadc req-df764efd-b40f-4ab3-b045-a8f56e7c27cd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Received event network-vif-plugged-5ef93ed9-65fa-4d0e-a510-20023ab7144f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:49:31 np0005464891 nova_compute[259907]: 2025-10-01 16:49:31.826 2 DEBUG oslo_concurrency.lockutils [req-b0ce42e3-021f-4b72-b6a2-6af4a71ddadc req-df764efd-b40f-4ab3-b045-a8f56e7c27cd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:31 np0005464891 nova_compute[259907]: 2025-10-01 16:49:31.826 2 DEBUG oslo_concurrency.lockutils [req-b0ce42e3-021f-4b72-b6a2-6af4a71ddadc req-df764efd-b40f-4ab3-b045-a8f56e7c27cd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:31 np0005464891 nova_compute[259907]: 2025-10-01 16:49:31.827 2 DEBUG oslo_concurrency.lockutils [req-b0ce42e3-021f-4b72-b6a2-6af4a71ddadc req-df764efd-b40f-4ab3-b045-a8f56e7c27cd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:31 np0005464891 nova_compute[259907]: 2025-10-01 16:49:31.827 2 DEBUG nova.compute.manager [req-b0ce42e3-021f-4b72-b6a2-6af4a71ddadc req-df764efd-b40f-4ab3-b045-a8f56e7c27cd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] No waiting events found dispatching network-vif-plugged-5ef93ed9-65fa-4d0e-a510-20023ab7144f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:49:31 np0005464891 nova_compute[259907]: 2025-10-01 16:49:31.828 2 WARNING nova.compute.manager [req-b0ce42e3-021f-4b72-b6a2-6af4a71ddadc req-df764efd-b40f-4ab3-b045-a8f56e7c27cd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Received unexpected event network-vif-plugged-5ef93ed9-65fa-4d0e-a510-20023ab7144f for instance with vm_state deleted and task_state None.#033[00m
Oct  1 12:49:31 np0005464891 nova_compute[259907]: 2025-10-01 16:49:31.828 2 DEBUG nova.compute.manager [req-b0ce42e3-021f-4b72-b6a2-6af4a71ddadc req-df764efd-b40f-4ab3-b045-a8f56e7c27cd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Received event network-vif-deleted-5ef93ed9-65fa-4d0e-a510-20023ab7144f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:49:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:49:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1418986327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:49:32 np0005464891 nova_compute[259907]: 2025-10-01 16:49:32.147 2 DEBUG oslo_concurrency.processutils [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:49:32 np0005464891 nova_compute[259907]: 2025-10-01 16:49:32.155 2 DEBUG nova.compute.provider_tree [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:49:32 np0005464891 nova_compute[259907]: 2025-10-01 16:49:32.180 2 DEBUG nova.scheduler.client.report [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:49:32 np0005464891 nova_compute[259907]: 2025-10-01 16:49:32.243 2 DEBUG oslo_concurrency.lockutils [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Oct  1 12:49:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Oct  1 12:49:32 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Oct  1 12:49:32 np0005464891 nova_compute[259907]: 2025-10-01 16:49:32.277 2 INFO nova.scheduler.client.report [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Deleted allocations for instance b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183#033[00m
Oct  1 12:49:32 np0005464891 nova_compute[259907]: 2025-10-01 16:49:32.364 2 DEBUG oslo_concurrency.lockutils [None req-ef8a40d6-029d-45a7-a2dc-060985632cc0 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1272: 321 pgs: 321 active+clean; 417 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 428 KiB/s rd, 45 KiB/s wr, 115 op/s
Oct  1 12:49:33 np0005464891 nova_compute[259907]: 2025-10-01 16:49:33.056 2 DEBUG oslo_concurrency.lockutils [None req-37fb466f-7bed-4f65-b798-6e224ded9f2a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquiring lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:33 np0005464891 nova_compute[259907]: 2025-10-01 16:49:33.057 2 DEBUG oslo_concurrency.lockutils [None req-37fb466f-7bed-4f65-b798-6e224ded9f2a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:33 np0005464891 nova_compute[259907]: 2025-10-01 16:49:33.078 2 INFO nova.compute.manager [None req-37fb466f-7bed-4f65-b798-6e224ded9f2a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Detaching volume 66560206-6c70-4a2c-8504-a5aebf3ee561#033[00m
Oct  1 12:49:33 np0005464891 nova_compute[259907]: 2025-10-01 16:49:33.185 2 INFO nova.virt.block_device [None req-37fb466f-7bed-4f65-b798-6e224ded9f2a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Attempting to driver detach volume 66560206-6c70-4a2c-8504-a5aebf3ee561 from mountpoint /dev/vdb#033[00m
Oct  1 12:49:33 np0005464891 nova_compute[259907]: 2025-10-01 16:49:33.203 2 DEBUG nova.virt.libvirt.driver [None req-37fb466f-7bed-4f65-b798-6e224ded9f2a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Attempting to detach device vdb from instance 347eacbc-b9bd-4163-bc2e-a49a19a833c3 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  1 12:49:33 np0005464891 nova_compute[259907]: 2025-10-01 16:49:33.204 2 DEBUG nova.virt.libvirt.guest [None req-37fb466f-7bed-4f65-b798-6e224ded9f2a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:49:33 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:49:33 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-66560206-6c70-4a2c-8504-a5aebf3ee561">
Oct  1 12:49:33 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:49:33 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:49:33 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:49:33 np0005464891 nova_compute[259907]:  <serial>66560206-6c70-4a2c-8504-a5aebf3ee561</serial>
Oct  1 12:49:33 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:49:33 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:49:33 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:49:33 np0005464891 nova_compute[259907]: 2025-10-01 16:49:33.219 2 INFO nova.virt.libvirt.driver [None req-37fb466f-7bed-4f65-b798-6e224ded9f2a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Successfully detached device vdb from instance 347eacbc-b9bd-4163-bc2e-a49a19a833c3 from the persistent domain config.#033[00m
Oct  1 12:49:33 np0005464891 nova_compute[259907]: 2025-10-01 16:49:33.220 2 DEBUG nova.virt.libvirt.driver [None req-37fb466f-7bed-4f65-b798-6e224ded9f2a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 347eacbc-b9bd-4163-bc2e-a49a19a833c3 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  1 12:49:33 np0005464891 nova_compute[259907]: 2025-10-01 16:49:33.221 2 DEBUG nova.virt.libvirt.guest [None req-37fb466f-7bed-4f65-b798-6e224ded9f2a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:49:33 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:49:33 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-66560206-6c70-4a2c-8504-a5aebf3ee561">
Oct  1 12:49:33 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:49:33 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:49:33 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:49:33 np0005464891 nova_compute[259907]:  <serial>66560206-6c70-4a2c-8504-a5aebf3ee561</serial>
Oct  1 12:49:33 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:49:33 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:49:33 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:49:33 np0005464891 podman[282734]: 2025-10-01 16:49:33.25662083 +0000 UTC m=+0.130828588 container exec 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:49:33 np0005464891 nova_compute[259907]: 2025-10-01 16:49:33.305 2 DEBUG nova.virt.libvirt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Received event <DeviceRemovedEvent: 1759337373.3048968, 347eacbc-b9bd-4163-bc2e-a49a19a833c3 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  1 12:49:33 np0005464891 nova_compute[259907]: 2025-10-01 16:49:33.308 2 DEBUG nova.virt.libvirt.driver [None req-37fb466f-7bed-4f65-b798-6e224ded9f2a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 347eacbc-b9bd-4163-bc2e-a49a19a833c3 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  1 12:49:33 np0005464891 nova_compute[259907]: 2025-10-01 16:49:33.311 2 INFO nova.virt.libvirt.driver [None req-37fb466f-7bed-4f65-b798-6e224ded9f2a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Successfully detached device vdb from instance 347eacbc-b9bd-4163-bc2e-a49a19a833c3 from the live domain config.#033[00m
Oct  1 12:49:33 np0005464891 podman[282734]: 2025-10-01 16:49:33.399753205 +0000 UTC m=+0.273960903 container exec_died 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:49:33 np0005464891 nova_compute[259907]: 2025-10-01 16:49:33.463 2 DEBUG nova.objects.instance [None req-37fb466f-7bed-4f65-b798-6e224ded9f2a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lazy-loading 'flavor' on Instance uuid 347eacbc-b9bd-4163-bc2e-a49a19a833c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:49:33 np0005464891 nova_compute[259907]: 2025-10-01 16:49:33.513 2 DEBUG oslo_concurrency.lockutils [None req-37fb466f-7bed-4f65-b798-6e224ded9f2a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:33 np0005464891 podman[282772]: 2025-10-01 16:49:33.595265974 +0000 UTC m=+0.090147485 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  1 12:49:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:49:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/653081179' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:49:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:49:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/653081179' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:49:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:49:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:49:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:49:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Oct  1 12:49:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:49:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Oct  1 12:49:34 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:49:34 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Oct  1 12:49:34 np0005464891 nova_compute[259907]: 2025-10-01 16:49:34.708 2 DEBUG oslo_concurrency.lockutils [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquiring lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:34 np0005464891 nova_compute[259907]: 2025-10-01 16:49:34.709 2 DEBUG oslo_concurrency.lockutils [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:34 np0005464891 nova_compute[259907]: 2025-10-01 16:49:34.710 2 DEBUG oslo_concurrency.lockutils [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquiring lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:34 np0005464891 nova_compute[259907]: 2025-10-01 16:49:34.710 2 DEBUG oslo_concurrency.lockutils [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:34 np0005464891 nova_compute[259907]: 2025-10-01 16:49:34.710 2 DEBUG oslo_concurrency.lockutils [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:34 np0005464891 nova_compute[259907]: 2025-10-01 16:49:34.711 2 INFO nova.compute.manager [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Terminating instance#033[00m
Oct  1 12:49:34 np0005464891 nova_compute[259907]: 2025-10-01 16:49:34.712 2 DEBUG nova.compute.manager [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 12:49:34 np0005464891 kernel: tapc8ba4648-dd (unregistering): left promiscuous mode
Oct  1 12:49:34 np0005464891 NetworkManager[44940]: <info>  [1759337374.8895] device (tapc8ba4648-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 12:49:34 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:34Z|00094|binding|INFO|Releasing lport c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd from this chassis (sb_readonly=0)
Oct  1 12:49:34 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:34Z|00095|binding|INFO|Setting lport c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd down in Southbound
Oct  1 12:49:34 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:34Z|00096|binding|INFO|Removing iface tapc8ba4648-dd ovn-installed in OVS
Oct  1 12:49:34 np0005464891 nova_compute[259907]: 2025-10-01 16:49:34.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:34 np0005464891 nova_compute[259907]: 2025-10-01 16:49:34.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1274: 321 pgs: 321 active+clean; 413 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 404 KiB/s rd, 41 KiB/s wr, 160 op/s
Oct  1 12:49:34 np0005464891 nova_compute[259907]: 2025-10-01 16:49:34.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:34 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:34.963 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:7f:de 10.100.0.13'], port_security=['fa:16:3e:03:7f:de 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '347eacbc-b9bd-4163-bc2e-a49a19a833c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9217a609-3f35-4647-87cd-e08d95dd1da1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19100b7dd5c9420db1d7f374559a9498', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2031271b-1002-4f48-9596-46016bfe5629', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.211'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3460a047-44ee-4ad2-938a-c15de55876d0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:49:34 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:34.964 162546 INFO neutron.agent.ovn.metadata.agent [-] Port c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd in datapath 9217a609-3f35-4647-87cd-e08d95dd1da1 unbound from our chassis#033[00m
Oct  1 12:49:34 np0005464891 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Oct  1 12:49:34 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:34.967 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9217a609-3f35-4647-87cd-e08d95dd1da1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 12:49:34 np0005464891 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 14.390s CPU time.
Oct  1 12:49:34 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:34.968 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[20c935a8-f43e-47c2-9c0b-8d74521d7c06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:34 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:34.968 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1 namespace which is not needed anymore#033[00m
Oct  1 12:49:34 np0005464891 systemd-machined[214891]: Machine qemu-9-instance-00000009 terminated.
Oct  1 12:49:35 np0005464891 neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1[281765]: [NOTICE]   (281769) : haproxy version is 2.8.14-c23fe91
Oct  1 12:49:35 np0005464891 neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1[281765]: [NOTICE]   (281769) : path to executable is /usr/sbin/haproxy
Oct  1 12:49:35 np0005464891 neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1[281765]: [WARNING]  (281769) : Exiting Master process...
Oct  1 12:49:35 np0005464891 neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1[281765]: [WARNING]  (281769) : Exiting Master process...
Oct  1 12:49:35 np0005464891 neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1[281765]: [ALERT]    (281769) : Current worker (281771) exited with code 143 (Terminated)
Oct  1 12:49:35 np0005464891 neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1[281765]: [WARNING]  (281769) : All workers exited. Exiting... (0)
Oct  1 12:49:35 np0005464891 systemd[1]: libpod-cc73d1c707790453448e56de06040ce78bc8a00d1b11caefa1530bfbdc67f96b.scope: Deactivated successfully.
Oct  1 12:49:35 np0005464891 podman[283054]: 2025-10-01 16:49:35.120538718 +0000 UTC m=+0.053150556 container died cc73d1c707790453448e56de06040ce78bc8a00d1b11caefa1530bfbdc67f96b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  1 12:49:35 np0005464891 nova_compute[259907]: 2025-10-01 16:49:35.149 2 INFO nova.virt.libvirt.driver [-] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Instance destroyed successfully.#033[00m
Oct  1 12:49:35 np0005464891 nova_compute[259907]: 2025-10-01 16:49:35.150 2 DEBUG nova.objects.instance [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lazy-loading 'resources' on Instance uuid 347eacbc-b9bd-4163-bc2e-a49a19a833c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:49:35 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cc73d1c707790453448e56de06040ce78bc8a00d1b11caefa1530bfbdc67f96b-userdata-shm.mount: Deactivated successfully.
Oct  1 12:49:35 np0005464891 nova_compute[259907]: 2025-10-01 16:49:35.166 2 DEBUG nova.virt.libvirt.vif [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T16:48:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1331818960',display_name='tempest-VolumesBackupsTest-instance-1331818960',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1331818960',id=9,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAPQQVojhJmPeHreHOpE1NwY/4UtWWRv4SAlPgzy1F3CWqazhpKL2xf3seWaZRvPBIAZ0VzPUhxEc+sHYdkt+pa1HEVHjjWGMkeJSuJsMOCEUFX/xndVtuOCLh6+Rpgh6w==',key_name='tempest-keypair-405709585',keypairs=<?>,launch_index=0,launched_at=2025-10-01T16:48:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='19100b7dd5c9420db1d7f374559a9498',ramdisk_id='',reservation_id='r-sijc9bji',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-1599024574',owner_user_name='tempest-VolumesBackupsTest-1599024574-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T16:48:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='825e1f460cae49ad9834c4d7d67e24fe',uuid=347eacbc-b9bd-4163-bc2e-a49a19a833c3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd", "address": "fa:16:3e:03:7f:de", "network": {"id": "9217a609-3f35-4647-87cd-e08d95dd1da1", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-994008652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19100b7dd5c9420db1d7f374559a9498", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8ba4648-dd", "ovs_interfaceid": "c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 12:49:35 np0005464891 nova_compute[259907]: 2025-10-01 16:49:35.168 2 DEBUG nova.network.os_vif_util [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Converting VIF {"id": "c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd", "address": "fa:16:3e:03:7f:de", "network": {"id": "9217a609-3f35-4647-87cd-e08d95dd1da1", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-994008652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19100b7dd5c9420db1d7f374559a9498", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8ba4648-dd", "ovs_interfaceid": "c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:49:35 np0005464891 nova_compute[259907]: 2025-10-01 16:49:35.169 2 DEBUG nova.network.os_vif_util [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:03:7f:de,bridge_name='br-int',has_traffic_filtering=True,id=c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd,network=Network(9217a609-3f35-4647-87cd-e08d95dd1da1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8ba4648-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:49:35 np0005464891 nova_compute[259907]: 2025-10-01 16:49:35.169 2 DEBUG os_vif [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:7f:de,bridge_name='br-int',has_traffic_filtering=True,id=c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd,network=Network(9217a609-3f35-4647-87cd-e08d95dd1da1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8ba4648-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 12:49:35 np0005464891 systemd[1]: var-lib-containers-storage-overlay-50b821fa82a06ad31d65d2884076ea5eb6bff36936adaf2daf6cf87f087b3e30-merged.mount: Deactivated successfully.
Oct  1 12:49:35 np0005464891 nova_compute[259907]: 2025-10-01 16:49:35.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:35 np0005464891 nova_compute[259907]: 2025-10-01 16:49:35.175 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8ba4648-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:49:35 np0005464891 nova_compute[259907]: 2025-10-01 16:49:35.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:35 np0005464891 nova_compute[259907]: 2025-10-01 16:49:35.179 2 INFO os_vif [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:7f:de,bridge_name='br-int',has_traffic_filtering=True,id=c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd,network=Network(9217a609-3f35-4647-87cd-e08d95dd1da1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8ba4648-dd')#033[00m
Oct  1 12:49:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:49:35 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:49:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:49:35 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:49:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:49:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Oct  1 12:49:35 np0005464891 podman[283054]: 2025-10-01 16:49:35.389347887 +0000 UTC m=+0.321959695 container cleanup cc73d1c707790453448e56de06040ce78bc8a00d1b11caefa1530bfbdc67f96b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 12:49:35 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:49:35 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 404e654a-96d4-4838-a8cd-97699816a455 does not exist
Oct  1 12:49:35 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 8f43606b-26e9-4f5f-8010-898f8c5b5d7b does not exist
Oct  1 12:49:35 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev dec350d6-f9ba-428f-9f4d-eb7a1e48b9a1 does not exist
Oct  1 12:49:35 np0005464891 systemd[1]: libpod-conmon-cc73d1c707790453448e56de06040ce78bc8a00d1b11caefa1530bfbdc67f96b.scope: Deactivated successfully.
Oct  1 12:49:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:49:35 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:49:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:49:35 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:49:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:49:35 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:49:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Oct  1 12:49:35 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:49:35 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:49:35 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Oct  1 12:49:35 np0005464891 podman[283126]: 2025-10-01 16:49:35.635431149 +0000 UTC m=+0.213861365 container remove cc73d1c707790453448e56de06040ce78bc8a00d1b11caefa1530bfbdc67f96b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  1 12:49:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:35.643 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[4e5c6430-9719-41a5-848f-ace8b8450dd9]: (4, ('Wed Oct  1 04:49:35 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1 (cc73d1c707790453448e56de06040ce78bc8a00d1b11caefa1530bfbdc67f96b)\ncc73d1c707790453448e56de06040ce78bc8a00d1b11caefa1530bfbdc67f96b\nWed Oct  1 04:49:35 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1 (cc73d1c707790453448e56de06040ce78bc8a00d1b11caefa1530bfbdc67f96b)\ncc73d1c707790453448e56de06040ce78bc8a00d1b11caefa1530bfbdc67f96b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:35.645 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[e039c41c-75a6-4672-8e87-d712c6dc6564]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:35.646 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9217a609-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:49:35 np0005464891 kernel: tap9217a609-30: left promiscuous mode
Oct  1 12:49:35 np0005464891 nova_compute[259907]: 2025-10-01 16:49:35.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:35 np0005464891 nova_compute[259907]: 2025-10-01 16:49:35.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:35.690 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[17f69688-da63-4c90-8dc6-01568fa18442]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:35 np0005464891 nova_compute[259907]: 2025-10-01 16:49:35.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:49:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:35.717 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[0e56bc69-be17-461a-9ba7-2859c1e5410d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:35.718 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[ee5e021b-c668-45ac-b7a4-3b0debbca3b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:35 np0005464891 nova_compute[259907]: 2025-10-01 16:49:35.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:35.742 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[18b5da53-7283-4314-9768-6385647bbedf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427825, 'reachable_time': 32034, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283233, 'error': None, 'target': 'ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:35.746 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 12:49:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:35.747 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[84aa472c-0868-4bb3-88f9-4e021dbb887c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:35 np0005464891 systemd[1]: run-netns-ovnmeta\x2d9217a609\x2d3f35\x2d4647\x2d87cd\x2de08d95dd1da1.mount: Deactivated successfully.
Oct  1 12:49:36 np0005464891 nova_compute[259907]: 2025-10-01 16:49:36.023 2 DEBUG nova.compute.manager [req-203af4bb-234f-47c6-90de-827ffdfe1f9c req-7b4de984-8fac-47b6-bb66-5669253193fc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Received event network-vif-unplugged-c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:49:36 np0005464891 nova_compute[259907]: 2025-10-01 16:49:36.025 2 DEBUG oslo_concurrency.lockutils [req-203af4bb-234f-47c6-90de-827ffdfe1f9c req-7b4de984-8fac-47b6-bb66-5669253193fc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:36 np0005464891 nova_compute[259907]: 2025-10-01 16:49:36.025 2 DEBUG oslo_concurrency.lockutils [req-203af4bb-234f-47c6-90de-827ffdfe1f9c req-7b4de984-8fac-47b6-bb66-5669253193fc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:36 np0005464891 nova_compute[259907]: 2025-10-01 16:49:36.026 2 DEBUG oslo_concurrency.lockutils [req-203af4bb-234f-47c6-90de-827ffdfe1f9c req-7b4de984-8fac-47b6-bb66-5669253193fc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:36 np0005464891 nova_compute[259907]: 2025-10-01 16:49:36.026 2 DEBUG nova.compute.manager [req-203af4bb-234f-47c6-90de-827ffdfe1f9c req-7b4de984-8fac-47b6-bb66-5669253193fc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] No waiting events found dispatching network-vif-unplugged-c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:49:36 np0005464891 nova_compute[259907]: 2025-10-01 16:49:36.027 2 DEBUG nova.compute.manager [req-203af4bb-234f-47c6-90de-827ffdfe1f9c req-7b4de984-8fac-47b6-bb66-5669253193fc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Received event network-vif-unplugged-c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 12:49:36 np0005464891 podman[283285]: 2025-10-01 16:49:36.260907751 +0000 UTC m=+0.070021511 container create bcea62f0176695568362b554edc83996d4488c07022e075efe2c8fed50665a63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_cray, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:49:36 np0005464891 systemd[1]: Started libpod-conmon-bcea62f0176695568362b554edc83996d4488c07022e075efe2c8fed50665a63.scope.
Oct  1 12:49:36 np0005464891 podman[283285]: 2025-10-01 16:49:36.235820779 +0000 UTC m=+0.044934579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:49:36 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:49:36 np0005464891 podman[283285]: 2025-10-01 16:49:36.370844841 +0000 UTC m=+0.179958611 container init bcea62f0176695568362b554edc83996d4488c07022e075efe2c8fed50665a63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Oct  1 12:49:36 np0005464891 podman[283285]: 2025-10-01 16:49:36.383703195 +0000 UTC m=+0.192816975 container start bcea62f0176695568362b554edc83996d4488c07022e075efe2c8fed50665a63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_cray, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 12:49:36 np0005464891 podman[283285]: 2025-10-01 16:49:36.388056435 +0000 UTC m=+0.197170205 container attach bcea62f0176695568362b554edc83996d4488c07022e075efe2c8fed50665a63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_cray, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:49:36 np0005464891 romantic_cray[283301]: 167 167
Oct  1 12:49:36 np0005464891 systemd[1]: libpod-bcea62f0176695568362b554edc83996d4488c07022e075efe2c8fed50665a63.scope: Deactivated successfully.
Oct  1 12:49:36 np0005464891 conmon[283301]: conmon bcea62f0176695568362 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bcea62f0176695568362b554edc83996d4488c07022e075efe2c8fed50665a63.scope/container/memory.events
Oct  1 12:49:36 np0005464891 podman[283285]: 2025-10-01 16:49:36.394742639 +0000 UTC m=+0.203856399 container died bcea62f0176695568362b554edc83996d4488c07022e075efe2c8fed50665a63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_cray, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:49:36 np0005464891 nova_compute[259907]: 2025-10-01 16:49:36.402 2 INFO nova.virt.libvirt.driver [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Deleting instance files /var/lib/nova/instances/347eacbc-b9bd-4163-bc2e-a49a19a833c3_del#033[00m
Oct  1 12:49:36 np0005464891 nova_compute[259907]: 2025-10-01 16:49:36.404 2 INFO nova.virt.libvirt.driver [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Deletion of /var/lib/nova/instances/347eacbc-b9bd-4163-bc2e-a49a19a833c3_del complete#033[00m
Oct  1 12:49:36 np0005464891 systemd[1]: var-lib-containers-storage-overlay-8b43b8662178db877af8d29046248c1addb28e1f4bb89091ff194896ef7c0b66-merged.mount: Deactivated successfully.
Oct  1 12:49:36 np0005464891 podman[283285]: 2025-10-01 16:49:36.448047468 +0000 UTC m=+0.257161208 container remove bcea62f0176695568362b554edc83996d4488c07022e075efe2c8fed50665a63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_cray, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct  1 12:49:36 np0005464891 nova_compute[259907]: 2025-10-01 16:49:36.465 2 INFO nova.compute.manager [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Took 1.75 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 12:49:36 np0005464891 systemd[1]: libpod-conmon-bcea62f0176695568362b554edc83996d4488c07022e075efe2c8fed50665a63.scope: Deactivated successfully.
Oct  1 12:49:36 np0005464891 nova_compute[259907]: 2025-10-01 16:49:36.465 2 DEBUG oslo.service.loopingcall [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 12:49:36 np0005464891 nova_compute[259907]: 2025-10-01 16:49:36.465 2 DEBUG nova.compute.manager [-] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 12:49:36 np0005464891 nova_compute[259907]: 2025-10-01 16:49:36.466 2 DEBUG nova.network.neutron [-] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 12:49:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Oct  1 12:49:36 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:49:36 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:49:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Oct  1 12:49:36 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Oct  1 12:49:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:49:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3118587740' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:49:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:49:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3118587740' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:49:36 np0005464891 podman[283325]: 2025-10-01 16:49:36.72053952 +0000 UTC m=+0.076558052 container create c2d02f49f1766f58cce567b57c1641ebfb8f2cf1dc7bdd632024f0e9da482c7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:49:36 np0005464891 systemd[1]: Started libpod-conmon-c2d02f49f1766f58cce567b57c1641ebfb8f2cf1dc7bdd632024f0e9da482c7b.scope.
Oct  1 12:49:36 np0005464891 podman[283325]: 2025-10-01 16:49:36.684394174 +0000 UTC m=+0.040412766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:49:36 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:49:36 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/990e91a80f984e65b55fec6b0a59e105758bb55e04d401f3d1fee2496c89722c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:49:36 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/990e91a80f984e65b55fec6b0a59e105758bb55e04d401f3d1fee2496c89722c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:49:36 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/990e91a80f984e65b55fec6b0a59e105758bb55e04d401f3d1fee2496c89722c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:49:36 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/990e91a80f984e65b55fec6b0a59e105758bb55e04d401f3d1fee2496c89722c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:49:36 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/990e91a80f984e65b55fec6b0a59e105758bb55e04d401f3d1fee2496c89722c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:49:36 np0005464891 podman[283325]: 2025-10-01 16:49:36.828142775 +0000 UTC m=+0.184161317 container init c2d02f49f1766f58cce567b57c1641ebfb8f2cf1dc7bdd632024f0e9da482c7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shamir, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:49:36 np0005464891 podman[283325]: 2025-10-01 16:49:36.843340235 +0000 UTC m=+0.199358767 container start c2d02f49f1766f58cce567b57c1641ebfb8f2cf1dc7bdd632024f0e9da482c7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shamir, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:49:36 np0005464891 podman[283325]: 2025-10-01 16:49:36.847079528 +0000 UTC m=+0.203098090 container attach c2d02f49f1766f58cce567b57c1641ebfb8f2cf1dc7bdd632024f0e9da482c7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shamir, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:49:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1277: 321 pgs: 321 active+clean; 409 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 173 KiB/s rd, 18 KiB/s wr, 233 op/s
Oct  1 12:49:37 np0005464891 nova_compute[259907]: 2025-10-01 16:49:37.463 2 DEBUG nova.network.neutron [-] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:49:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Oct  1 12:49:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Oct  1 12:49:37 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Oct  1 12:49:37 np0005464891 nova_compute[259907]: 2025-10-01 16:49:37.500 2 INFO nova.compute.manager [-] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Took 1.03 seconds to deallocate network for instance.#033[00m
Oct  1 12:49:37 np0005464891 nova_compute[259907]: 2025-10-01 16:49:37.567 2 DEBUG oslo_concurrency.lockutils [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:37 np0005464891 nova_compute[259907]: 2025-10-01 16:49:37.568 2 DEBUG oslo_concurrency.lockutils [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:37 np0005464891 nova_compute[259907]: 2025-10-01 16:49:37.676 2 DEBUG oslo_concurrency.processutils [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:49:37 np0005464891 sweet_shamir[283341]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:49:37 np0005464891 sweet_shamir[283341]: --> relative data size: 1.0
Oct  1 12:49:37 np0005464891 sweet_shamir[283341]: --> All data devices are unavailable
Oct  1 12:49:37 np0005464891 podman[283388]: 2025-10-01 16:49:37.951729916 +0000 UTC m=+0.064301044 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 12:49:37 np0005464891 systemd[1]: libpod-c2d02f49f1766f58cce567b57c1641ebfb8f2cf1dc7bdd632024f0e9da482c7b.scope: Deactivated successfully.
Oct  1 12:49:37 np0005464891 systemd[1]: libpod-c2d02f49f1766f58cce567b57c1641ebfb8f2cf1dc7bdd632024f0e9da482c7b.scope: Consumed 1.028s CPU time.
Oct  1 12:49:37 np0005464891 podman[283325]: 2025-10-01 16:49:37.974952946 +0000 UTC m=+1.330971498 container died c2d02f49f1766f58cce567b57c1641ebfb8f2cf1dc7bdd632024f0e9da482c7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shamir, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 12:49:38 np0005464891 systemd[1]: var-lib-containers-storage-overlay-990e91a80f984e65b55fec6b0a59e105758bb55e04d401f3d1fee2496c89722c-merged.mount: Deactivated successfully.
Oct  1 12:49:38 np0005464891 podman[283325]: 2025-10-01 16:49:38.063274531 +0000 UTC m=+1.419293073 container remove c2d02f49f1766f58cce567b57c1641ebfb8f2cf1dc7bdd632024f0e9da482c7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shamir, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:49:38 np0005464891 systemd[1]: libpod-conmon-c2d02f49f1766f58cce567b57c1641ebfb8f2cf1dc7bdd632024f0e9da482c7b.scope: Deactivated successfully.
Oct  1 12:49:38 np0005464891 nova_compute[259907]: 2025-10-01 16:49:38.100 2 DEBUG nova.compute.manager [req-34d544bb-e9ae-4466-a768-05307eb6f9e3 req-91a72df7-3b48-4178-9ae7-5ba1a9a573d5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Received event network-vif-plugged-c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:49:38 np0005464891 nova_compute[259907]: 2025-10-01 16:49:38.100 2 DEBUG oslo_concurrency.lockutils [req-34d544bb-e9ae-4466-a768-05307eb6f9e3 req-91a72df7-3b48-4178-9ae7-5ba1a9a573d5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:38 np0005464891 nova_compute[259907]: 2025-10-01 16:49:38.101 2 DEBUG oslo_concurrency.lockutils [req-34d544bb-e9ae-4466-a768-05307eb6f9e3 req-91a72df7-3b48-4178-9ae7-5ba1a9a573d5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:38 np0005464891 nova_compute[259907]: 2025-10-01 16:49:38.101 2 DEBUG oslo_concurrency.lockutils [req-34d544bb-e9ae-4466-a768-05307eb6f9e3 req-91a72df7-3b48-4178-9ae7-5ba1a9a573d5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:38 np0005464891 nova_compute[259907]: 2025-10-01 16:49:38.101 2 DEBUG nova.compute.manager [req-34d544bb-e9ae-4466-a768-05307eb6f9e3 req-91a72df7-3b48-4178-9ae7-5ba1a9a573d5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] No waiting events found dispatching network-vif-plugged-c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:49:38 np0005464891 nova_compute[259907]: 2025-10-01 16:49:38.102 2 WARNING nova.compute.manager [req-34d544bb-e9ae-4466-a768-05307eb6f9e3 req-91a72df7-3b48-4178-9ae7-5ba1a9a573d5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Received unexpected event network-vif-plugged-c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd for instance with vm_state deleted and task_state None.#033[00m
Oct  1 12:49:38 np0005464891 nova_compute[259907]: 2025-10-01 16:49:38.102 2 DEBUG nova.compute.manager [req-34d544bb-e9ae-4466-a768-05307eb6f9e3 req-91a72df7-3b48-4178-9ae7-5ba1a9a573d5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Received event network-vif-deleted-c8ba4648-dd71-47cc-8b51-dd2ffc0e72cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:49:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:49:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4022861371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:49:38 np0005464891 nova_compute[259907]: 2025-10-01 16:49:38.129 2 DEBUG oslo_concurrency.processutils [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:49:38 np0005464891 nova_compute[259907]: 2025-10-01 16:49:38.137 2 DEBUG nova.compute.provider_tree [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:49:38 np0005464891 nova_compute[259907]: 2025-10-01 16:49:38.151 2 DEBUG nova.scheduler.client.report [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:49:38 np0005464891 nova_compute[259907]: 2025-10-01 16:49:38.184 2 DEBUG oslo_concurrency.lockutils [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:38 np0005464891 nova_compute[259907]: 2025-10-01 16:49:38.215 2 INFO nova.scheduler.client.report [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Deleted allocations for instance 347eacbc-b9bd-4163-bc2e-a49a19a833c3#033[00m
Oct  1 12:49:38 np0005464891 nova_compute[259907]: 2025-10-01 16:49:38.291 2 DEBUG oslo_concurrency.lockutils [None req-d8802aad-82ad-4658-be78-546a3c50b46a 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "347eacbc-b9bd-4163-bc2e-a49a19a833c3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:38 np0005464891 podman[283565]: 2025-10-01 16:49:38.796778339 +0000 UTC m=+0.055473900 container create c7b7decec42e8bab363f8b348a5589fc2a8502700e0de9e8999e0b31d11ea0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 12:49:38 np0005464891 systemd[1]: Started libpod-conmon-c7b7decec42e8bab363f8b348a5589fc2a8502700e0de9e8999e0b31d11ea0ab.scope.
Oct  1 12:49:38 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:49:38 np0005464891 podman[283565]: 2025-10-01 16:49:38.775559074 +0000 UTC m=+0.034254625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:49:38 np0005464891 nova_compute[259907]: 2025-10-01 16:49:38.888 2 DEBUG oslo_concurrency.lockutils [None req-34311c7b-196a-4c42-a5bc-669481cdd7df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquiring lock "01833916-f84a-425e-b28f-d214922d3126" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:38 np0005464891 nova_compute[259907]: 2025-10-01 16:49:38.889 2 DEBUG oslo_concurrency.lockutils [None req-34311c7b-196a-4c42-a5bc-669481cdd7df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "01833916-f84a-425e-b28f-d214922d3126" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:38 np0005464891 podman[283565]: 2025-10-01 16:49:38.889824354 +0000 UTC m=+0.148519925 container init c7b7decec42e8bab363f8b348a5589fc2a8502700e0de9e8999e0b31d11ea0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_merkle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct  1 12:49:38 np0005464891 podman[283565]: 2025-10-01 16:49:38.898050471 +0000 UTC m=+0.156746002 container start c7b7decec42e8bab363f8b348a5589fc2a8502700e0de9e8999e0b31d11ea0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_merkle, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:49:38 np0005464891 romantic_merkle[283581]: 167 167
Oct  1 12:49:38 np0005464891 podman[283565]: 2025-10-01 16:49:38.906172574 +0000 UTC m=+0.164868125 container attach c7b7decec42e8bab363f8b348a5589fc2a8502700e0de9e8999e0b31d11ea0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 12:49:38 np0005464891 systemd[1]: libpod-c7b7decec42e8bab363f8b348a5589fc2a8502700e0de9e8999e0b31d11ea0ab.scope: Deactivated successfully.
Oct  1 12:49:38 np0005464891 podman[283565]: 2025-10-01 16:49:38.907016527 +0000 UTC m=+0.165712058 container died c7b7decec42e8bab363f8b348a5589fc2a8502700e0de9e8999e0b31d11ea0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 12:49:38 np0005464891 nova_compute[259907]: 2025-10-01 16:49:38.908 2 INFO nova.compute.manager [None req-34311c7b-196a-4c42-a5bc-669481cdd7df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Detaching volume 42fb3cce-4d4c-44d5-b9f6-4892ebb7b5a4#033[00m
Oct  1 12:49:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1279: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 357 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 214 KiB/s rd, 19 KiB/s wr, 293 op/s
Oct  1 12:49:38 np0005464891 systemd[1]: var-lib-containers-storage-overlay-36c4fa8e665d8e17c248065282b3c059041b0159afba97ee77dbe469afa4e9ec-merged.mount: Deactivated successfully.
Oct  1 12:49:38 np0005464891 podman[283565]: 2025-10-01 16:49:38.956280116 +0000 UTC m=+0.214975647 container remove c7b7decec42e8bab363f8b348a5589fc2a8502700e0de9e8999e0b31d11ea0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_merkle, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 12:49:38 np0005464891 systemd[1]: libpod-conmon-c7b7decec42e8bab363f8b348a5589fc2a8502700e0de9e8999e0b31d11ea0ab.scope: Deactivated successfully.
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.045 2 DEBUG oslo_concurrency.lockutils [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquiring lock "01833916-f84a-425e-b28f-d214922d3126" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.055 2 INFO nova.virt.block_device [None req-34311c7b-196a-4c42-a5bc-669481cdd7df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Attempting to driver detach volume 42fb3cce-4d4c-44d5-b9f6-4892ebb7b5a4 from mountpoint /dev/vdb#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.070 2 DEBUG nova.virt.libvirt.driver [None req-34311c7b-196a-4c42-a5bc-669481cdd7df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Attempting to detach device vdb from instance 01833916-f84a-425e-b28f-d214922d3126 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.070 2 DEBUG nova.virt.libvirt.guest [None req-34311c7b-196a-4c42-a5bc-669481cdd7df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:49:39 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:49:39 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-42fb3cce-4d4c-44d5-b9f6-4892ebb7b5a4">
Oct  1 12:49:39 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:49:39 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:49:39 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:49:39 np0005464891 nova_compute[259907]:  <serial>42fb3cce-4d4c-44d5-b9f6-4892ebb7b5a4</serial>
Oct  1 12:49:39 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:49:39 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:49:39 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.083 2 INFO nova.virt.libvirt.driver [None req-34311c7b-196a-4c42-a5bc-669481cdd7df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Successfully detached device vdb from instance 01833916-f84a-425e-b28f-d214922d3126 from the persistent domain config.#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.083 2 DEBUG nova.virt.libvirt.driver [None req-34311c7b-196a-4c42-a5bc-669481cdd7df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 01833916-f84a-425e-b28f-d214922d3126 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.084 2 DEBUG nova.virt.libvirt.guest [None req-34311c7b-196a-4c42-a5bc-669481cdd7df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:49:39 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:49:39 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-42fb3cce-4d4c-44d5-b9f6-4892ebb7b5a4">
Oct  1 12:49:39 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:49:39 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:49:39 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:49:39 np0005464891 nova_compute[259907]:  <serial>42fb3cce-4d4c-44d5-b9f6-4892ebb7b5a4</serial>
Oct  1 12:49:39 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:49:39 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:49:39 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.175 2 DEBUG oslo_concurrency.lockutils [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquiring lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.176 2 DEBUG oslo_concurrency.lockutils [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.176 2 DEBUG oslo_concurrency.lockutils [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquiring lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.176 2 DEBUG oslo_concurrency.lockutils [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.177 2 DEBUG oslo_concurrency.lockutils [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.178 2 INFO nova.compute.manager [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Terminating instance#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.179 2 DEBUG nova.compute.manager [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.200 2 DEBUG nova.virt.libvirt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Received event <DeviceRemovedEvent: 1759337379.1993718, 01833916-f84a-425e-b28f-d214922d3126 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.205 2 DEBUG nova.virt.libvirt.driver [None req-34311c7b-196a-4c42-a5bc-669481cdd7df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 01833916-f84a-425e-b28f-d214922d3126 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.207 2 INFO nova.virt.libvirt.driver [None req-34311c7b-196a-4c42-a5bc-669481cdd7df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Successfully detached device vdb from instance 01833916-f84a-425e-b28f-d214922d3126 from the live domain config.#033[00m
Oct  1 12:49:39 np0005464891 podman[283603]: 2025-10-01 16:49:39.213541597 +0000 UTC m=+0.079453372 container create e84905725fed7d9851d7361c760d68530964be0f11cdce74a514bd56512495bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 12:49:39 np0005464891 kernel: tap5d498a06-e5 (unregistering): left promiscuous mode
Oct  1 12:49:39 np0005464891 NetworkManager[44940]: <info>  [1759337379.2602] device (tap5d498a06-e5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 12:49:39 np0005464891 podman[283603]: 2025-10-01 16:49:39.178530152 +0000 UTC m=+0.044442037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:49:39 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:39Z|00097|binding|INFO|Releasing lport 5d498a06-e5b8-4d33-87a1-cfc873bebe29 from this chassis (sb_readonly=0)
Oct  1 12:49:39 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:39Z|00098|binding|INFO|Setting lport 5d498a06-e5b8-4d33-87a1-cfc873bebe29 down in Southbound
Oct  1 12:49:39 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:39Z|00099|binding|INFO|Removing iface tap5d498a06-e5 ovn-installed in OVS
Oct  1 12:49:39 np0005464891 systemd[1]: Started libpod-conmon-e84905725fed7d9851d7361c760d68530964be0f11cdce74a514bd56512495bd.scope.
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:39.281 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:ca:d4 10.100.0.6'], port_security=['fa:16:3e:21:ca:d4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f395084b84f48d182c3be9d7961475e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a473cde3-a378-4504-81c4-9d8fada1bc14', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a03153c4-51cb-49a4-a16a-ed6a97c8c003, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=5d498a06-e5b8-4d33-87a1-cfc873bebe29) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:49:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:39.283 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 5d498a06-e5b8-4d33-87a1-cfc873bebe29 in datapath 0b8d6144-4eec-41cd-aaa9-d3e718f03c5c unbound from our chassis#033[00m
Oct  1 12:49:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:39.285 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0b8d6144-4eec-41cd-aaa9-d3e718f03c5c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 12:49:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:39.286 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[43edb284-0902-4151-9a99-264aeb99d5fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:39.287 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c namespace which is not needed anymore#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:39 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:49:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/970ee71861a9a249b8b9eea8af04404a0101c35566a9394bd963a83f3da731c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:49:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/970ee71861a9a249b8b9eea8af04404a0101c35566a9394bd963a83f3da731c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:49:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/970ee71861a9a249b8b9eea8af04404a0101c35566a9394bd963a83f3da731c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:49:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/970ee71861a9a249b8b9eea8af04404a0101c35566a9394bd963a83f3da731c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:49:39 np0005464891 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct  1 12:49:39 np0005464891 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 18.196s CPU time.
Oct  1 12:49:39 np0005464891 systemd-machined[214891]: Machine qemu-6-instance-00000006 terminated.
Oct  1 12:49:39 np0005464891 podman[283603]: 2025-10-01 16:49:39.348913098 +0000 UTC m=+0.214824903 container init e84905725fed7d9851d7361c760d68530964be0f11cdce74a514bd56512495bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:49:39 np0005464891 podman[283603]: 2025-10-01 16:49:39.356828357 +0000 UTC m=+0.222740132 container start e84905725fed7d9851d7361c760d68530964be0f11cdce74a514bd56512495bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:49:39 np0005464891 podman[283603]: 2025-10-01 16:49:39.361650059 +0000 UTC m=+0.227561834 container attach e84905725fed7d9851d7361c760d68530964be0f11cdce74a514bd56512495bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_swartz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.418 2 INFO nova.virt.libvirt.driver [-] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Instance destroyed successfully.#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.419 2 DEBUG nova.objects.instance [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lazy-loading 'resources' on Instance uuid 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.427 2 DEBUG nova.objects.instance [None req-34311c7b-196a-4c42-a5bc-669481cdd7df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lazy-loading 'flavor' on Instance uuid 01833916-f84a-425e-b28f-d214922d3126 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:49:39 np0005464891 neutron-haproxy-ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c[279259]: [NOTICE]   (279264) : haproxy version is 2.8.14-c23fe91
Oct  1 12:49:39 np0005464891 neutron-haproxy-ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c[279259]: [NOTICE]   (279264) : path to executable is /usr/sbin/haproxy
Oct  1 12:49:39 np0005464891 neutron-haproxy-ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c[279259]: [WARNING]  (279264) : Exiting Master process...
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.439 2 DEBUG nova.virt.libvirt.vif [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T16:47:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-405249637',display_name='tempest-TestStampPattern-server-405249637',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-405249637',id=6,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCck7nxcoGk0qQMqmOkhPfker9ncjX3MedwZy1gvsVFGYBG7D5wvyJC+lFiT/6un7wQpds+bs1FRdVcdDnlHzQimOGzqeJBoWgRzI2+A/i117tgAu+tGkXiUBUgSD0X9yA==',key_name='tempest-TestStampPattern-1388282123',keypairs=<?>,launch_index=0,launched_at=2025-10-01T16:48:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f395084b84f48d182c3be9d7961475e',ramdisk_id='',reservation_id='r-8f5t7auv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestStampPattern-305826503',owner_user_name='tempest-TestStampPattern-305826503-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T16:48:36Z,user_data=None,user_id='0a821557545f49ad9c15eee1cf0bd82b',uuid=4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "address": "fa:16:3e:21:ca:d4", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d498a06-e5", "ovs_interfaceid": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.439 2 DEBUG nova.network.os_vif_util [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Converting VIF {"id": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "address": "fa:16:3e:21:ca:d4", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d498a06-e5", "ovs_interfaceid": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:49:39 np0005464891 neutron-haproxy-ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c[279259]: [WARNING]  (279264) : Exiting Master process...
Oct  1 12:49:39 np0005464891 neutron-haproxy-ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c[279259]: [ALERT]    (279264) : Current worker (279266) exited with code 143 (Terminated)
Oct  1 12:49:39 np0005464891 neutron-haproxy-ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c[279259]: [WARNING]  (279264) : All workers exited. Exiting... (0)
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.441 2 DEBUG nova.network.os_vif_util [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:21:ca:d4,bridge_name='br-int',has_traffic_filtering=True,id=5d498a06-e5b8-4d33-87a1-cfc873bebe29,network=Network(0b8d6144-4eec-41cd-aaa9-d3e718f03c5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d498a06-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.441 2 DEBUG os_vif [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:21:ca:d4,bridge_name='br-int',has_traffic_filtering=True,id=5d498a06-e5b8-4d33-87a1-cfc873bebe29,network=Network(0b8d6144-4eec-41cd-aaa9-d3e718f03c5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d498a06-e5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:39 np0005464891 systemd[1]: libpod-d237d4a41decf4140d6c3f50755a403a34240d2dc903873eb50a48ca06995c8f.scope: Deactivated successfully.
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.444 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d498a06-e5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:49:39 np0005464891 podman[283649]: 2025-10-01 16:49:39.44804763 +0000 UTC m=+0.058487923 container died d237d4a41decf4140d6c3f50755a403a34240d2dc903873eb50a48ca06995c8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.453 2 INFO os_vif [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:21:ca:d4,bridge_name='br-int',has_traffic_filtering=True,id=5d498a06-e5b8-4d33-87a1-cfc873bebe29,network=Network(0b8d6144-4eec-41cd-aaa9-d3e718f03c5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d498a06-e5')#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.468 2 DEBUG oslo_concurrency.lockutils [None req-34311c7b-196a-4c42-a5bc-669481cdd7df 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "01833916-f84a-425e-b28f-d214922d3126" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.470 2 DEBUG oslo_concurrency.lockutils [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "01833916-f84a-425e-b28f-d214922d3126" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.470 2 DEBUG oslo_concurrency.lockutils [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquiring lock "01833916-f84a-425e-b28f-d214922d3126-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.470 2 DEBUG oslo_concurrency.lockutils [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "01833916-f84a-425e-b28f-d214922d3126-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.470 2 DEBUG oslo_concurrency.lockutils [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "01833916-f84a-425e-b28f-d214922d3126-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.471 2 INFO nova.compute.manager [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Terminating instance#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.472 2 DEBUG nova.compute.manager [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 12:49:39 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d237d4a41decf4140d6c3f50755a403a34240d2dc903873eb50a48ca06995c8f-userdata-shm.mount: Deactivated successfully.
Oct  1 12:49:39 np0005464891 systemd[1]: var-lib-containers-storage-overlay-3dfce6f35efeb4f353cb30dd790014d6091e7c67633b3e5cb04a494e31807604-merged.mount: Deactivated successfully.
Oct  1 12:49:39 np0005464891 podman[283649]: 2025-10-01 16:49:39.514233705 +0000 UTC m=+0.124673528 container cleanup d237d4a41decf4140d6c3f50755a403a34240d2dc903873eb50a48ca06995c8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  1 12:49:39 np0005464891 systemd[1]: libpod-conmon-d237d4a41decf4140d6c3f50755a403a34240d2dc903873eb50a48ca06995c8f.scope: Deactivated successfully.
Oct  1 12:49:39 np0005464891 kernel: tap31dd65ea-0b (unregistering): left promiscuous mode
Oct  1 12:49:39 np0005464891 NetworkManager[44940]: <info>  [1759337379.5445] device (tap31dd65ea-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 12:49:39 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:39Z|00100|binding|INFO|Releasing lport 31dd65ea-0bf2-4c61-a641-bff75a96926d from this chassis (sb_readonly=0)
Oct  1 12:49:39 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:39Z|00101|binding|INFO|Setting lport 31dd65ea-0bf2-4c61-a641-bff75a96926d down in Southbound
Oct  1 12:49:39 np0005464891 ovn_controller[152409]: 2025-10-01T16:49:39Z|00102|binding|INFO|Removing iface tap31dd65ea-0b ovn-installed in OVS
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:39.556 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:8b:e9 10.100.0.10'], port_security=['fa:16:3e:61:8b:e9 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '01833916-f84a-425e-b28f-d214922d3126', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3401e30b-97c6-4012-a9d4-0114c56bacd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '69d5fb4f7a0b4337a1b8774e04c97b9a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '337d1ee8-b54a-42da-a113-4004bc12381c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6048fd95-db94-4f1d-be7e-ff0b5269a1e3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=31dd65ea-0bf2-4c61-a641-bff75a96926d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:39 np0005464891 podman[283706]: 2025-10-01 16:49:39.597021487 +0000 UTC m=+0.059825530 container remove d237d4a41decf4140d6c3f50755a403a34240d2dc903873eb50a48ca06995c8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  1 12:49:39 np0005464891 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct  1 12:49:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:39.604 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[2a05f31b-d53d-4d87-8fc2-12793910d777]: (4, ('Wed Oct  1 04:49:39 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c (d237d4a41decf4140d6c3f50755a403a34240d2dc903873eb50a48ca06995c8f)\nd237d4a41decf4140d6c3f50755a403a34240d2dc903873eb50a48ca06995c8f\nWed Oct  1 04:49:39 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c (d237d4a41decf4140d6c3f50755a403a34240d2dc903873eb50a48ca06995c8f)\nd237d4a41decf4140d6c3f50755a403a34240d2dc903873eb50a48ca06995c8f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:39 np0005464891 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 14.304s CPU time.
Oct  1 12:49:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:39.606 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[27777c76-77fd-47bb-b901-986b4caa2956]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:39.607 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b8d6144-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:49:39 np0005464891 systemd-machined[214891]: Machine qemu-10-instance-0000000a terminated.
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:39 np0005464891 kernel: tap0b8d6144-40: left promiscuous mode
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:39.639 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[4afca587-d13e-4c20-b172-e8a732ba712a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:39.670 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[b9e25dfd-f946-4f5a-8147-7d58a6c58168]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:39.671 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[83f8228a-98a0-42a8-bf02-8dbcde431c76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:39.695 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[23b28f1c-d158-44f4-a7c9-f0700c044a25]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423073, 'reachable_time': 19255, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283727, 'error': None, 'target': 'ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:39.703 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0b8d6144-4eec-41cd-aaa9-d3e718f03c5c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 12:49:39 np0005464891 systemd[1]: run-netns-ovnmeta\x2d0b8d6144\x2d4eec\x2d41cd\x2daaa9\x2dd3e718f03c5c.mount: Deactivated successfully.
Oct  1 12:49:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:39.704 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[adab58c9-9fa7-4768-bea6-5cfd762bbf4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:39.706 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 31dd65ea-0bf2-4c61-a641-bff75a96926d in datapath 3401e30b-97c6-4012-a9d4-0114c56bacd5 unbound from our chassis#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:39.708 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3401e30b-97c6-4012-a9d4-0114c56bacd5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 12:49:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:39.715 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[d48e86db-00f8-4a09-ae9f-240e5909bb13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.718 2 INFO nova.virt.libvirt.driver [-] [instance: 01833916-f84a-425e-b28f-d214922d3126] Instance destroyed successfully.#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.719 2 DEBUG nova.objects.instance [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lazy-loading 'resources' on Instance uuid 01833916-f84a-425e-b28f-d214922d3126 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:49:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:39.719 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5 namespace which is not needed anymore#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.734 2 DEBUG nova.virt.libvirt.vif [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T16:49:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-633721332',display_name='tempest-VolumesSnapshotTestJSON-instance-633721332',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-633721332',id=10,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHQXtiJojKSjlX+L4+UcMOUZxDjM6YHarO/WRI6PZsXzV57BI1NGaQ5utimUiS/B2m/z/6TZx53P1GuknwcJ4JxYbnNCo1sgJq2vAVD/0YOb5f+MRSQ3HDMnQdqctYUuJw==',key_name='tempest-keypair-120189569',keypairs=<?>,launch_index=0,launched_at=2025-10-01T16:49:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='69d5fb4f7a0b4337a1b8774e04c97b9a',ramdisk_id='',reservation_id='r-opvs9h41',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-1941074907',owner_user_name='tempest-VolumesSnapshotTestJSON-1941074907-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T16:49:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3517dc72472c436aaf2fe65b5ce2f240',uuid=01833916-f84a-425e-b28f-d214922d3126,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31dd65ea-0bf2-4c61-a641-bff75a96926d", "address": "fa:16:3e:61:8b:e9", "network": {"id": "3401e30b-97c6-4012-a9d4-0114c56bacd5", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1585052793-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69d5fb4f7a0b4337a1b8774e04c97b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31dd65ea-0b", "ovs_interfaceid": "31dd65ea-0bf2-4c61-a641-bff75a96926d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.735 2 DEBUG nova.network.os_vif_util [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Converting VIF {"id": "31dd65ea-0bf2-4c61-a641-bff75a96926d", "address": "fa:16:3e:61:8b:e9", "network": {"id": "3401e30b-97c6-4012-a9d4-0114c56bacd5", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1585052793-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69d5fb4f7a0b4337a1b8774e04c97b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31dd65ea-0b", "ovs_interfaceid": "31dd65ea-0bf2-4c61-a641-bff75a96926d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.736 2 DEBUG nova.network.os_vif_util [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:61:8b:e9,bridge_name='br-int',has_traffic_filtering=True,id=31dd65ea-0bf2-4c61-a641-bff75a96926d,network=Network(3401e30b-97c6-4012-a9d4-0114c56bacd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31dd65ea-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.736 2 DEBUG os_vif [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:8b:e9,bridge_name='br-int',has_traffic_filtering=True,id=31dd65ea-0bf2-4c61-a641-bff75a96926d,network=Network(3401e30b-97c6-4012-a9d4-0114c56bacd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31dd65ea-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.739 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31dd65ea-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.746 2 INFO os_vif [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:8b:e9,bridge_name='br-int',has_traffic_filtering=True,id=31dd65ea-0bf2-4c61-a641-bff75a96926d,network=Network(3401e30b-97c6-4012-a9d4-0114c56bacd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31dd65ea-0b')#033[00m
Oct  1 12:49:39 np0005464891 neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5[282294]: [NOTICE]   (282298) : haproxy version is 2.8.14-c23fe91
Oct  1 12:49:39 np0005464891 neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5[282294]: [NOTICE]   (282298) : path to executable is /usr/sbin/haproxy
Oct  1 12:49:39 np0005464891 neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5[282294]: [WARNING]  (282298) : Exiting Master process...
Oct  1 12:49:39 np0005464891 neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5[282294]: [ALERT]    (282298) : Current worker (282300) exited with code 143 (Terminated)
Oct  1 12:49:39 np0005464891 neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5[282294]: [WARNING]  (282298) : All workers exited. Exiting... (0)
Oct  1 12:49:39 np0005464891 systemd[1]: libpod-3d6d3484c7d2317d0ca610af34594a1d3acf261b4854016fd9ff82236ed51339.scope: Deactivated successfully.
Oct  1 12:49:39 np0005464891 podman[283771]: 2025-10-01 16:49:39.92076267 +0000 UTC m=+0.077148007 container died 3d6d3484c7d2317d0ca610af34594a1d3acf261b4854016fd9ff82236ed51339 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:49:39 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3d6d3484c7d2317d0ca610af34594a1d3acf261b4854016fd9ff82236ed51339-userdata-shm.mount: Deactivated successfully.
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.992 2 INFO nova.virt.libvirt.driver [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Deleting instance files /var/lib/nova/instances/4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83_del#033[00m
Oct  1 12:49:39 np0005464891 nova_compute[259907]: 2025-10-01 16:49:39.993 2 INFO nova.virt.libvirt.driver [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Deletion of /var/lib/nova/instances/4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83_del complete#033[00m
Oct  1 12:49:40 np0005464891 podman[283771]: 2025-10-01 16:49:40.015315137 +0000 UTC m=+0.171700444 container cleanup 3d6d3484c7d2317d0ca610af34594a1d3acf261b4854016fd9ff82236ed51339 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  1 12:49:40 np0005464891 systemd[1]: libpod-conmon-3d6d3484c7d2317d0ca610af34594a1d3acf261b4854016fd9ff82236ed51339.scope: Deactivated successfully.
Oct  1 12:49:40 np0005464891 nova_compute[259907]: 2025-10-01 16:49:40.079 2 INFO nova.compute.manager [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Took 0.90 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 12:49:40 np0005464891 nova_compute[259907]: 2025-10-01 16:49:40.080 2 DEBUG oslo.service.loopingcall [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 12:49:40 np0005464891 nova_compute[259907]: 2025-10-01 16:49:40.081 2 DEBUG nova.compute.manager [-] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 12:49:40 np0005464891 nova_compute[259907]: 2025-10-01 16:49:40.082 2 DEBUG nova.network.neutron [-] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 12:49:40 np0005464891 podman[283802]: 2025-10-01 16:49:40.10758 +0000 UTC m=+0.065439775 container remove 3d6d3484c7d2317d0ca610af34594a1d3acf261b4854016fd9ff82236ed51339 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 12:49:40 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:40.117 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[e29eae97-c2f2-4df9-8163-0fab8f4afbee]: (4, ('Wed Oct  1 04:49:39 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5 (3d6d3484c7d2317d0ca610af34594a1d3acf261b4854016fd9ff82236ed51339)\n3d6d3484c7d2317d0ca610af34594a1d3acf261b4854016fd9ff82236ed51339\nWed Oct  1 04:49:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5 (3d6d3484c7d2317d0ca610af34594a1d3acf261b4854016fd9ff82236ed51339)\n3d6d3484c7d2317d0ca610af34594a1d3acf261b4854016fd9ff82236ed51339\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:40 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:40.120 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[b9bc023a-bd8f-45e3-b85f-f789c80c30c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:40 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:40.121 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3401e30b-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:49:40 np0005464891 nova_compute[259907]: 2025-10-01 16:49:40.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:40 np0005464891 kernel: tap3401e30b-90: left promiscuous mode
Oct  1 12:49:40 np0005464891 nova_compute[259907]: 2025-10-01 16:49:40.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:40 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:40.149 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[b7fdfef9-733d-4308-88e3-928b3da14f1a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:40 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:40.176 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[d0296978-8ebc-4831-80fb-50c83f4a8be9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]: {
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:    "0": [
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:        {
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "devices": [
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "/dev/loop3"
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            ],
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "lv_name": "ceph_lv0",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "lv_size": "21470642176",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "name": "ceph_lv0",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "tags": {
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.cluster_name": "ceph",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.crush_device_class": "",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.encrypted": "0",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.osd_id": "0",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.type": "block",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.vdo": "0"
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            },
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "type": "block",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "vg_name": "ceph_vg0"
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:        }
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:    ],
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:    "1": [
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:        {
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "devices": [
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "/dev/loop4"
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            ],
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "lv_name": "ceph_lv1",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "lv_size": "21470642176",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "name": "ceph_lv1",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "tags": {
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.cluster_name": "ceph",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.crush_device_class": "",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.encrypted": "0",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.osd_id": "1",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.type": "block",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.vdo": "0"
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            },
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "type": "block",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "vg_name": "ceph_vg1"
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:        }
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:    ],
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:    "2": [
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:        {
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "devices": [
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "/dev/loop5"
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            ],
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "lv_name": "ceph_lv2",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "lv_size": "21470642176",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "name": "ceph_lv2",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "tags": {
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.cluster_name": "ceph",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.crush_device_class": "",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.encrypted": "0",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.osd_id": "2",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.type": "block",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:                "ceph.vdo": "0"
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            },
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "type": "block",
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:            "vg_name": "ceph_vg2"
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:        }
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]:    ]
Oct  1 12:49:40 np0005464891 recursing_swartz[283623]: }
Oct  1 12:49:40 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:40.177 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[5c75c443-f368-4bca-8475-7086306a8db1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:40 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:40.194 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[9e0fc2de-6224-446a-857e-5e297fa34867]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429721, 'reachable_time': 27083, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283819, 'error': None, 'target': 'ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:40 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:40.195 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3401e30b-97c6-4012-a9d4-0114c56bacd5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 12:49:40 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:40.195 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[84695d49-6b8d-483d-add0-ca3dfd13a6f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:49:40 np0005464891 systemd[1]: libpod-e84905725fed7d9851d7361c760d68530964be0f11cdce74a514bd56512495bd.scope: Deactivated successfully.
Oct  1 12:49:40 np0005464891 podman[283603]: 2025-10-01 16:49:40.205649923 +0000 UTC m=+1.071561708 container died e84905725fed7d9851d7361c760d68530964be0f11cdce74a514bd56512495bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:49:40 np0005464891 systemd[1]: var-lib-containers-storage-overlay-86dcdadaf3ab277f3fa50b8d50f8b37e268df14c1557a500dfbd0f320497110e-merged.mount: Deactivated successfully.
Oct  1 12:49:40 np0005464891 systemd[1]: run-netns-ovnmeta\x2d3401e30b\x2d97c6\x2d4012\x2da9d4\x2d0114c56bacd5.mount: Deactivated successfully.
Oct  1 12:49:40 np0005464891 systemd[1]: var-lib-containers-storage-overlay-970ee71861a9a249b8b9eea8af04404a0101c35566a9394bd963a83f3da731c8-merged.mount: Deactivated successfully.
Oct  1 12:49:40 np0005464891 podman[283603]: 2025-10-01 16:49:40.272862436 +0000 UTC m=+1.138774231 container remove e84905725fed7d9851d7361c760d68530964be0f11cdce74a514bd56512495bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_swartz, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:49:40 np0005464891 systemd[1]: libpod-conmon-e84905725fed7d9851d7361c760d68530964be0f11cdce74a514bd56512495bd.scope: Deactivated successfully.
Oct  1 12:49:40 np0005464891 nova_compute[259907]: 2025-10-01 16:49:40.288 2 INFO nova.virt.libvirt.driver [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Deleting instance files /var/lib/nova/instances/01833916-f84a-425e-b28f-d214922d3126_del#033[00m
Oct  1 12:49:40 np0005464891 nova_compute[259907]: 2025-10-01 16:49:40.288 2 INFO nova.virt.libvirt.driver [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Deletion of /var/lib/nova/instances/01833916-f84a-425e-b28f-d214922d3126_del complete#033[00m
Oct  1 12:49:40 np0005464891 nova_compute[259907]: 2025-10-01 16:49:40.316 2 DEBUG nova.compute.manager [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Received event network-changed-5d498a06-e5b8-4d33-87a1-cfc873bebe29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:49:40 np0005464891 nova_compute[259907]: 2025-10-01 16:49:40.316 2 DEBUG nova.compute.manager [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Refreshing instance network info cache due to event network-changed-5d498a06-e5b8-4d33-87a1-cfc873bebe29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:49:40 np0005464891 nova_compute[259907]: 2025-10-01 16:49:40.317 2 DEBUG oslo_concurrency.lockutils [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:49:40 np0005464891 nova_compute[259907]: 2025-10-01 16:49:40.317 2 DEBUG oslo_concurrency.lockutils [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:49:40 np0005464891 nova_compute[259907]: 2025-10-01 16:49:40.317 2 DEBUG nova.network.neutron [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Refreshing network info cache for port 5d498a06-e5b8-4d33-87a1-cfc873bebe29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:49:40 np0005464891 nova_compute[259907]: 2025-10-01 16:49:40.345 2 INFO nova.compute.manager [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 12:49:40 np0005464891 nova_compute[259907]: 2025-10-01 16:49:40.345 2 DEBUG oslo.service.loopingcall [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 12:49:40 np0005464891 nova_compute[259907]: 2025-10-01 16:49:40.346 2 DEBUG nova.compute.manager [-] [instance: 01833916-f84a-425e-b28f-d214922d3126] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 12:49:40 np0005464891 nova_compute[259907]: 2025-10-01 16:49:40.346 2 DEBUG nova.network.neutron [-] [instance: 01833916-f84a-425e-b28f-d214922d3126] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 12:49:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Oct  1 12:49:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Oct  1 12:49:40 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Oct  1 12:49:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:49:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Oct  1 12:49:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Oct  1 12:49:40 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Oct  1 12:49:40 np0005464891 nova_compute[259907]: 2025-10-01 16:49:40.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 186 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 185 KiB/s rd, 15 KiB/s wr, 274 op/s
Oct  1 12:49:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:49:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3032493478' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:49:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:49:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3032493478' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:49:41 np0005464891 nova_compute[259907]: 2025-10-01 16:49:41.009 2 DEBUG nova.network.neutron [-] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:49:41 np0005464891 nova_compute[259907]: 2025-10-01 16:49:41.052 2 INFO nova.compute.manager [-] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Took 0.97 seconds to deallocate network for instance.#033[00m
Oct  1 12:49:41 np0005464891 nova_compute[259907]: 2025-10-01 16:49:41.105 2 DEBUG oslo_concurrency.lockutils [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:41 np0005464891 nova_compute[259907]: 2025-10-01 16:49:41.106 2 DEBUG oslo_concurrency.lockutils [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:41 np0005464891 nova_compute[259907]: 2025-10-01 16:49:41.172 2 DEBUG oslo_concurrency.processutils [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:49:41 np0005464891 podman[283974]: 2025-10-01 16:49:41.211900129 +0000 UTC m=+0.067046959 container create 2ecdfe52b559bd87f7ca15828740167d996e238e1e4caef29431f2c26885a4f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_noether, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:49:41 np0005464891 systemd[1]: Started libpod-conmon-2ecdfe52b559bd87f7ca15828740167d996e238e1e4caef29431f2c26885a4f4.scope.
Oct  1 12:49:41 np0005464891 podman[283974]: 2025-10-01 16:49:41.185307326 +0000 UTC m=+0.040454196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:49:41 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:49:41 np0005464891 podman[283974]: 2025-10-01 16:49:41.31312742 +0000 UTC m=+0.168274240 container init 2ecdfe52b559bd87f7ca15828740167d996e238e1e4caef29431f2c26885a4f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_noether, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:49:41 np0005464891 podman[283974]: 2025-10-01 16:49:41.320216695 +0000 UTC m=+0.175363485 container start 2ecdfe52b559bd87f7ca15828740167d996e238e1e4caef29431f2c26885a4f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_noether, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 12:49:41 np0005464891 podman[283974]: 2025-10-01 16:49:41.323975119 +0000 UTC m=+0.179121929 container attach 2ecdfe52b559bd87f7ca15828740167d996e238e1e4caef29431f2c26885a4f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_noether, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Oct  1 12:49:41 np0005464891 nervous_noether[283991]: 167 167
Oct  1 12:49:41 np0005464891 systemd[1]: libpod-2ecdfe52b559bd87f7ca15828740167d996e238e1e4caef29431f2c26885a4f4.scope: Deactivated successfully.
Oct  1 12:49:41 np0005464891 podman[283974]: 2025-10-01 16:49:41.326951051 +0000 UTC m=+0.182097871 container died 2ecdfe52b559bd87f7ca15828740167d996e238e1e4caef29431f2c26885a4f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_noether, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 12:49:41 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ff2319dc4e60e7d16175d9d1f5579843361e9f9ac0522e01d3652fdb2a979dd5-merged.mount: Deactivated successfully.
Oct  1 12:49:41 np0005464891 podman[283974]: 2025-10-01 16:49:41.36755073 +0000 UTC m=+0.222697520 container remove 2ecdfe52b559bd87f7ca15828740167d996e238e1e4caef29431f2c26885a4f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_noether, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 12:49:41 np0005464891 systemd[1]: libpod-conmon-2ecdfe52b559bd87f7ca15828740167d996e238e1e4caef29431f2c26885a4f4.scope: Deactivated successfully.
Oct  1 12:49:41 np0005464891 nova_compute[259907]: 2025-10-01 16:49:41.408 2 DEBUG nova.network.neutron [-] [instance: 01833916-f84a-425e-b28f-d214922d3126] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:49:41 np0005464891 nova_compute[259907]: 2025-10-01 16:49:41.429 2 INFO nova.compute.manager [-] [instance: 01833916-f84a-425e-b28f-d214922d3126] Took 1.08 seconds to deallocate network for instance.#033[00m
Oct  1 12:49:41 np0005464891 nova_compute[259907]: 2025-10-01 16:49:41.483 2 DEBUG nova.compute.manager [req-712796b0-b888-4db6-8ca8-c660adf38af4 req-80044b71-360e-4f3e-b637-d807c8e15634 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Received event network-vif-deleted-31dd65ea-0bf2-4c61-a641-bff75a96926d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:49:41 np0005464891 nova_compute[259907]: 2025-10-01 16:49:41.537 2 WARNING nova.volume.cinder [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Attachment 476ac6e2-50e3-41b4-8b89-1339f0d9052d does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = 476ac6e2-50e3-41b4-8b89-1339f0d9052d. (HTTP 404) (Request-ID: req-cf8095b9-cbb9-48b2-817a-df9eece0c208)#033[00m
Oct  1 12:49:41 np0005464891 nova_compute[259907]: 2025-10-01 16:49:41.538 2 INFO nova.compute.manager [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Took 0.11 seconds to detach 1 volumes for instance.#033[00m
Oct  1 12:49:41 np0005464891 podman[284034]: 2025-10-01 16:49:41.50724137 +0000 UTC m=+0.024355072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:49:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:49:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/677103007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:49:41 np0005464891 podman[284034]: 2025-10-01 16:49:41.69773129 +0000 UTC m=+0.214845012 container create 3b7d7d39f6eb5babad143a6d0d14a797f9858c487542e72bb268c75f5d50284c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_varahamihira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 12:49:41 np0005464891 nova_compute[259907]: 2025-10-01 16:49:41.728 2 DEBUG oslo_concurrency.lockutils [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:41 np0005464891 nova_compute[259907]: 2025-10-01 16:49:41.729 2 DEBUG oslo_concurrency.processutils [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:49:41 np0005464891 nova_compute[259907]: 2025-10-01 16:49:41.738 2 DEBUG nova.compute.provider_tree [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:49:41 np0005464891 systemd[1]: Started libpod-conmon-3b7d7d39f6eb5babad143a6d0d14a797f9858c487542e72bb268c75f5d50284c.scope.
Oct  1 12:49:41 np0005464891 nova_compute[259907]: 2025-10-01 16:49:41.750 2 DEBUG nova.scheduler.client.report [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:49:41 np0005464891 nova_compute[259907]: 2025-10-01 16:49:41.772 2 DEBUG oslo_concurrency.lockutils [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:41 np0005464891 nova_compute[259907]: 2025-10-01 16:49:41.775 2 DEBUG oslo_concurrency.lockutils [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:41 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:49:41 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eaa2befb0e14d4c10d6112095ec270e763b65336fc5848f8d770236af90f9d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:49:41 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eaa2befb0e14d4c10d6112095ec270e763b65336fc5848f8d770236af90f9d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:49:41 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eaa2befb0e14d4c10d6112095ec270e763b65336fc5848f8d770236af90f9d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:49:41 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eaa2befb0e14d4c10d6112095ec270e763b65336fc5848f8d770236af90f9d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:49:41 np0005464891 nova_compute[259907]: 2025-10-01 16:49:41.800 2 INFO nova.scheduler.client.report [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Deleted allocations for instance 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83#033[00m
Oct  1 12:49:41 np0005464891 podman[284034]: 2025-10-01 16:49:41.820739501 +0000 UTC m=+0.337853223 container init 3b7d7d39f6eb5babad143a6d0d14a797f9858c487542e72bb268c75f5d50284c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:49:41 np0005464891 podman[284034]: 2025-10-01 16:49:41.837254836 +0000 UTC m=+0.354368518 container start 3b7d7d39f6eb5babad143a6d0d14a797f9858c487542e72bb268c75f5d50284c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Oct  1 12:49:41 np0005464891 podman[284034]: 2025-10-01 16:49:41.840403723 +0000 UTC m=+0.357517495 container attach 3b7d7d39f6eb5babad143a6d0d14a797f9858c487542e72bb268c75f5d50284c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 12:49:41 np0005464891 nova_compute[259907]: 2025-10-01 16:49:41.840 2 DEBUG oslo_concurrency.processutils [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:49:41 np0005464891 nova_compute[259907]: 2025-10-01 16:49:41.901 2 DEBUG oslo_concurrency.lockutils [None req-da111177-a667-4c92-b77d-c83b8b5bbe68 0a821557545f49ad9c15eee1cf0bd82b 1f395084b84f48d182c3be9d7961475e - - default default] Lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:49:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:49:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:49:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:49:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:49:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.115 2 DEBUG nova.network.neutron [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Updated VIF entry in instance network info cache for port 5d498a06-e5b8-4d33-87a1-cfc873bebe29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.116 2 DEBUG nova.network.neutron [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Updating instance_info_cache with network_info: [{"id": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "address": "fa:16:3e:21:ca:d4", "network": {"id": "0b8d6144-4eec-41cd-aaa9-d3e718f03c5c", "bridge": "br-int", "label": "tempest-TestStampPattern-1050348466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f395084b84f48d182c3be9d7961475e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d498a06-e5", "ovs_interfaceid": "5d498a06-e5b8-4d33-87a1-cfc873bebe29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.135 2 DEBUG oslo_concurrency.lockutils [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.135 2 DEBUG nova.compute.manager [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Received event network-vif-unplugged-5d498a06-e5b8-4d33-87a1-cfc873bebe29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.136 2 DEBUG oslo_concurrency.lockutils [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.136 2 DEBUG oslo_concurrency.lockutils [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.136 2 DEBUG oslo_concurrency.lockutils [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.137 2 DEBUG nova.compute.manager [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] No waiting events found dispatching network-vif-unplugged-5d498a06-e5b8-4d33-87a1-cfc873bebe29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.137 2 DEBUG nova.compute.manager [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Received event network-vif-unplugged-5d498a06-e5b8-4d33-87a1-cfc873bebe29 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.137 2 DEBUG nova.compute.manager [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Received event network-vif-plugged-5d498a06-e5b8-4d33-87a1-cfc873bebe29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.137 2 DEBUG oslo_concurrency.lockutils [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.138 2 DEBUG oslo_concurrency.lockutils [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.138 2 DEBUG oslo_concurrency.lockutils [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.138 2 DEBUG nova.compute.manager [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] No waiting events found dispatching network-vif-plugged-5d498a06-e5b8-4d33-87a1-cfc873bebe29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.139 2 WARNING nova.compute.manager [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Received unexpected event network-vif-plugged-5d498a06-e5b8-4d33-87a1-cfc873bebe29 for instance with vm_state active and task_state deleting.#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.139 2 DEBUG nova.compute.manager [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Received event network-vif-unplugged-31dd65ea-0bf2-4c61-a641-bff75a96926d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.139 2 DEBUG oslo_concurrency.lockutils [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "01833916-f84a-425e-b28f-d214922d3126-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.139 2 DEBUG oslo_concurrency.lockutils [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "01833916-f84a-425e-b28f-d214922d3126-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.140 2 DEBUG oslo_concurrency.lockutils [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "01833916-f84a-425e-b28f-d214922d3126-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.140 2 DEBUG nova.compute.manager [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] No waiting events found dispatching network-vif-unplugged-31dd65ea-0bf2-4c61-a641-bff75a96926d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.140 2 DEBUG nova.compute.manager [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Received event network-vif-unplugged-31dd65ea-0bf2-4c61-a641-bff75a96926d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.140 2 DEBUG nova.compute.manager [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Received event network-vif-plugged-31dd65ea-0bf2-4c61-a641-bff75a96926d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.141 2 DEBUG oslo_concurrency.lockutils [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "01833916-f84a-425e-b28f-d214922d3126-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.141 2 DEBUG oslo_concurrency.lockutils [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "01833916-f84a-425e-b28f-d214922d3126-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.141 2 DEBUG oslo_concurrency.lockutils [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "01833916-f84a-425e-b28f-d214922d3126-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.142 2 DEBUG nova.compute.manager [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] No waiting events found dispatching network-vif-plugged-31dd65ea-0bf2-4c61-a641-bff75a96926d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.142 2 WARNING nova.compute.manager [req-5d0ada28-22d8-4c99-bafd-d9a9bf866e64 req-f320fe42-7059-4773-9458-d021ae783600 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 01833916-f84a-425e-b28f-d214922d3126] Received unexpected event network-vif-plugged-31dd65ea-0bf2-4c61-a641-bff75a96926d for instance with vm_state active and task_state deleting.#033[00m
Oct  1 12:49:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:49:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1417427142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.332 2 DEBUG oslo_concurrency.processutils [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.339 2 DEBUG nova.compute.provider_tree [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.357 2 DEBUG nova.scheduler.client.report [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.382 2 DEBUG oslo_concurrency.lockutils [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.394 2 DEBUG nova.compute.manager [req-78039e70-73a8-41ba-a948-7bfffc549f10 req-b8502d08-99db-4cc4-91f0-7274a7262557 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Received event network-vif-deleted-5d498a06-e5b8-4d33-87a1-cfc873bebe29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.404 2 INFO nova.scheduler.client.report [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Deleted allocations for instance 01833916-f84a-425e-b28f-d214922d3126#033[00m
Oct  1 12:49:42 np0005464891 nova_compute[259907]: 2025-10-01 16:49:42.476 2 DEBUG oslo_concurrency.lockutils [None req-0e2c1274-cc9b-4d62-8ff7-e8a330d1eaee 3517dc72472c436aaf2fe65b5ce2f240 69d5fb4f7a0b4337a1b8774e04c97b9a - - default default] Lock "01833916-f84a-425e-b28f-d214922d3126" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:49:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:49:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3214012410' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:49:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:49:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3214012410' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:49:42 np0005464891 competent_varahamihira[284052]: {
Oct  1 12:49:42 np0005464891 competent_varahamihira[284052]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:49:42 np0005464891 competent_varahamihira[284052]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:49:42 np0005464891 competent_varahamihira[284052]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:49:42 np0005464891 competent_varahamihira[284052]:        "osd_id": 2,
Oct  1 12:49:42 np0005464891 competent_varahamihira[284052]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:49:42 np0005464891 competent_varahamihira[284052]:        "type": "bluestore"
Oct  1 12:49:42 np0005464891 competent_varahamihira[284052]:    },
Oct  1 12:49:42 np0005464891 competent_varahamihira[284052]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:49:42 np0005464891 competent_varahamihira[284052]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:49:42 np0005464891 competent_varahamihira[284052]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:49:42 np0005464891 competent_varahamihira[284052]:        "osd_id": 0,
Oct  1 12:49:42 np0005464891 competent_varahamihira[284052]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:49:42 np0005464891 competent_varahamihira[284052]:        "type": "bluestore"
Oct  1 12:49:42 np0005464891 competent_varahamihira[284052]:    },
Oct  1 12:49:42 np0005464891 competent_varahamihira[284052]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:49:42 np0005464891 competent_varahamihira[284052]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:49:42 np0005464891 competent_varahamihira[284052]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:49:42 np0005464891 competent_varahamihira[284052]:        "osd_id": 1,
Oct  1 12:49:42 np0005464891 competent_varahamihira[284052]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:49:42 np0005464891 competent_varahamihira[284052]:        "type": "bluestore"
Oct  1 12:49:42 np0005464891 competent_varahamihira[284052]:    }
Oct  1 12:49:42 np0005464891 competent_varahamihira[284052]: }
Oct  1 12:49:42 np0005464891 systemd[1]: libpod-3b7d7d39f6eb5babad143a6d0d14a797f9858c487542e72bb268c75f5d50284c.scope: Deactivated successfully.
Oct  1 12:49:42 np0005464891 podman[284034]: 2025-10-01 16:49:42.86225201 +0000 UTC m=+1.379365722 container died 3b7d7d39f6eb5babad143a6d0d14a797f9858c487542e72bb268c75f5d50284c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_varahamihira, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  1 12:49:42 np0005464891 systemd[1]: libpod-3b7d7d39f6eb5babad143a6d0d14a797f9858c487542e72bb268c75f5d50284c.scope: Consumed 1.027s CPU time.
Oct  1 12:49:42 np0005464891 systemd[1]: var-lib-containers-storage-overlay-0eaa2befb0e14d4c10d6112095ec270e763b65336fc5848f8d770236af90f9d5-merged.mount: Deactivated successfully.
Oct  1 12:49:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1283: 321 pgs: 321 active+clean; 137 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 191 KiB/s rd, 14 KiB/s wr, 274 op/s
Oct  1 12:49:42 np0005464891 podman[284034]: 2025-10-01 16:49:42.945412612 +0000 UTC m=+1.462526304 container remove 3b7d7d39f6eb5babad143a6d0d14a797f9858c487542e72bb268c75f5d50284c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_varahamihira, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:49:42 np0005464891 systemd[1]: libpod-conmon-3b7d7d39f6eb5babad143a6d0d14a797f9858c487542e72bb268c75f5d50284c.scope: Deactivated successfully.
Oct  1 12:49:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:49:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:49:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:49:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:49:43 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 6a8aeda5-4957-469a-b53d-6f0f0d57e777 does not exist
Oct  1 12:49:43 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 5d8872f9-1518-4180-98f6-c23d2207b249 does not exist
Oct  1 12:49:43 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:49:43 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:49:44 np0005464891 nova_compute[259907]: 2025-10-01 16:49:44.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1284: 321 pgs: 321 active+clean; 108 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 800 KiB/s rd, 13 KiB/s wr, 272 op/s
Oct  1 12:49:45 np0005464891 nova_compute[259907]: 2025-10-01 16:49:45.050 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759337370.049446, b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:49:45 np0005464891 nova_compute[259907]: 2025-10-01 16:49:45.051 2 INFO nova.compute.manager [-] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] VM Stopped (Lifecycle Event)#033[00m
Oct  1 12:49:45 np0005464891 nova_compute[259907]: 2025-10-01 16:49:45.073 2 DEBUG nova.compute.manager [None req-1f8d3360-750c-4047-854f-2519b391a649 - - - - - -] [instance: b51ebb4b-e2f0-41d6-9f11-b5b48ccb7183] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:49:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:49:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Oct  1 12:49:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Oct  1 12:49:45 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Oct  1 12:49:45 np0005464891 nova_compute[259907]: 2025-10-01 16:49:45.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1286: 321 pgs: 321 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.5 KiB/s wr, 140 op/s
Oct  1 12:49:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Oct  1 12:49:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Oct  1 12:49:47 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Oct  1 12:49:47 np0005464891 nova_compute[259907]: 2025-10-01 16:49:47.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:47 np0005464891 nova_compute[259907]: 2025-10-01 16:49:47.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1288: 321 pgs: 321 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.9 KiB/s wr, 132 op/s
Oct  1 12:49:49 np0005464891 nova_compute[259907]: 2025-10-01 16:49:49.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:49:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2158218420' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:49:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:49:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2158218420' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:49:50 np0005464891 nova_compute[259907]: 2025-10-01 16:49:50.149 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759337375.1467304, 347eacbc-b9bd-4163-bc2e-a49a19a833c3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:49:50 np0005464891 nova_compute[259907]: 2025-10-01 16:49:50.149 2 INFO nova.compute.manager [-] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] VM Stopped (Lifecycle Event)#033[00m
Oct  1 12:49:50 np0005464891 nova_compute[259907]: 2025-10-01 16:49:50.176 2 DEBUG nova.compute.manager [None req-6e7ba6ed-97d2-46e4-bfce-93b0ccd4e353 - - - - - -] [instance: 347eacbc-b9bd-4163-bc2e-a49a19a833c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:49:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:49:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1286724466' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:49:50 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:50.624 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:49:50 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:50.625 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 12:49:50 np0005464891 nova_compute[259907]: 2025-10-01 16:49:50.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:49:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Oct  1 12:49:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Oct  1 12:49:50 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Oct  1 12:49:50 np0005464891 nova_compute[259907]: 2025-10-01 16:49:50.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1290: 321 pgs: 321 active+clean; 117 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.7 MiB/s wr, 141 op/s
Oct  1 12:49:50 np0005464891 podman[284174]: 2025-10-01 16:49:50.9840402 +0000 UTC m=+0.090024752 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  1 12:49:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Oct  1 12:49:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Oct  1 12:49:52 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Oct  1 12:49:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1292: 321 pgs: 321 active+clean; 152 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.5 MiB/s wr, 166 op/s
Oct  1 12:49:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:49:54 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3578376291' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:49:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:49:54 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3578376291' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:49:54 np0005464891 nova_compute[259907]: 2025-10-01 16:49:54.420 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759337379.4181266, 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:49:54 np0005464891 nova_compute[259907]: 2025-10-01 16:49:54.420 2 INFO nova.compute.manager [-] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] VM Stopped (Lifecycle Event)#033[00m
Oct  1 12:49:54 np0005464891 nova_compute[259907]: 2025-10-01 16:49:54.436 2 DEBUG nova.compute.manager [None req-c9d0326b-ca47-46f6-8c32-ea60c4547c86 - - - - - -] [instance: 4b6bebc9-6fef-4cfa-a12b-5befb0b9eb83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:49:54 np0005464891 nova_compute[259907]: 2025-10-01 16:49:54.716 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759337379.7148452, 01833916-f84a-425e-b28f-d214922d3126 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:49:54 np0005464891 nova_compute[259907]: 2025-10-01 16:49:54.717 2 INFO nova.compute.manager [-] [instance: 01833916-f84a-425e-b28f-d214922d3126] VM Stopped (Lifecycle Event)#033[00m
Oct  1 12:49:54 np0005464891 nova_compute[259907]: 2025-10-01 16:49:54.750 2 DEBUG nova.compute.manager [None req-60fe9c3b-94e8-4a6a-9174-4b6d7ac52c17 - - - - - -] [instance: 01833916-f84a-425e-b28f-d214922d3126] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:49:54 np0005464891 nova_compute[259907]: 2025-10-01 16:49:54.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 162 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.8 MiB/s wr, 183 op/s
Oct  1 12:49:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:49:55 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3789319400' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:49:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:49:55 np0005464891 nova_compute[259907]: 2025-10-01 16:49:55.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:49:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Oct  1 12:49:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Oct  1 12:49:56 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Oct  1 12:49:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:49:56 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2768467551' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:49:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 180 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.2 MiB/s wr, 129 op/s
Oct  1 12:49:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Oct  1 12:49:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Oct  1 12:49:57 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Oct  1 12:49:57 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:49:57.628 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:49:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Oct  1 12:49:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Oct  1 12:49:58 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Oct  1 12:49:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:49:58 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3847740105' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:49:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:49:58 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3847740105' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:49:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1298: 321 pgs: 321 active+clean; 188 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.8 MiB/s wr, 124 op/s
Oct  1 12:49:59 np0005464891 nova_compute[259907]: 2025-10-01 16:49:59.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Oct  1 12:50:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Oct  1 12:50:00 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Oct  1 12:50:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:00 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3167476825' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:00 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3167476825' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:50:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Oct  1 12:50:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Oct  1 12:50:00 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Oct  1 12:50:00 np0005464891 nova_compute[259907]: 2025-10-01 16:50:00.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1301: 321 pgs: 321 active+clean; 188 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 5.3 MiB/s wr, 282 op/s
Oct  1 12:50:01 np0005464891 podman[284193]: 2025-10-01 16:50:01.007903171 +0000 UTC m=+0.111789312 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  1 12:50:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Oct  1 12:50:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Oct  1 12:50:01 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Oct  1 12:50:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Oct  1 12:50:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Oct  1 12:50:02 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Oct  1 12:50:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 312 active+clean; 162 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.6 MiB/s wr, 386 op/s
Oct  1 12:50:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:03 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2444065085' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:03 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2444065085' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:03 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3075537533' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:03 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3075537533' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:03 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2129096837' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:03 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2129096837' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:03 np0005464891 podman[284219]: 2025-10-01 16:50:03.993798475 +0000 UTC m=+0.093613531 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd)
Oct  1 12:50:04 np0005464891 nova_compute[259907]: 2025-10-01 16:50:04.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1305: 321 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 312 active+clean; 139 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 147 KiB/s rd, 14 KiB/s wr, 207 op/s
Oct  1 12:50:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:05 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1309594148' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:05 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1309594148' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:05 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3351871100' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:05 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3351871100' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:50:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Oct  1 12:50:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Oct  1 12:50:05 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Oct  1 12:50:05 np0005464891 nova_compute[259907]: 2025-10-01 16:50:05.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:50:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1584988612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:50:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2786581322' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2786581322' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Oct  1 12:50:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Oct  1 12:50:06 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Oct  1 12:50:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 312 active+clean; 105 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 179 KiB/s rd, 15 KiB/s wr, 252 op/s
Oct  1 12:50:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:07 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2439053463' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:07 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2439053463' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Oct  1 12:50:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Oct  1 12:50:07 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Oct  1 12:50:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:08 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/694237264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:08 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/694237264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1310: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 7.3 KiB/s wr, 215 op/s
Oct  1 12:50:08 np0005464891 podman[284239]: 2025-10-01 16:50:08.976803428 +0000 UTC m=+0.084215052 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  1 12:50:09 np0005464891 nova_compute[259907]: 2025-10-01 16:50:09.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:50:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Oct  1 12:50:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Oct  1 12:50:10 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Oct  1 12:50:10 np0005464891 nova_compute[259907]: 2025-10-01 16:50:10.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1312: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 152 KiB/s rd, 5.9 KiB/s wr, 201 op/s
Oct  1 12:50:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:50:10 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1926144259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:50:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/951808638' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/951808638' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Oct  1 12:50:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Oct  1 12:50:11 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Oct  1 12:50:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:50:12
Oct  1 12:50:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:50:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:50:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['images', 'volumes', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', '.rgw.root', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control']
Oct  1 12:50:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:50:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:50:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:50:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:50:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:50:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:50:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:50:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:50:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:50:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:50:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:50:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:50:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:50:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:50:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:50:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:50:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:50:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:12.452 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:50:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:12.453 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:50:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:12.453 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:50:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1314: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 158 KiB/s rd, 6.5 KiB/s wr, 210 op/s
Oct  1 12:50:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Oct  1 12:50:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Oct  1 12:50:13 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Oct  1 12:50:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1543367628' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1543367628' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:14 np0005464891 nova_compute[259907]: 2025-10-01 16:50:14.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Oct  1 12:50:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Oct  1 12:50:14 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Oct  1 12:50:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1317: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 5.2 KiB/s wr, 145 op/s
Oct  1 12:50:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:50:15 np0005464891 nova_compute[259907]: 2025-10-01 16:50:15.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Oct  1 12:50:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Oct  1 12:50:16 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Oct  1 12:50:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 4.6 KiB/s wr, 147 op/s
Oct  1 12:50:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/914179693' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/914179693' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Oct  1 12:50:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Oct  1 12:50:17 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Oct  1 12:50:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1321: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 171 KiB/s rd, 6.9 KiB/s wr, 225 op/s
Oct  1 12:50:19 np0005464891 nova_compute[259907]: 2025-10-01 16:50:19.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:50:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Oct  1 12:50:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Oct  1 12:50:20 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Oct  1 12:50:20 np0005464891 nova_compute[259907]: 2025-10-01 16:50:20.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1323: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 6.8 KiB/s wr, 158 op/s
Oct  1 12:50:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4100927490' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4100927490' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Oct  1 12:50:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Oct  1 12:50:21 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Oct  1 12:50:21 np0005464891 podman[284259]: 2025-10-01 16:50:21.963414892 +0000 UTC m=+0.071205604 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:50:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:50:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:50:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:50:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:50:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 12:50:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:50:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00034739603682359835 of space, bias 1.0, pg target 0.1042188110470795 quantized to 32 (current 32)
Oct  1 12:50:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:50:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 12:50:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:50:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  1 12:50:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:50:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:50:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:50:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:50:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:50:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:50:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:50:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:50:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:50:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:50:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:50:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:50:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Oct  1 12:50:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Oct  1 12:50:22 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Oct  1 12:50:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/200489532' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/200489532' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1326: 321 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 312 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 10 KiB/s wr, 122 op/s
Oct  1 12:50:23 np0005464891 nova_compute[259907]: 2025-10-01 16:50:23.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:50:23 np0005464891 nova_compute[259907]: 2025-10-01 16:50:23.805 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 12:50:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2328806694' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2328806694' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2131636045' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2131636045' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:24 np0005464891 nova_compute[259907]: 2025-10-01 16:50:24.476 2 DEBUG oslo_concurrency.lockutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquiring lock "dc697861-16c7-4baa-8c59-84deb0c0b65c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:50:24 np0005464891 nova_compute[259907]: 2025-10-01 16:50:24.476 2 DEBUG oslo_concurrency.lockutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "dc697861-16c7-4baa-8c59-84deb0c0b65c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:50:24 np0005464891 nova_compute[259907]: 2025-10-01 16:50:24.502 2 DEBUG nova.compute.manager [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 12:50:24 np0005464891 nova_compute[259907]: 2025-10-01 16:50:24.604 2 DEBUG oslo_concurrency.lockutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:50:24 np0005464891 nova_compute[259907]: 2025-10-01 16:50:24.606 2 DEBUG oslo_concurrency.lockutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:50:24 np0005464891 nova_compute[259907]: 2025-10-01 16:50:24.618 2 DEBUG nova.virt.hardware [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 12:50:24 np0005464891 nova_compute[259907]: 2025-10-01 16:50:24.618 2 INFO nova.compute.claims [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 12:50:24 np0005464891 nova_compute[259907]: 2025-10-01 16:50:24.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:24 np0005464891 nova_compute[259907]: 2025-10-01 16:50:24.799 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:50:24 np0005464891 nova_compute[259907]: 2025-10-01 16:50:24.803 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:50:24 np0005464891 nova_compute[259907]: 2025-10-01 16:50:24.804 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  1 12:50:24 np0005464891 nova_compute[259907]: 2025-10-01 16:50:24.821 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  1 12:50:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1327: 321 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 312 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 11 KiB/s wr, 162 op/s
Oct  1 12:50:25 np0005464891 nova_compute[259907]: 2025-10-01 16:50:25.011 2 DEBUG oslo_concurrency.processutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:50:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:50:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2300708057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:50:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1044746974' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1044746974' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:25 np0005464891 nova_compute[259907]: 2025-10-01 16:50:25.464 2 DEBUG oslo_concurrency.processutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:50:25 np0005464891 nova_compute[259907]: 2025-10-01 16:50:25.473 2 DEBUG nova.compute.provider_tree [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:50:25 np0005464891 nova_compute[259907]: 2025-10-01 16:50:25.491 2 DEBUG nova.scheduler.client.report [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:50:25 np0005464891 nova_compute[259907]: 2025-10-01 16:50:25.512 2 DEBUG oslo_concurrency.lockutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:50:25 np0005464891 nova_compute[259907]: 2025-10-01 16:50:25.513 2 DEBUG nova.compute.manager [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 12:50:25 np0005464891 nova_compute[259907]: 2025-10-01 16:50:25.557 2 DEBUG nova.compute.manager [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 12:50:25 np0005464891 nova_compute[259907]: 2025-10-01 16:50:25.558 2 DEBUG nova.network.neutron [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 12:50:25 np0005464891 nova_compute[259907]: 2025-10-01 16:50:25.580 2 INFO nova.virt.libvirt.driver [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 12:50:25 np0005464891 nova_compute[259907]: 2025-10-01 16:50:25.625 2 DEBUG nova.compute.manager [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 12:50:25 np0005464891 nova_compute[259907]: 2025-10-01 16:50:25.709 2 DEBUG nova.compute.manager [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 12:50:25 np0005464891 nova_compute[259907]: 2025-10-01 16:50:25.710 2 DEBUG nova.virt.libvirt.driver [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 12:50:25 np0005464891 nova_compute[259907]: 2025-10-01 16:50:25.711 2 INFO nova.virt.libvirt.driver [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Creating image(s)#033[00m
Oct  1 12:50:25 np0005464891 nova_compute[259907]: 2025-10-01 16:50:25.734 2 DEBUG nova.storage.rbd_utils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] rbd image dc697861-16c7-4baa-8c59-84deb0c0b65c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:50:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:50:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Oct  1 12:50:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Oct  1 12:50:25 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Oct  1 12:50:25 np0005464891 nova_compute[259907]: 2025-10-01 16:50:25.790 2 DEBUG nova.storage.rbd_utils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] rbd image dc697861-16c7-4baa-8c59-84deb0c0b65c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:50:25 np0005464891 nova_compute[259907]: 2025-10-01 16:50:25.826 2 DEBUG nova.storage.rbd_utils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] rbd image dc697861-16c7-4baa-8c59-84deb0c0b65c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:50:25 np0005464891 nova_compute[259907]: 2025-10-01 16:50:25.829 2 DEBUG oslo_concurrency.processutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:50:25 np0005464891 nova_compute[259907]: 2025-10-01 16:50:25.855 2 DEBUG nova.policy [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '825e1f460cae49ad9834c4d7d67e24fe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '19100b7dd5c9420db1d7f374559a9498', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 12:50:25 np0005464891 nova_compute[259907]: 2025-10-01 16:50:25.903 2 DEBUG oslo_concurrency.processutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:50:25 np0005464891 nova_compute[259907]: 2025-10-01 16:50:25.904 2 DEBUG oslo_concurrency.lockutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquiring lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:50:25 np0005464891 nova_compute[259907]: 2025-10-01 16:50:25.905 2 DEBUG oslo_concurrency.lockutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:50:25 np0005464891 nova_compute[259907]: 2025-10-01 16:50:25.905 2 DEBUG oslo_concurrency.lockutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:50:25 np0005464891 nova_compute[259907]: 2025-10-01 16:50:25.930 2 DEBUG nova.storage.rbd_utils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] rbd image dc697861-16c7-4baa-8c59-84deb0c0b65c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:50:25 np0005464891 nova_compute[259907]: 2025-10-01 16:50:25.934 2 DEBUG oslo_concurrency.processutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa dc697861-16c7-4baa-8c59-84deb0c0b65c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:50:26 np0005464891 nova_compute[259907]: 2025-10-01 16:50:26.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:26 np0005464891 nova_compute[259907]: 2025-10-01 16:50:26.417 2 DEBUG oslo_concurrency.processutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa dc697861-16c7-4baa-8c59-84deb0c0b65c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:50:26 np0005464891 nova_compute[259907]: 2025-10-01 16:50:26.503 2 DEBUG nova.storage.rbd_utils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] resizing rbd image dc697861-16c7-4baa-8c59-84deb0c0b65c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  1 12:50:26 np0005464891 nova_compute[259907]: 2025-10-01 16:50:26.638 2 DEBUG nova.objects.instance [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lazy-loading 'migration_context' on Instance uuid dc697861-16c7-4baa-8c59-84deb0c0b65c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:50:26 np0005464891 nova_compute[259907]: 2025-10-01 16:50:26.780 2 DEBUG nova.virt.libvirt.driver [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  1 12:50:26 np0005464891 nova_compute[259907]: 2025-10-01 16:50:26.781 2 DEBUG nova.virt.libvirt.driver [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Ensure instance console log exists: /var/lib/nova/instances/dc697861-16c7-4baa-8c59-84deb0c0b65c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 12:50:26 np0005464891 nova_compute[259907]: 2025-10-01 16:50:26.782 2 DEBUG oslo_concurrency.lockutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:50:26 np0005464891 nova_compute[259907]: 2025-10-01 16:50:26.783 2 DEBUG oslo_concurrency.lockutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:50:26 np0005464891 nova_compute[259907]: 2025-10-01 16:50:26.783 2 DEBUG oslo_concurrency.lockutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:50:26 np0005464891 nova_compute[259907]: 2025-10-01 16:50:26.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:50:26 np0005464891 nova_compute[259907]: 2025-10-01 16:50:26.804 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 12:50:26 np0005464891 nova_compute[259907]: 2025-10-01 16:50:26.805 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 12:50:26 np0005464891 nova_compute[259907]: 2025-10-01 16:50:26.821 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  1 12:50:26 np0005464891 nova_compute[259907]: 2025-10-01 16:50:26.822 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 12:50:26 np0005464891 nova_compute[259907]: 2025-10-01 16:50:26.823 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:50:26 np0005464891 nova_compute[259907]: 2025-10-01 16:50:26.824 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:50:26 np0005464891 nova_compute[259907]: 2025-10-01 16:50:26.841 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:50:26 np0005464891 nova_compute[259907]: 2025-10-01 16:50:26.841 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:50:26 np0005464891 nova_compute[259907]: 2025-10-01 16:50:26.841 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:50:26 np0005464891 nova_compute[259907]: 2025-10-01 16:50:26.842 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 12:50:26 np0005464891 nova_compute[259907]: 2025-10-01 16:50:26.842 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:50:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1329: 321 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 312 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 8.0 KiB/s wr, 135 op/s
Oct  1 12:50:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3076372910' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3076372910' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:27 np0005464891 nova_compute[259907]: 2025-10-01 16:50:27.135 2 DEBUG nova.network.neutron [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Successfully created port: b4aee080-9989-4dcc-af16-952142e561a9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 12:50:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:50:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/12585293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:50:27 np0005464891 nova_compute[259907]: 2025-10-01 16:50:27.306 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:50:27 np0005464891 nova_compute[259907]: 2025-10-01 16:50:27.513 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:50:27 np0005464891 nova_compute[259907]: 2025-10-01 16:50:27.515 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4592MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 12:50:27 np0005464891 nova_compute[259907]: 2025-10-01 16:50:27.515 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:50:27 np0005464891 nova_compute[259907]: 2025-10-01 16:50:27.515 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:50:27 np0005464891 nova_compute[259907]: 2025-10-01 16:50:27.602 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance dc697861-16c7-4baa-8c59-84deb0c0b65c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 12:50:27 np0005464891 nova_compute[259907]: 2025-10-01 16:50:27.603 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 12:50:27 np0005464891 nova_compute[259907]: 2025-10-01 16:50:27.603 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 12:50:27 np0005464891 nova_compute[259907]: 2025-10-01 16:50:27.650 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:50:27 np0005464891 nova_compute[259907]: 2025-10-01 16:50:27.971 2 DEBUG nova.network.neutron [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Successfully updated port: b4aee080-9989-4dcc-af16-952142e561a9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 12:50:27 np0005464891 nova_compute[259907]: 2025-10-01 16:50:27.988 2 DEBUG oslo_concurrency.lockutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquiring lock "refresh_cache-dc697861-16c7-4baa-8c59-84deb0c0b65c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:50:27 np0005464891 nova_compute[259907]: 2025-10-01 16:50:27.989 2 DEBUG oslo_concurrency.lockutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquired lock "refresh_cache-dc697861-16c7-4baa-8c59-84deb0c0b65c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:50:27 np0005464891 nova_compute[259907]: 2025-10-01 16:50:27.989 2 DEBUG nova.network.neutron [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.111 2 DEBUG nova.compute.manager [req-1fbc4301-76fb-4c16-a6c8-9a8a3299dd99 req-36b7a142-7bc5-4b0d-8b5d-4dc642eaf417 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Received event network-changed-b4aee080-9989-4dcc-af16-952142e561a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.112 2 DEBUG nova.compute.manager [req-1fbc4301-76fb-4c16-a6c8-9a8a3299dd99 req-36b7a142-7bc5-4b0d-8b5d-4dc642eaf417 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Refreshing instance network info cache due to event network-changed-b4aee080-9989-4dcc-af16-952142e561a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.112 2 DEBUG oslo_concurrency.lockutils [req-1fbc4301-76fb-4c16-a6c8-9a8a3299dd99 req-36b7a142-7bc5-4b0d-8b5d-4dc642eaf417 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-dc697861-16c7-4baa-8c59-84deb0c0b65c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:50:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:50:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4288181992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.139 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.147 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.151 2 DEBUG nova.network.neutron [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.169 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.201 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.202 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.203 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.775 2 DEBUG nova.network.neutron [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Updating instance_info_cache with network_info: [{"id": "b4aee080-9989-4dcc-af16-952142e561a9", "address": "fa:16:3e:a7:26:f6", "network": {"id": "9217a609-3f35-4647-87cd-e08d95dd1da1", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-994008652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19100b7dd5c9420db1d7f374559a9498", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4aee080-99", "ovs_interfaceid": "b4aee080-9989-4dcc-af16-952142e561a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.797 2 DEBUG oslo_concurrency.lockutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Releasing lock "refresh_cache-dc697861-16c7-4baa-8c59-84deb0c0b65c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.798 2 DEBUG nova.compute.manager [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Instance network_info: |[{"id": "b4aee080-9989-4dcc-af16-952142e561a9", "address": "fa:16:3e:a7:26:f6", "network": {"id": "9217a609-3f35-4647-87cd-e08d95dd1da1", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-994008652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19100b7dd5c9420db1d7f374559a9498", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4aee080-99", "ovs_interfaceid": "b4aee080-9989-4dcc-af16-952142e561a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.799 2 DEBUG oslo_concurrency.lockutils [req-1fbc4301-76fb-4c16-a6c8-9a8a3299dd99 req-36b7a142-7bc5-4b0d-8b5d-4dc642eaf417 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-dc697861-16c7-4baa-8c59-84deb0c0b65c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.799 2 DEBUG nova.network.neutron [req-1fbc4301-76fb-4c16-a6c8-9a8a3299dd99 req-36b7a142-7bc5-4b0d-8b5d-4dc642eaf417 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Refreshing network info cache for port b4aee080-9989-4dcc-af16-952142e561a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.802 2 DEBUG nova.virt.libvirt.driver [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Start _get_guest_xml network_info=[{"id": "b4aee080-9989-4dcc-af16-952142e561a9", "address": "fa:16:3e:a7:26:f6", "network": {"id": "9217a609-3f35-4647-87cd-e08d95dd1da1", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-994008652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19100b7dd5c9420db1d7f374559a9498", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4aee080-99", "ovs_interfaceid": "b4aee080-9989-4dcc-af16-952142e561a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'image_id': 'f01c1e7c-fea3-4433-a44a-d71153552c78'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.808 2 WARNING nova.virt.libvirt.driver [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.813 2 DEBUG nova.virt.libvirt.host [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.814 2 DEBUG nova.virt.libvirt.host [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.820 2 DEBUG nova.virt.libvirt.host [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.821 2 DEBUG nova.virt.libvirt.host [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.821 2 DEBUG nova.virt.libvirt.driver [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.822 2 DEBUG nova.virt.hardware [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.822 2 DEBUG nova.virt.hardware [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.822 2 DEBUG nova.virt.hardware [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.822 2 DEBUG nova.virt.hardware [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.823 2 DEBUG nova.virt.hardware [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.823 2 DEBUG nova.virt.hardware [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.823 2 DEBUG nova.virt.hardware [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.823 2 DEBUG nova.virt.hardware [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.823 2 DEBUG nova.virt.hardware [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.824 2 DEBUG nova.virt.hardware [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.824 2 DEBUG nova.virt.hardware [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 12:50:28 np0005464891 nova_compute[259907]: 2025-10-01 16:50:28.826 2 DEBUG oslo_concurrency.processutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:50:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1330: 321 pgs: 321 active+clean; 102 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 153 KiB/s rd, 1008 KiB/s wr, 212 op/s
Oct  1 12:50:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Oct  1 12:50:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Oct  1 12:50:29 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Oct  1 12:50:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:50:29 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/115571916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.303 2 DEBUG oslo_concurrency.processutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.332 2 DEBUG nova.storage.rbd_utils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] rbd image dc697861-16c7-4baa-8c59-84deb0c0b65c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.338 2 DEBUG oslo_concurrency.processutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.758 2 DEBUG nova.network.neutron [req-1fbc4301-76fb-4c16-a6c8-9a8a3299dd99 req-36b7a142-7bc5-4b0d-8b5d-4dc642eaf417 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Updated VIF entry in instance network info cache for port b4aee080-9989-4dcc-af16-952142e561a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.759 2 DEBUG nova.network.neutron [req-1fbc4301-76fb-4c16-a6c8-9a8a3299dd99 req-36b7a142-7bc5-4b0d-8b5d-4dc642eaf417 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Updating instance_info_cache with network_info: [{"id": "b4aee080-9989-4dcc-af16-952142e561a9", "address": "fa:16:3e:a7:26:f6", "network": {"id": "9217a609-3f35-4647-87cd-e08d95dd1da1", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-994008652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19100b7dd5c9420db1d7f374559a9498", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4aee080-99", "ovs_interfaceid": "b4aee080-9989-4dcc-af16-952142e561a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.779 2 DEBUG oslo_concurrency.lockutils [req-1fbc4301-76fb-4c16-a6c8-9a8a3299dd99 req-36b7a142-7bc5-4b0d-8b5d-4dc642eaf417 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-dc697861-16c7-4baa-8c59-84deb0c0b65c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:50:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:50:29 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3957883739' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.820 2 DEBUG oslo_concurrency.processutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.822 2 DEBUG nova.virt.libvirt.vif [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:50:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-818113866',display_name='tempest-VolumesBackupsTest-instance-818113866',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-818113866',id=11,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDgeFrb1867YBXwjKa/TQ0YYXKREXQsqF/dn32JrvKEOrj/bBiwwtISkB6YnLQq8eW7daoes7oHlqUTk/TbKbHXimSuQtQY8Q+G8dxvoBF1xsi9Pxx4AVYXydkaRNIq/EA==',key_name='tempest-keypair-213857542',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='19100b7dd5c9420db1d7f374559a9498',ramdisk_id='',reservation_id='r-9l7ddtdt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1599024574',owner_user_name='tempest-VolumesBackupsTest-1599024574-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:50:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='825e1f460cae49ad9834c4d7d67e24fe',uuid=dc697861-16c7-4baa-8c59-84deb0c0b65c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4aee080-9989-4dcc-af16-952142e561a9", "address": "fa:16:3e:a7:26:f6", "network": {"id": "9217a609-3f35-4647-87cd-e08d95dd1da1", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-994008652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19100b7dd5c9420db1d7f374559a9498", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4aee080-99", "ovs_interfaceid": "b4aee080-9989-4dcc-af16-952142e561a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.823 2 DEBUG nova.network.os_vif_util [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Converting VIF {"id": "b4aee080-9989-4dcc-af16-952142e561a9", "address": "fa:16:3e:a7:26:f6", "network": {"id": "9217a609-3f35-4647-87cd-e08d95dd1da1", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-994008652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19100b7dd5c9420db1d7f374559a9498", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4aee080-99", "ovs_interfaceid": "b4aee080-9989-4dcc-af16-952142e561a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.824 2 DEBUG nova.network.os_vif_util [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:26:f6,bridge_name='br-int',has_traffic_filtering=True,id=b4aee080-9989-4dcc-af16-952142e561a9,network=Network(9217a609-3f35-4647-87cd-e08d95dd1da1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4aee080-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.826 2 DEBUG nova.objects.instance [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lazy-loading 'pci_devices' on Instance uuid dc697861-16c7-4baa-8c59-84deb0c0b65c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.849 2 DEBUG nova.virt.libvirt.driver [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] End _get_guest_xml xml=<domain type="kvm">
Oct  1 12:50:29 np0005464891 nova_compute[259907]:  <uuid>dc697861-16c7-4baa-8c59-84deb0c0b65c</uuid>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:  <name>instance-0000000b</name>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <nova:name>tempest-VolumesBackupsTest-instance-818113866</nova:name>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 16:50:28</nova:creationTime>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 12:50:29 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:        <nova:user uuid="825e1f460cae49ad9834c4d7d67e24fe">tempest-VolumesBackupsTest-1599024574-project-member</nova:user>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:        <nova:project uuid="19100b7dd5c9420db1d7f374559a9498">tempest-VolumesBackupsTest-1599024574</nova:project>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <nova:root type="image" uuid="f01c1e7c-fea3-4433-a44a-d71153552c78"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:        <nova:port uuid="b4aee080-9989-4dcc-af16-952142e561a9">
Oct  1 12:50:29 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <system>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <entry name="serial">dc697861-16c7-4baa-8c59-84deb0c0b65c</entry>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <entry name="uuid">dc697861-16c7-4baa-8c59-84deb0c0b65c</entry>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    </system>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:  <os>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:  </clock>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/dc697861-16c7-4baa-8c59-84deb0c0b65c_disk">
Oct  1 12:50:29 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:50:29 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/dc697861-16c7-4baa-8c59-84deb0c0b65c_disk.config">
Oct  1 12:50:29 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:50:29 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:a7:26:f6"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <target dev="tapb4aee080-99"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/dc697861-16c7-4baa-8c59-84deb0c0b65c/console.log" append="off"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    </serial>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <video>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 12:50:29 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 12:50:29 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:50:29 np0005464891 nova_compute[259907]: </domain>
Oct  1 12:50:29 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.850 2 DEBUG nova.compute.manager [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Preparing to wait for external event network-vif-plugged-b4aee080-9989-4dcc-af16-952142e561a9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.851 2 DEBUG oslo_concurrency.lockutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquiring lock "dc697861-16c7-4baa-8c59-84deb0c0b65c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.851 2 DEBUG oslo_concurrency.lockutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "dc697861-16c7-4baa-8c59-84deb0c0b65c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.852 2 DEBUG oslo_concurrency.lockutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "dc697861-16c7-4baa-8c59-84deb0c0b65c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.853 2 DEBUG nova.virt.libvirt.vif [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:50:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-818113866',display_name='tempest-VolumesBackupsTest-instance-818113866',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-818113866',id=11,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDgeFrb1867YBXwjKa/TQ0YYXKREXQsqF/dn32JrvKEOrj/bBiwwtISkB6YnLQq8eW7daoes7oHlqUTk/TbKbHXimSuQtQY8Q+G8dxvoBF1xsi9Pxx4AVYXydkaRNIq/EA==',key_name='tempest-keypair-213857542',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='19100b7dd5c9420db1d7f374559a9498',ramdisk_id='',reservation_id='r-9l7ddtdt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1599024574',owner_user_name='tempest-VolumesBackupsTest-1599024574-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:50:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='825e1f460cae49ad9834c4d7d67e24fe',uuid=dc697861-16c7-4baa-8c59-84deb0c0b65c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4aee080-9989-4dcc-af16-952142e561a9", "address": "fa:16:3e:a7:26:f6", "network": {"id": "9217a609-3f35-4647-87cd-e08d95dd1da1", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-994008652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19100b7dd5c9420db1d7f374559a9498", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4aee080-99", "ovs_interfaceid": "b4aee080-9989-4dcc-af16-952142e561a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.853 2 DEBUG nova.network.os_vif_util [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Converting VIF {"id": "b4aee080-9989-4dcc-af16-952142e561a9", "address": "fa:16:3e:a7:26:f6", "network": {"id": "9217a609-3f35-4647-87cd-e08d95dd1da1", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-994008652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19100b7dd5c9420db1d7f374559a9498", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4aee080-99", "ovs_interfaceid": "b4aee080-9989-4dcc-af16-952142e561a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.854 2 DEBUG nova.network.os_vif_util [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:26:f6,bridge_name='br-int',has_traffic_filtering=True,id=b4aee080-9989-4dcc-af16-952142e561a9,network=Network(9217a609-3f35-4647-87cd-e08d95dd1da1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4aee080-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.855 2 DEBUG os_vif [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:26:f6,bridge_name='br-int',has_traffic_filtering=True,id=b4aee080-9989-4dcc-af16-952142e561a9,network=Network(9217a609-3f35-4647-87cd-e08d95dd1da1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4aee080-99') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.856 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.857 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.861 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4aee080-99, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.862 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb4aee080-99, col_values=(('external_ids', {'iface-id': 'b4aee080-9989-4dcc-af16-952142e561a9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:26:f6', 'vm-uuid': 'dc697861-16c7-4baa-8c59-84deb0c0b65c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:29 np0005464891 NetworkManager[44940]: <info>  [1759337429.8659] manager: (tapb4aee080-99): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.875 2 INFO os_vif [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:26:f6,bridge_name='br-int',has_traffic_filtering=True,id=b4aee080-9989-4dcc-af16-952142e561a9,network=Network(9217a609-3f35-4647-87cd-e08d95dd1da1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4aee080-99')#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.924 2 DEBUG nova.virt.libvirt.driver [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.925 2 DEBUG nova.virt.libvirt.driver [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.925 2 DEBUG nova.virt.libvirt.driver [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] No VIF found with MAC fa:16:3e:a7:26:f6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.926 2 INFO nova.virt.libvirt.driver [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Using config drive#033[00m
Oct  1 12:50:29 np0005464891 nova_compute[259907]: 2025-10-01 16:50:29.954 2 DEBUG nova.storage.rbd_utils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] rbd image dc697861-16c7-4baa-8c59-84deb0c0b65c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:50:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Oct  1 12:50:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Oct  1 12:50:30 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Oct  1 12:50:30 np0005464891 nova_compute[259907]: 2025-10-01 16:50:30.198 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:50:30 np0005464891 nova_compute[259907]: 2025-10-01 16:50:30.198 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:50:30 np0005464891 nova_compute[259907]: 2025-10-01 16:50:30.199 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:50:30 np0005464891 nova_compute[259907]: 2025-10-01 16:50:30.250 2 INFO nova.virt.libvirt.driver [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Creating config drive at /var/lib/nova/instances/dc697861-16c7-4baa-8c59-84deb0c0b65c/disk.config#033[00m
Oct  1 12:50:30 np0005464891 nova_compute[259907]: 2025-10-01 16:50:30.262 2 DEBUG oslo_concurrency.processutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dc697861-16c7-4baa-8c59-84deb0c0b65c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpapcf_7xh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:50:30 np0005464891 nova_compute[259907]: 2025-10-01 16:50:30.397 2 DEBUG oslo_concurrency.processutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dc697861-16c7-4baa-8c59-84deb0c0b65c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpapcf_7xh" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:50:30 np0005464891 nova_compute[259907]: 2025-10-01 16:50:30.438 2 DEBUG nova.storage.rbd_utils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] rbd image dc697861-16c7-4baa-8c59-84deb0c0b65c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:50:30 np0005464891 nova_compute[259907]: 2025-10-01 16:50:30.444 2 DEBUG oslo_concurrency.processutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dc697861-16c7-4baa-8c59-84deb0c0b65c/disk.config dc697861-16c7-4baa-8c59-84deb0c0b65c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:50:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:50:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Oct  1 12:50:30 np0005464891 nova_compute[259907]: 2025-10-01 16:50:30.744 2 DEBUG oslo_concurrency.processutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dc697861-16c7-4baa-8c59-84deb0c0b65c/disk.config dc697861-16c7-4baa-8c59-84deb0c0b65c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:50:30 np0005464891 nova_compute[259907]: 2025-10-01 16:50:30.746 2 INFO nova.virt.libvirt.driver [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Deleting local config drive /var/lib/nova/instances/dc697861-16c7-4baa-8c59-84deb0c0b65c/disk.config because it was imported into RBD.#033[00m
Oct  1 12:50:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Oct  1 12:50:30 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Oct  1 12:50:30 np0005464891 kernel: tapb4aee080-99: entered promiscuous mode
Oct  1 12:50:30 np0005464891 NetworkManager[44940]: <info>  [1759337430.8322] manager: (tapb4aee080-99): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Oct  1 12:50:30 np0005464891 systemd-udevd[284642]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:50:30 np0005464891 ovn_controller[152409]: 2025-10-01T16:50:30Z|00103|binding|INFO|Claiming lport b4aee080-9989-4dcc-af16-952142e561a9 for this chassis.
Oct  1 12:50:30 np0005464891 ovn_controller[152409]: 2025-10-01T16:50:30Z|00104|binding|INFO|b4aee080-9989-4dcc-af16-952142e561a9: Claiming fa:16:3e:a7:26:f6 10.100.0.10
Oct  1 12:50:30 np0005464891 nova_compute[259907]: 2025-10-01 16:50:30.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:30 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:30.896 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:26:f6 10.100.0.10'], port_security=['fa:16:3e:a7:26:f6 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'dc697861-16c7-4baa-8c59-84deb0c0b65c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9217a609-3f35-4647-87cd-e08d95dd1da1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19100b7dd5c9420db1d7f374559a9498', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd382a6a-4351-4841-beca-09ddced00c45', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3460a047-44ee-4ad2-938a-c15de55876d0, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=b4aee080-9989-4dcc-af16-952142e561a9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:50:30 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:30.898 162546 INFO neutron.agent.ovn.metadata.agent [-] Port b4aee080-9989-4dcc-af16-952142e561a9 in datapath 9217a609-3f35-4647-87cd-e08d95dd1da1 bound to our chassis#033[00m
Oct  1 12:50:30 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:30.901 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9217a609-3f35-4647-87cd-e08d95dd1da1#033[00m
Oct  1 12:50:30 np0005464891 NetworkManager[44940]: <info>  [1759337430.9085] device (tapb4aee080-99): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:50:30 np0005464891 NetworkManager[44940]: <info>  [1759337430.9098] device (tapb4aee080-99): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 12:50:30 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:30.919 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[13c6ac5a-0e4c-455c-9796-abee4910e688]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:50:30 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:30.921 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9217a609-31 in ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 12:50:30 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:30.924 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9217a609-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 12:50:30 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:30.924 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[8af8046c-970c-4a6e-8040-64426fbeb769]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:50:30 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:30.925 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[9e56909b-bbfd-4c7f-9a9f-a5e48bef1993]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:50:30 np0005464891 systemd-machined[214891]: New machine qemu-11-instance-0000000b.
Oct  1 12:50:30 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:30.944 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb10940-6891-42bb-b866-a95e411ecd96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:50:30 np0005464891 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Oct  1 12:50:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1334: 321 pgs: 321 active+clean; 134 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 4.1 MiB/s wr, 195 op/s
Oct  1 12:50:30 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:30.976 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[e666dfb5-a39a-4113-9408-3a2c64f502af]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:50:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:30 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1446743256' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:30 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1446743256' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:30 np0005464891 nova_compute[259907]: 2025-10-01 16:50:30.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:31 np0005464891 ovn_controller[152409]: 2025-10-01T16:50:30Z|00105|binding|INFO|Setting lport b4aee080-9989-4dcc-af16-952142e561a9 ovn-installed in OVS
Oct  1 12:50:31 np0005464891 ovn_controller[152409]: 2025-10-01T16:50:31Z|00106|binding|INFO|Setting lport b4aee080-9989-4dcc-af16-952142e561a9 up in Southbound
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:31.023 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[883e4360-8510-492f-9b27-dc00933ab55e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:50:31 np0005464891 NetworkManager[44940]: <info>  [1759337431.0326] manager: (tap9217a609-30): new Veth device (/org/freedesktop/NetworkManager/Devices/65)
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:31.032 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[853fa635-1377-45cd-90fd-9745e595e92b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:31.087 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[86e33c25-e566-44cd-bc55-ca67d6633830]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:31.090 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[8f71092e-c3cd-459e-b908-5790d02761be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:50:31 np0005464891 NetworkManager[44940]: <info>  [1759337431.1128] device (tap9217a609-30): carrier: link connected
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:31.118 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[f4c8a0f3-1faa-43f6-92d7-410b9179d46e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:31.133 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[aec5acbf-4336-413a-941c-b0c12e65aa54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9217a609-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:b8:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437968, 'reachable_time': 38934, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284690, 'error': None, 'target': 'ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:31.154 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[218cec60-bb21-4aba-a6a2-4e37568ddcde]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe54:b815'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437968, 'tstamp': 437968}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284696, 'error': None, 'target': 'ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:31.168 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c7c5b4-a1d0-4ffd-b841-43ee0dd4d8b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9217a609-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:b8:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437968, 'reachable_time': 38934, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284703, 'error': None, 'target': 'ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:50:31 np0005464891 podman[284664]: 2025-10-01 16:50:31.180434111 +0000 UTC m=+0.094237969 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:31.195 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[a6e9d333-5bfb-4b35-958a-8169c052e3ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:31.265 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[45613371-73b2-433a-ab4c-aec13d895a1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:31.266 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9217a609-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:31.267 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:31.267 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9217a609-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:31 np0005464891 kernel: tap9217a609-30: entered promiscuous mode
Oct  1 12:50:31 np0005464891 NetworkManager[44940]: <info>  [1759337431.2712] manager: (tap9217a609-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:31.271 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9217a609-30, col_values=(('external_ids', {'iface-id': '5558844a-e29a-46f0-b86d-8940a2f4c4de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:31 np0005464891 ovn_controller[152409]: 2025-10-01T16:50:31Z|00107|binding|INFO|Releasing lport 5558844a-e29a-46f0-b86d-8940a2f4c4de from this chassis (sb_readonly=0)
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:31.292 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9217a609-3f35-4647-87cd-e08d95dd1da1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9217a609-3f35-4647-87cd-e08d95dd1da1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:31.294 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[02bde353-a0b9-4d6e-b281-41278d4dcdd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:31.295 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-9217a609-3f35-4647-87cd-e08d95dd1da1
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/9217a609-3f35-4647-87cd-e08d95dd1da1.pid.haproxy
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID 9217a609-3f35-4647-87cd-e08d95dd1da1
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 12:50:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:50:31.296 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1', 'env', 'PROCESS_TAG=haproxy-9217a609-3f35-4647-87cd-e08d95dd1da1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9217a609-3f35-4647-87cd-e08d95dd1da1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.398 2 DEBUG nova.compute.manager [req-a8f8d8ee-1fb1-46c1-9628-65dd1a409424 req-339c5f54-caad-420d-93e4-e37d64181162 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Received event network-vif-plugged-b4aee080-9989-4dcc-af16-952142e561a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.399 2 DEBUG oslo_concurrency.lockutils [req-a8f8d8ee-1fb1-46c1-9628-65dd1a409424 req-339c5f54-caad-420d-93e4-e37d64181162 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "dc697861-16c7-4baa-8c59-84deb0c0b65c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.400 2 DEBUG oslo_concurrency.lockutils [req-a8f8d8ee-1fb1-46c1-9628-65dd1a409424 req-339c5f54-caad-420d-93e4-e37d64181162 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "dc697861-16c7-4baa-8c59-84deb0c0b65c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.401 2 DEBUG oslo_concurrency.lockutils [req-a8f8d8ee-1fb1-46c1-9628-65dd1a409424 req-339c5f54-caad-420d-93e4-e37d64181162 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "dc697861-16c7-4baa-8c59-84deb0c0b65c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.402 2 DEBUG nova.compute.manager [req-a8f8d8ee-1fb1-46c1-9628-65dd1a409424 req-339c5f54-caad-420d-93e4-e37d64181162 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Processing event network-vif-plugged-b4aee080-9989-4dcc-af16-952142e561a9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 12:50:31 np0005464891 podman[284781]: 2025-10-01 16:50:31.684028492 +0000 UTC m=+0.060340405 container create f2abd27465463b3e48f66f90b73045ba24ce79f4f2992b5beb07eae994bfec3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:50:31 np0005464891 systemd[1]: Started libpod-conmon-f2abd27465463b3e48f66f90b73045ba24ce79f4f2992b5beb07eae994bfec3d.scope.
Oct  1 12:50:31 np0005464891 podman[284781]: 2025-10-01 16:50:31.652369638 +0000 UTC m=+0.028681561 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 12:50:31 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:50:31 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de938dbb380afc0e8f32ff4fc3049b4f0c01366dad6acc8666ea10cfc7702995/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 12:50:31 np0005464891 podman[284781]: 2025-10-01 16:50:31.789272222 +0000 UTC m=+0.165584195 container init f2abd27465463b3e48f66f90b73045ba24ce79f4f2992b5beb07eae994bfec3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:50:31 np0005464891 podman[284781]: 2025-10-01 16:50:31.796326037 +0000 UTC m=+0.172637960 container start f2abd27465463b3e48f66f90b73045ba24ce79f4f2992b5beb07eae994bfec3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  1 12:50:31 np0005464891 neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1[284797]: [NOTICE]   (284801) : New worker (284803) forked
Oct  1 12:50:31 np0005464891 neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1[284797]: [NOTICE]   (284801) : Loading success.
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.868 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337431.8671577, dc697861-16c7-4baa-8c59-84deb0c0b65c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.869 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] VM Started (Lifecycle Event)#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.872 2 DEBUG nova.compute.manager [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.879 2 DEBUG nova.virt.libvirt.driver [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.883 2 INFO nova.virt.libvirt.driver [-] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Instance spawned successfully.#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.884 2 DEBUG nova.virt.libvirt.driver [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.900 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.908 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.914 2 DEBUG nova.virt.libvirt.driver [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.915 2 DEBUG nova.virt.libvirt.driver [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.916 2 DEBUG nova.virt.libvirt.driver [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.917 2 DEBUG nova.virt.libvirt.driver [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.918 2 DEBUG nova.virt.libvirt.driver [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.919 2 DEBUG nova.virt.libvirt.driver [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.933 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.934 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337431.8675275, dc697861-16c7-4baa-8c59-84deb0c0b65c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.934 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] VM Paused (Lifecycle Event)#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.989 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.993 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337431.8754086, dc697861-16c7-4baa-8c59-84deb0c0b65c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:50:31 np0005464891 nova_compute[259907]: 2025-10-01 16:50:31.994 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] VM Resumed (Lifecycle Event)#033[00m
Oct  1 12:50:32 np0005464891 nova_compute[259907]: 2025-10-01 16:50:32.045 2 INFO nova.compute.manager [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Took 6.34 seconds to spawn the instance on the hypervisor.#033[00m
Oct  1 12:50:32 np0005464891 nova_compute[259907]: 2025-10-01 16:50:32.046 2 DEBUG nova.compute.manager [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:50:32 np0005464891 nova_compute[259907]: 2025-10-01 16:50:32.056 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:50:32 np0005464891 nova_compute[259907]: 2025-10-01 16:50:32.060 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:50:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Oct  1 12:50:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Oct  1 12:50:32 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Oct  1 12:50:32 np0005464891 nova_compute[259907]: 2025-10-01 16:50:32.103 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:50:32 np0005464891 nova_compute[259907]: 2025-10-01 16:50:32.137 2 INFO nova.compute.manager [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Took 7.57 seconds to build instance.#033[00m
Oct  1 12:50:32 np0005464891 nova_compute[259907]: 2025-10-01 16:50:32.157 2 DEBUG oslo_concurrency.lockutils [None req-c1823307-b32c-482e-8205-42aefa685acb 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "dc697861-16c7-4baa-8c59-84deb0c0b65c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:50:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1336: 321 pgs: 321 active+clean; 134 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 815 KiB/s rd, 3.6 MiB/s wr, 136 op/s
Oct  1 12:50:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2168054456' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2168054456' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:33 np0005464891 nova_compute[259907]: 2025-10-01 16:50:33.477 2 DEBUG nova.compute.manager [req-49c15b46-6098-443a-b707-f254e3cd7b8a req-0542a3a2-6fd0-45c6-89ee-a718400477fc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Received event network-vif-plugged-b4aee080-9989-4dcc-af16-952142e561a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:50:33 np0005464891 nova_compute[259907]: 2025-10-01 16:50:33.478 2 DEBUG oslo_concurrency.lockutils [req-49c15b46-6098-443a-b707-f254e3cd7b8a req-0542a3a2-6fd0-45c6-89ee-a718400477fc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "dc697861-16c7-4baa-8c59-84deb0c0b65c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:50:33 np0005464891 nova_compute[259907]: 2025-10-01 16:50:33.479 2 DEBUG oslo_concurrency.lockutils [req-49c15b46-6098-443a-b707-f254e3cd7b8a req-0542a3a2-6fd0-45c6-89ee-a718400477fc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "dc697861-16c7-4baa-8c59-84deb0c0b65c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:50:33 np0005464891 nova_compute[259907]: 2025-10-01 16:50:33.479 2 DEBUG oslo_concurrency.lockutils [req-49c15b46-6098-443a-b707-f254e3cd7b8a req-0542a3a2-6fd0-45c6-89ee-a718400477fc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "dc697861-16c7-4baa-8c59-84deb0c0b65c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:50:33 np0005464891 nova_compute[259907]: 2025-10-01 16:50:33.480 2 DEBUG nova.compute.manager [req-49c15b46-6098-443a-b707-f254e3cd7b8a req-0542a3a2-6fd0-45c6-89ee-a718400477fc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] No waiting events found dispatching network-vif-plugged-b4aee080-9989-4dcc-af16-952142e561a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:50:33 np0005464891 nova_compute[259907]: 2025-10-01 16:50:33.480 2 WARNING nova.compute.manager [req-49c15b46-6098-443a-b707-f254e3cd7b8a req-0542a3a2-6fd0-45c6-89ee-a718400477fc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Received unexpected event network-vif-plugged-b4aee080-9989-4dcc-af16-952142e561a9 for instance with vm_state active and task_state None.#033[00m
Oct  1 12:50:33 np0005464891 nova_compute[259907]: 2025-10-01 16:50:33.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:50:34 np0005464891 nova_compute[259907]: 2025-10-01 16:50:34.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:34 np0005464891 NetworkManager[44940]: <info>  [1759337434.2080] manager: (patch-br-int-to-provnet-ee30e212-e482-494e-8716-bb3c2ef00bd5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Oct  1 12:50:34 np0005464891 NetworkManager[44940]: <info>  [1759337434.2086] manager: (patch-provnet-ee30e212-e482-494e-8716-bb3c2ef00bd5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Oct  1 12:50:34 np0005464891 ovn_controller[152409]: 2025-10-01T16:50:34Z|00108|binding|INFO|Releasing lport 5558844a-e29a-46f0-b86d-8940a2f4c4de from this chassis (sb_readonly=0)
Oct  1 12:50:34 np0005464891 nova_compute[259907]: 2025-10-01 16:50:34.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:34 np0005464891 ovn_controller[152409]: 2025-10-01T16:50:34Z|00109|binding|INFO|Releasing lport 5558844a-e29a-46f0-b86d-8940a2f4c4de from this chassis (sb_readonly=0)
Oct  1 12:50:34 np0005464891 nova_compute[259907]: 2025-10-01 16:50:34.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:34 np0005464891 nova_compute[259907]: 2025-10-01 16:50:34.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1337: 321 pgs: 321 active+clean; 134 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.4 MiB/s wr, 197 op/s
Oct  1 12:50:34 np0005464891 podman[284813]: 2025-10-01 16:50:34.97884075 +0000 UTC m=+0.078633528 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:50:35 np0005464891 nova_compute[259907]: 2025-10-01 16:50:35.691 2 DEBUG nova.compute.manager [req-4e8ecbd9-add7-4208-84e1-a5223e4f1af6 req-8140e0f2-117f-4b57-a43e-7adfbd1e3f6a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Received event network-changed-b4aee080-9989-4dcc-af16-952142e561a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:50:35 np0005464891 nova_compute[259907]: 2025-10-01 16:50:35.691 2 DEBUG nova.compute.manager [req-4e8ecbd9-add7-4208-84e1-a5223e4f1af6 req-8140e0f2-117f-4b57-a43e-7adfbd1e3f6a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Refreshing instance network info cache due to event network-changed-b4aee080-9989-4dcc-af16-952142e561a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:50:35 np0005464891 nova_compute[259907]: 2025-10-01 16:50:35.692 2 DEBUG oslo_concurrency.lockutils [req-4e8ecbd9-add7-4208-84e1-a5223e4f1af6 req-8140e0f2-117f-4b57-a43e-7adfbd1e3f6a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-dc697861-16c7-4baa-8c59-84deb0c0b65c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:50:35 np0005464891 nova_compute[259907]: 2025-10-01 16:50:35.692 2 DEBUG oslo_concurrency.lockutils [req-4e8ecbd9-add7-4208-84e1-a5223e4f1af6 req-8140e0f2-117f-4b57-a43e-7adfbd1e3f6a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-dc697861-16c7-4baa-8c59-84deb0c0b65c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:50:35 np0005464891 nova_compute[259907]: 2025-10-01 16:50:35.692 2 DEBUG nova.network.neutron [req-4e8ecbd9-add7-4208-84e1-a5223e4f1af6 req-8140e0f2-117f-4b57-a43e-7adfbd1e3f6a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Refreshing network info cache for port b4aee080-9989-4dcc-af16-952142e561a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:50:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:50:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Oct  1 12:50:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Oct  1 12:50:35 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Oct  1 12:50:36 np0005464891 nova_compute[259907]: 2025-10-01 16:50:36.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3347502998' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3347502998' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:36 np0005464891 nova_compute[259907]: 2025-10-01 16:50:36.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:50:36 np0005464891 nova_compute[259907]: 2025-10-01 16:50:36.804 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  1 12:50:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1339: 321 pgs: 321 active+clean; 134 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 31 KiB/s wr, 198 op/s
Oct  1 12:50:37 np0005464891 nova_compute[259907]: 2025-10-01 16:50:37.082 2 DEBUG nova.network.neutron [req-4e8ecbd9-add7-4208-84e1-a5223e4f1af6 req-8140e0f2-117f-4b57-a43e-7adfbd1e3f6a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Updated VIF entry in instance network info cache for port b4aee080-9989-4dcc-af16-952142e561a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:50:37 np0005464891 nova_compute[259907]: 2025-10-01 16:50:37.082 2 DEBUG nova.network.neutron [req-4e8ecbd9-add7-4208-84e1-a5223e4f1af6 req-8140e0f2-117f-4b57-a43e-7adfbd1e3f6a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Updating instance_info_cache with network_info: [{"id": "b4aee080-9989-4dcc-af16-952142e561a9", "address": "fa:16:3e:a7:26:f6", "network": {"id": "9217a609-3f35-4647-87cd-e08d95dd1da1", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-994008652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19100b7dd5c9420db1d7f374559a9498", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4aee080-99", "ovs_interfaceid": "b4aee080-9989-4dcc-af16-952142e561a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:50:37 np0005464891 nova_compute[259907]: 2025-10-01 16:50:37.108 2 DEBUG oslo_concurrency.lockutils [req-4e8ecbd9-add7-4208-84e1-a5223e4f1af6 req-8140e0f2-117f-4b57-a43e-7adfbd1e3f6a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-dc697861-16c7-4baa-8c59-84deb0c0b65c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:50:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Oct  1 12:50:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Oct  1 12:50:37 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Oct  1 12:50:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1341: 321 pgs: 321 active+clean; 134 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 28 KiB/s wr, 204 op/s
Oct  1 12:50:39 np0005464891 nova_compute[259907]: 2025-10-01 16:50:39.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:39 np0005464891 nova_compute[259907]: 2025-10-01 16:50:39.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:40 np0005464891 podman[284834]: 2025-10-01 16:50:40.022118363 +0000 UTC m=+0.118601190 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  1 12:50:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Oct  1 12:50:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Oct  1 12:50:40 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Oct  1 12:50:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:50:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Oct  1 12:50:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Oct  1 12:50:40 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Oct  1 12:50:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1344: 321 pgs: 321 active+clean; 134 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 937 KiB/s rd, 3.3 KiB/s wr, 86 op/s
Oct  1 12:50:41 np0005464891 nova_compute[259907]: 2025-10-01 16:50:41.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:50:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:50:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:50:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:50:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:50:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:50:42 np0005464891 nova_compute[259907]: 2025-10-01 16:50:42.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1345: 321 pgs: 321 active+clean; 134 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 824 KiB/s rd, 3.3 KiB/s wr, 85 op/s
Oct  1 12:50:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Oct  1 12:50:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Oct  1 12:50:43 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Oct  1 12:50:44 np0005464891 ovn_controller[152409]: 2025-10-01T16:50:44Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a7:26:f6 10.100.0.10
Oct  1 12:50:44 np0005464891 ovn_controller[152409]: 2025-10-01T16:50:44Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a7:26:f6 10.100.0.10
Oct  1 12:50:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:50:44 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:50:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:50:44 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:50:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:50:44 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:50:44 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 1da39969-5d84-4f21-b713-a80fcd4e8b94 does not exist
Oct  1 12:50:44 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev df00dfbe-4842-488f-97e1-c3d0a5673b60 does not exist
Oct  1 12:50:44 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 650d5c36-d83d-484d-a067-163feda9e368 does not exist
Oct  1 12:50:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:50:44 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:50:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:50:44 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:50:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:50:44 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:50:44 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:50:44 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:50:44 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:50:44 np0005464891 nova_compute[259907]: 2025-10-01 16:50:44.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:44 np0005464891 podman[285127]: 2025-10-01 16:50:44.901970451 +0000 UTC m=+0.072005946 container create 1f6a3fbfa32cf0e43c026bbd4b9d4d3f61b6b3036130b81a3fd5015c36e8353c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Oct  1 12:50:44 np0005464891 podman[285127]: 2025-10-01 16:50:44.872365765 +0000 UTC m=+0.042401340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:50:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1347: 321 pgs: 321 active+clean; 149 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 281 KiB/s rd, 2.1 MiB/s wr, 175 op/s
Oct  1 12:50:45 np0005464891 systemd[1]: Started libpod-conmon-1f6a3fbfa32cf0e43c026bbd4b9d4d3f61b6b3036130b81a3fd5015c36e8353c.scope.
Oct  1 12:50:45 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:50:45 np0005464891 podman[285127]: 2025-10-01 16:50:45.093247353 +0000 UTC m=+0.263282878 container init 1f6a3fbfa32cf0e43c026bbd4b9d4d3f61b6b3036130b81a3fd5015c36e8353c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mclaren, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:50:45 np0005464891 podman[285127]: 2025-10-01 16:50:45.105397948 +0000 UTC m=+0.275433483 container start 1f6a3fbfa32cf0e43c026bbd4b9d4d3f61b6b3036130b81a3fd5015c36e8353c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mclaren, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:50:45 np0005464891 podman[285127]: 2025-10-01 16:50:45.110695175 +0000 UTC m=+0.280730750 container attach 1f6a3fbfa32cf0e43c026bbd4b9d4d3f61b6b3036130b81a3fd5015c36e8353c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mclaren, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 12:50:45 np0005464891 stoic_mclaren[285143]: 167 167
Oct  1 12:50:45 np0005464891 systemd[1]: libpod-1f6a3fbfa32cf0e43c026bbd4b9d4d3f61b6b3036130b81a3fd5015c36e8353c.scope: Deactivated successfully.
Oct  1 12:50:45 np0005464891 podman[285127]: 2025-10-01 16:50:45.115314321 +0000 UTC m=+0.285349836 container died 1f6a3fbfa32cf0e43c026bbd4b9d4d3f61b6b3036130b81a3fd5015c36e8353c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mclaren, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:50:45 np0005464891 systemd[1]: var-lib-containers-storage-overlay-709fbd5136cd3fe0531f1f3104cd97e477a1d7aa823e0ac9001fdeda8a71563c-merged.mount: Deactivated successfully.
Oct  1 12:50:45 np0005464891 podman[285127]: 2025-10-01 16:50:45.192783147 +0000 UTC m=+0.362818632 container remove 1f6a3fbfa32cf0e43c026bbd4b9d4d3f61b6b3036130b81a3fd5015c36e8353c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mclaren, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 12:50:45 np0005464891 systemd[1]: libpod-conmon-1f6a3fbfa32cf0e43c026bbd4b9d4d3f61b6b3036130b81a3fd5015c36e8353c.scope: Deactivated successfully.
Oct  1 12:50:45 np0005464891 podman[285168]: 2025-10-01 16:50:45.410961901 +0000 UTC m=+0.083008250 container create 29a06e3639eaf78c00816208b61831981c051e139ac79caa60110442b82f7fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_engelbart, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:50:45 np0005464891 podman[285168]: 2025-10-01 16:50:45.377231531 +0000 UTC m=+0.049277920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:50:45 np0005464891 systemd[1]: Started libpod-conmon-29a06e3639eaf78c00816208b61831981c051e139ac79caa60110442b82f7fed.scope.
Oct  1 12:50:45 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:50:45 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35a46c6f98f65fafd7ec6d9f04cb18258da71da0c5005e33585cff4f8097854e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:50:45 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35a46c6f98f65fafd7ec6d9f04cb18258da71da0c5005e33585cff4f8097854e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:50:45 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35a46c6f98f65fafd7ec6d9f04cb18258da71da0c5005e33585cff4f8097854e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:50:45 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35a46c6f98f65fafd7ec6d9f04cb18258da71da0c5005e33585cff4f8097854e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:50:45 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35a46c6f98f65fafd7ec6d9f04cb18258da71da0c5005e33585cff4f8097854e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:50:45 np0005464891 podman[285168]: 2025-10-01 16:50:45.543056752 +0000 UTC m=+0.215103141 container init 29a06e3639eaf78c00816208b61831981c051e139ac79caa60110442b82f7fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_engelbart, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 12:50:45 np0005464891 podman[285168]: 2025-10-01 16:50:45.558045175 +0000 UTC m=+0.230091494 container start 29a06e3639eaf78c00816208b61831981c051e139ac79caa60110442b82f7fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:50:45 np0005464891 podman[285168]: 2025-10-01 16:50:45.563814873 +0000 UTC m=+0.235861263 container attach 29a06e3639eaf78c00816208b61831981c051e139ac79caa60110442b82f7fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:50:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:50:46 np0005464891 nova_compute[259907]: 2025-10-01 16:50:46.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:46 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3972126868' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:46 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3972126868' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:46 np0005464891 hungry_engelbart[285185]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:50:46 np0005464891 hungry_engelbart[285185]: --> relative data size: 1.0
Oct  1 12:50:46 np0005464891 hungry_engelbart[285185]: --> All data devices are unavailable
Oct  1 12:50:46 np0005464891 systemd[1]: libpod-29a06e3639eaf78c00816208b61831981c051e139ac79caa60110442b82f7fed.scope: Deactivated successfully.
Oct  1 12:50:46 np0005464891 podman[285168]: 2025-10-01 16:50:46.767267426 +0000 UTC m=+1.439313745 container died 29a06e3639eaf78c00816208b61831981c051e139ac79caa60110442b82f7fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_engelbart, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:50:46 np0005464891 systemd[1]: libpod-29a06e3639eaf78c00816208b61831981c051e139ac79caa60110442b82f7fed.scope: Consumed 1.147s CPU time.
Oct  1 12:50:46 np0005464891 systemd[1]: var-lib-containers-storage-overlay-35a46c6f98f65fafd7ec6d9f04cb18258da71da0c5005e33585cff4f8097854e-merged.mount: Deactivated successfully.
Oct  1 12:50:46 np0005464891 podman[285168]: 2025-10-01 16:50:46.822358785 +0000 UTC m=+1.494405094 container remove 29a06e3639eaf78c00816208b61831981c051e139ac79caa60110442b82f7fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_engelbart, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 12:50:46 np0005464891 systemd[1]: libpod-conmon-29a06e3639eaf78c00816208b61831981c051e139ac79caa60110442b82f7fed.scope: Deactivated successfully.
Oct  1 12:50:46 np0005464891 nova_compute[259907]: 2025-10-01 16:50:46.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1348: 321 pgs: 321 active+clean; 154 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 554 KiB/s rd, 2.3 MiB/s wr, 189 op/s
Oct  1 12:50:47 np0005464891 podman[285367]: 2025-10-01 16:50:47.719177834 +0000 UTC m=+0.093968841 container create 366371903b493c1dcaf0403c454d3aea757e1e7153228092aa115fc69b02c94b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brown, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Oct  1 12:50:47 np0005464891 podman[285367]: 2025-10-01 16:50:47.653317819 +0000 UTC m=+0.028108876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:50:47 np0005464891 systemd[1]: Started libpod-conmon-366371903b493c1dcaf0403c454d3aea757e1e7153228092aa115fc69b02c94b.scope.
Oct  1 12:50:47 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:50:47 np0005464891 podman[285367]: 2025-10-01 16:50:47.995657455 +0000 UTC m=+0.370448472 container init 366371903b493c1dcaf0403c454d3aea757e1e7153228092aa115fc69b02c94b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brown, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 12:50:48 np0005464891 podman[285367]: 2025-10-01 16:50:48.009112506 +0000 UTC m=+0.383903513 container start 366371903b493c1dcaf0403c454d3aea757e1e7153228092aa115fc69b02c94b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:50:48 np0005464891 podman[285367]: 2025-10-01 16:50:48.013370074 +0000 UTC m=+0.388161071 container attach 366371903b493c1dcaf0403c454d3aea757e1e7153228092aa115fc69b02c94b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:50:48 np0005464891 unruffled_brown[285383]: 167 167
Oct  1 12:50:48 np0005464891 systemd[1]: libpod-366371903b493c1dcaf0403c454d3aea757e1e7153228092aa115fc69b02c94b.scope: Deactivated successfully.
Oct  1 12:50:48 np0005464891 conmon[285383]: conmon 366371903b493c1dcaf0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-366371903b493c1dcaf0403c454d3aea757e1e7153228092aa115fc69b02c94b.scope/container/memory.events
Oct  1 12:50:48 np0005464891 podman[285367]: 2025-10-01 16:50:48.018560597 +0000 UTC m=+0.393351604 container died 366371903b493c1dcaf0403c454d3aea757e1e7153228092aa115fc69b02c94b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brown, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 12:50:48 np0005464891 systemd[1]: var-lib-containers-storage-overlay-e9c751f520bba0c0575192f7a562ec6d04a531fe3f414542cf3e4d9ba6c98957-merged.mount: Deactivated successfully.
Oct  1 12:50:48 np0005464891 podman[285367]: 2025-10-01 16:50:48.065589322 +0000 UTC m=+0.440380299 container remove 366371903b493c1dcaf0403c454d3aea757e1e7153228092aa115fc69b02c94b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct  1 12:50:48 np0005464891 systemd[1]: libpod-conmon-366371903b493c1dcaf0403c454d3aea757e1e7153228092aa115fc69b02c94b.scope: Deactivated successfully.
Oct  1 12:50:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Oct  1 12:50:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Oct  1 12:50:48 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Oct  1 12:50:48 np0005464891 podman[285408]: 2025-10-01 16:50:48.267349164 +0000 UTC m=+0.053998289 container create b8fcdb2a7a8241c3a63c4f6500d4e47eeb6bc4e0ce5bcb37ed04b087f8d9b737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 12:50:48 np0005464891 systemd[1]: Started libpod-conmon-b8fcdb2a7a8241c3a63c4f6500d4e47eeb6bc4e0ce5bcb37ed04b087f8d9b737.scope.
Oct  1 12:50:48 np0005464891 podman[285408]: 2025-10-01 16:50:48.240670949 +0000 UTC m=+0.027320104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:50:48 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:50:48 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ecd92b32cc6eba8b3acb8114a5ce12ab39b01c5d0aa114f52a8c0f748c886d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:50:48 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ecd92b32cc6eba8b3acb8114a5ce12ab39b01c5d0aa114f52a8c0f748c886d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:50:48 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ecd92b32cc6eba8b3acb8114a5ce12ab39b01c5d0aa114f52a8c0f748c886d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:50:48 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ecd92b32cc6eba8b3acb8114a5ce12ab39b01c5d0aa114f52a8c0f748c886d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:50:48 np0005464891 podman[285408]: 2025-10-01 16:50:48.382085386 +0000 UTC m=+0.168734541 container init b8fcdb2a7a8241c3a63c4f6500d4e47eeb6bc4e0ce5bcb37ed04b087f8d9b737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  1 12:50:48 np0005464891 podman[285408]: 2025-10-01 16:50:48.391225609 +0000 UTC m=+0.177874724 container start b8fcdb2a7a8241c3a63c4f6500d4e47eeb6bc4e0ce5bcb37ed04b087f8d9b737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 12:50:48 np0005464891 podman[285408]: 2025-10-01 16:50:48.396249697 +0000 UTC m=+0.182898812 container attach b8fcdb2a7a8241c3a63c4f6500d4e47eeb6bc4e0ce5bcb37ed04b087f8d9b737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_solomon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:50:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1126572163' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1126572163' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1350: 321 pgs: 321 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 581 KiB/s rd, 3.2 MiB/s wr, 199 op/s
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]: {
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:    "0": [
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:        {
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "devices": [
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "/dev/loop3"
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            ],
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "lv_name": "ceph_lv0",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "lv_size": "21470642176",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "name": "ceph_lv0",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "tags": {
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.cluster_name": "ceph",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.crush_device_class": "",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.encrypted": "0",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.osd_id": "0",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.type": "block",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.vdo": "0"
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            },
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "type": "block",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "vg_name": "ceph_vg0"
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:        }
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:    ],
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:    "1": [
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:        {
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "devices": [
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "/dev/loop4"
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            ],
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "lv_name": "ceph_lv1",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "lv_size": "21470642176",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "name": "ceph_lv1",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "tags": {
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.cluster_name": "ceph",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.crush_device_class": "",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.encrypted": "0",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.osd_id": "1",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.type": "block",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.vdo": "0"
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            },
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "type": "block",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "vg_name": "ceph_vg1"
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:        }
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:    ],
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:    "2": [
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:        {
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "devices": [
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "/dev/loop5"
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            ],
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "lv_name": "ceph_lv2",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "lv_size": "21470642176",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "name": "ceph_lv2",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "tags": {
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.cluster_name": "ceph",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.crush_device_class": "",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.encrypted": "0",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.osd_id": "2",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.type": "block",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:                "ceph.vdo": "0"
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            },
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "type": "block",
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:            "vg_name": "ceph_vg2"
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:        }
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]:    ]
Oct  1 12:50:49 np0005464891 naughty_solomon[285424]: }
Oct  1 12:50:49 np0005464891 systemd[1]: libpod-b8fcdb2a7a8241c3a63c4f6500d4e47eeb6bc4e0ce5bcb37ed04b087f8d9b737.scope: Deactivated successfully.
Oct  1 12:50:49 np0005464891 nova_compute[259907]: 2025-10-01 16:50:49.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:50:49 np0005464891 podman[285433]: 2025-10-01 16:50:49.21336157 +0000 UTC m=+0.028513427 container died b8fcdb2a7a8241c3a63c4f6500d4e47eeb6bc4e0ce5bcb37ed04b087f8d9b737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_solomon, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 12:50:49 np0005464891 systemd[1]: var-lib-containers-storage-overlay-6ecd92b32cc6eba8b3acb8114a5ce12ab39b01c5d0aa114f52a8c0f748c886d8-merged.mount: Deactivated successfully.
Oct  1 12:50:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Oct  1 12:50:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Oct  1 12:50:49 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Oct  1 12:50:49 np0005464891 podman[285433]: 2025-10-01 16:50:49.267855993 +0000 UTC m=+0.083007800 container remove b8fcdb2a7a8241c3a63c4f6500d4e47eeb6bc4e0ce5bcb37ed04b087f8d9b737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:50:49 np0005464891 systemd[1]: libpod-conmon-b8fcdb2a7a8241c3a63c4f6500d4e47eeb6bc4e0ce5bcb37ed04b087f8d9b737.scope: Deactivated successfully.
Oct  1 12:50:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/750323302' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/750323302' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:49 np0005464891 nova_compute[259907]: 2025-10-01 16:50:49.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:50:50 np0005464891 podman[285589]: 2025-10-01 16:50:50.050078114 +0000 UTC m=+0.055544083 container create eb3806a1d6521dd5df19b6fff90f39e18a83fa574b3c82da60a56587249981ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_faraday, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:50:50 np0005464891 systemd[1]: Started libpod-conmon-eb3806a1d6521dd5df19b6fff90f39e18a83fa574b3c82da60a56587249981ae.scope.
Oct  1 12:50:50 np0005464891 podman[285589]: 2025-10-01 16:50:50.020691743 +0000 UTC m=+0.026157762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:50:50 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:50:50 np0005464891 podman[285589]: 2025-10-01 16:50:50.151854999 +0000 UTC m=+0.157320978 container init eb3806a1d6521dd5df19b6fff90f39e18a83fa574b3c82da60a56587249981ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_faraday, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 12:50:50 np0005464891 podman[285589]: 2025-10-01 16:50:50.158103511 +0000 UTC m=+0.163569440 container start eb3806a1d6521dd5df19b6fff90f39e18a83fa574b3c82da60a56587249981ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_faraday, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  1 12:50:50 np0005464891 podman[285589]: 2025-10-01 16:50:50.1616908 +0000 UTC m=+0.167156759 container attach eb3806a1d6521dd5df19b6fff90f39e18a83fa574b3c82da60a56587249981ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:50:50 np0005464891 stupefied_faraday[285606]: 167 167
Oct  1 12:50:50 np0005464891 systemd[1]: libpod-eb3806a1d6521dd5df19b6fff90f39e18a83fa574b3c82da60a56587249981ae.scope: Deactivated successfully.
Oct  1 12:50:50 np0005464891 conmon[285606]: conmon eb3806a1d6521dd5df19 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eb3806a1d6521dd5df19b6fff90f39e18a83fa574b3c82da60a56587249981ae.scope/container/memory.events
Oct  1 12:50:50 np0005464891 podman[285589]: 2025-10-01 16:50:50.168023584 +0000 UTC m=+0.173489513 container died eb3806a1d6521dd5df19b6fff90f39e18a83fa574b3c82da60a56587249981ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 12:50:50 np0005464891 systemd[1]: var-lib-containers-storage-overlay-4f2d1e5a3af5443c6c8ffe8e834463e83fc4946847dfd5a85d27a79bb99e7435-merged.mount: Deactivated successfully.
Oct  1 12:50:50 np0005464891 podman[285589]: 2025-10-01 16:50:50.214099914 +0000 UTC m=+0.219565843 container remove eb3806a1d6521dd5df19b6fff90f39e18a83fa574b3c82da60a56587249981ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_faraday, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 12:50:50 np0005464891 systemd[1]: libpod-conmon-eb3806a1d6521dd5df19b6fff90f39e18a83fa574b3c82da60a56587249981ae.scope: Deactivated successfully.
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.301 2 DEBUG oslo_concurrency.lockutils [None req-ffdd0ef4-e933-4f3a-9c5a-d03ea0834521 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquiring lock "dc697861-16c7-4baa-8c59-84deb0c0b65c" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.304 2 DEBUG oslo_concurrency.lockutils [None req-ffdd0ef4-e933-4f3a-9c5a-d03ea0834521 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "dc697861-16c7-4baa-8c59-84deb0c0b65c" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.319 2 DEBUG nova.objects.instance [None req-ffdd0ef4-e933-4f3a-9c5a-d03ea0834521 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lazy-loading 'flavor' on Instance uuid dc697861-16c7-4baa-8c59-84deb0c0b65c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.337 2 INFO nova.virt.libvirt.driver [None req-ffdd0ef4-e933-4f3a-9c5a-d03ea0834521 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Ignoring supplied device name: /dev/vdb#033[00m
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.354 2 DEBUG oslo_concurrency.lockutils [None req-ffdd0ef4-e933-4f3a-9c5a-d03ea0834521 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "dc697861-16c7-4baa-8c59-84deb0c0b65c" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:50:50 np0005464891 podman[285631]: 2025-10-01 16:50:50.386823455 +0000 UTC m=+0.043168870 container create e287db1e49369727d9556b0645ad5e1b288e3a542d39ffc82481763e39fbfba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 12:50:50 np0005464891 systemd[1]: Started libpod-conmon-e287db1e49369727d9556b0645ad5e1b288e3a542d39ffc82481763e39fbfba9.scope.
Oct  1 12:50:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:50:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3541343366' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:50:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:50:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3541343366' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:50:50 np0005464891 podman[285631]: 2025-10-01 16:50:50.372004107 +0000 UTC m=+0.028349532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:50:50 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:50:50 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/666aa0b9544284f6bd3f71456c29353ba797dec78dc85f6efcb20b0faeba4ec0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:50:50 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/666aa0b9544284f6bd3f71456c29353ba797dec78dc85f6efcb20b0faeba4ec0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:50:50 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/666aa0b9544284f6bd3f71456c29353ba797dec78dc85f6efcb20b0faeba4ec0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:50:50 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/666aa0b9544284f6bd3f71456c29353ba797dec78dc85f6efcb20b0faeba4ec0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:50:50 np0005464891 podman[285631]: 2025-10-01 16:50:50.497024533 +0000 UTC m=+0.153369978 container init e287db1e49369727d9556b0645ad5e1b288e3a542d39ffc82481763e39fbfba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 12:50:50 np0005464891 podman[285631]: 2025-10-01 16:50:50.508408916 +0000 UTC m=+0.164754371 container start e287db1e49369727d9556b0645ad5e1b288e3a542d39ffc82481763e39fbfba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 12:50:50 np0005464891 podman[285631]: 2025-10-01 16:50:50.512200102 +0000 UTC m=+0.168545517 container attach e287db1e49369727d9556b0645ad5e1b288e3a542d39ffc82481763e39fbfba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.524 2 DEBUG oslo_concurrency.lockutils [None req-ffdd0ef4-e933-4f3a-9c5a-d03ea0834521 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquiring lock "dc697861-16c7-4baa-8c59-84deb0c0b65c" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.526 2 DEBUG oslo_concurrency.lockutils [None req-ffdd0ef4-e933-4f3a-9c5a-d03ea0834521 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "dc697861-16c7-4baa-8c59-84deb0c0b65c" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.527 2 INFO nova.compute.manager [None req-ffdd0ef4-e933-4f3a-9c5a-d03ea0834521 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Attaching volume 2f7bf579-a431-4cc1-8235-09a8fc3f51a4 to /dev/vdb#033[00m
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.666 2 DEBUG os_brick.utils [None req-ffdd0ef4-e933-4f3a-9c5a-d03ea0834521 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.668 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.688 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.689 741 DEBUG oslo.privsep.daemon [-] privsep: reply[bd536468-4544-4d7d-8b31-79920c908138]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.692 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.704 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.704 741 DEBUG oslo.privsep.daemon [-] privsep: reply[ecb033de-a7ea-4c46-9c0e-8be96e749634]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.707 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.720 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.720 741 DEBUG oslo.privsep.daemon [-] privsep: reply[25a10842-2d9f-49f9-ab90-35491cbfb441]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.723 741 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcf48e4-cca8-4e05-82ea-4d4c66194f7c]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.724 2 DEBUG oslo_concurrency.processutils [None req-ffdd0ef4-e933-4f3a-9c5a-d03ea0834521 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:50:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.764 2 DEBUG oslo_concurrency.processutils [None req-ffdd0ef4-e933-4f3a-9c5a-d03ea0834521 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] CMD "nvme version" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.767 2 DEBUG os_brick.initiator.connectors.lightos [None req-ffdd0ef4-e933-4f3a-9c5a-d03ea0834521 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.768 2 DEBUG os_brick.initiator.connectors.lightos [None req-ffdd0ef4-e933-4f3a-9c5a-d03ea0834521 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.768 2 DEBUG os_brick.initiator.connectors.lightos [None req-ffdd0ef4-e933-4f3a-9c5a-d03ea0834521 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.769 2 DEBUG os_brick.utils [None req-ffdd0ef4-e933-4f3a-9c5a-d03ea0834521 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] <== get_connector_properties: return (101ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 12:50:50 np0005464891 nova_compute[259907]: 2025-10-01 16:50:50.770 2 DEBUG nova.virt.block_device [None req-ffdd0ef4-e933-4f3a-9c5a-d03ea0834521 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Updating existing volume attachment record: 918137c5-6350-4503-9797-09a819f85dea _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 12:50:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1352: 321 pgs: 321 active+clean; 167 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 707 KiB/s rd, 3.3 MiB/s wr, 344 op/s
Oct  1 12:50:51 np0005464891 nova_compute[259907]: 2025-10-01 16:50:51.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:50:51 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4265701472' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:50:51 np0005464891 nova_compute[259907]: 2025-10-01 16:50:51.644 2 DEBUG nova.objects.instance [None req-ffdd0ef4-e933-4f3a-9c5a-d03ea0834521 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lazy-loading 'flavor' on Instance uuid dc697861-16c7-4baa-8c59-84deb0c0b65c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:50:51 np0005464891 nifty_zhukovsky[285648]: {
Oct  1 12:50:51 np0005464891 nifty_zhukovsky[285648]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:50:51 np0005464891 nifty_zhukovsky[285648]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:50:51 np0005464891 nifty_zhukovsky[285648]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:50:51 np0005464891 nifty_zhukovsky[285648]:        "osd_id": 2,
Oct  1 12:50:51 np0005464891 nifty_zhukovsky[285648]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:50:51 np0005464891 nifty_zhukovsky[285648]:        "type": "bluestore"
Oct  1 12:50:51 np0005464891 nifty_zhukovsky[285648]:    },
Oct  1 12:50:51 np0005464891 nifty_zhukovsky[285648]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:50:51 np0005464891 nifty_zhukovsky[285648]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:50:51 np0005464891 nifty_zhukovsky[285648]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:50:51 np0005464891 nifty_zhukovsky[285648]:        "osd_id": 0,
Oct  1 12:50:51 np0005464891 nifty_zhukovsky[285648]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:50:51 np0005464891 nifty_zhukovsky[285648]:        "type": "bluestore"
Oct  1 12:50:51 np0005464891 nifty_zhukovsky[285648]:    },
Oct  1 12:50:51 np0005464891 nifty_zhukovsky[285648]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:50:51 np0005464891 nifty_zhukovsky[285648]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:50:51 np0005464891 nifty_zhukovsky[285648]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:50:51 np0005464891 nifty_zhukovsky[285648]:        "osd_id": 1,
Oct  1 12:50:51 np0005464891 nifty_zhukovsky[285648]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:50:51 np0005464891 nifty_zhukovsky[285648]:        "type": "bluestore"
Oct  1 12:50:51 np0005464891 nifty_zhukovsky[285648]:    }
Oct  1 12:50:51 np0005464891 nifty_zhukovsky[285648]: }
Oct  1 12:50:51 np0005464891 nova_compute[259907]: 2025-10-01 16:50:51.676 2 DEBUG nova.virt.libvirt.driver [None req-ffdd0ef4-e933-4f3a-9c5a-d03ea0834521 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Attempting to attach volume 2f7bf579-a431-4cc1-8235-09a8fc3f51a4 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  1 12:50:51 np0005464891 nova_compute[259907]: 2025-10-01 16:50:51.682 2 DEBUG nova.virt.libvirt.guest [None req-ffdd0ef4-e933-4f3a-9c5a-d03ea0834521 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] attach device xml: <disk type="network" device="disk">
Oct  1 12:50:51 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:50:51 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-2f7bf579-a431-4cc1-8235-09a8fc3f51a4">
Oct  1 12:50:51 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:50:51 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:50:51 np0005464891 nova_compute[259907]:  <auth username="openstack">
Oct  1 12:50:51 np0005464891 nova_compute[259907]:    <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:50:51 np0005464891 nova_compute[259907]:  </auth>
Oct  1 12:50:51 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:50:51 np0005464891 nova_compute[259907]:  <serial>2f7bf579-a431-4cc1-8235-09a8fc3f51a4</serial>
Oct  1 12:50:51 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:50:51 np0005464891 nova_compute[259907]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  1 12:50:51 np0005464891 systemd[1]: libpod-e287db1e49369727d9556b0645ad5e1b288e3a542d39ffc82481763e39fbfba9.scope: Deactivated successfully.
Oct  1 12:50:51 np0005464891 podman[285631]: 2025-10-01 16:50:51.714529942 +0000 UTC m=+1.370875417 container died e287db1e49369727d9556b0645ad5e1b288e3a542d39ffc82481763e39fbfba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 12:50:51 np0005464891 systemd[1]: libpod-e287db1e49369727d9556b0645ad5e1b288e3a542d39ffc82481763e39fbfba9.scope: Consumed 1.202s CPU time.
Oct  1 12:50:51 np0005464891 systemd[1]: var-lib-containers-storage-overlay-666aa0b9544284f6bd3f71456c29353ba797dec78dc85f6efcb20b0faeba4ec0-merged.mount: Deactivated successfully.
Oct  1 12:50:51 np0005464891 podman[285631]: 2025-10-01 16:50:51.793050147 +0000 UTC m=+1.449395572 container remove e287db1e49369727d9556b0645ad5e1b288e3a542d39ffc82481763e39fbfba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:50:51 np0005464891 systemd[1]: libpod-conmon-e287db1e49369727d9556b0645ad5e1b288e3a542d39ffc82481763e39fbfba9.scope: Deactivated successfully.
Oct  1 12:50:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:50:51 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:50:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:50:51 np0005464891 nova_compute[259907]: 2025-10-01 16:50:51.859 2 DEBUG nova.virt.libvirt.driver [None req-ffdd0ef4-e933-4f3a-9c5a-d03ea0834521 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:50:51 np0005464891 nova_compute[259907]: 2025-10-01 16:50:51.859 2 DEBUG nova.virt.libvirt.driver [None req-ffdd0ef4-e933-4f3a-9c5a-d03ea0834521 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:50:51 np0005464891 nova_compute[259907]: 2025-10-01 16:50:51.859 2 DEBUG nova.virt.libvirt.driver [None req-ffdd0ef4-e933-4f3a-9c5a-d03ea0834521 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:50:51 np0005464891 nova_compute[259907]: 2025-10-01 16:50:51.860 2 DEBUG nova.virt.libvirt.driver [None req-ffdd0ef4-e933-4f3a-9c5a-d03ea0834521 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] No VIF found with MAC fa:16:3e:a7:26:f6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:50:51 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:50:51 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 64270c86-ea84-4207-97d9-bd9d25c693a9 does not exist
Oct  1 12:50:51 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 2ebfbb08-9590-412f-a0d6-534a817cbbc3 does not exist
Oct  1 12:50:52 np0005464891 nova_compute[259907]: 2025-10-01 16:50:52.071 2 DEBUG oslo_concurrency.lockutils [None req-ffdd0ef4-e933-4f3a-9c5a-d03ea0834521 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "dc697861-16c7-4baa-8c59-84deb0c0b65c" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:50:52 np0005464891 podman[285771]: 2025-10-01 16:50:52.172475475 +0000 UTC m=+0.093064776 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  1 12:50:52 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:50:52 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:50:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1353: 321 pgs: 321 active+clean; 167 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 519 KiB/s rd, 1.6 MiB/s wr, 260 op/s
Oct  1 12:50:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Oct  1 12:50:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Oct  1 12:50:53 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Oct  1 12:50:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:50:53 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/854452359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:50:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Oct  1 12:50:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Oct  1 12:50:54 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Oct  1 12:50:54 np0005464891 nova_compute[259907]: 2025-10-01 16:50:54.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1356: 321 pgs: 321 active+clean; 167 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 934 KiB/s rd, 31 KiB/s wr, 230 op/s
Oct  1 12:50:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Oct  1 12:50:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Oct  1 12:50:55 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Oct  1 12:50:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:50:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Oct  1 12:50:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Oct  1 12:50:55 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Oct  1 12:50:56 np0005464891 nova_compute[259907]: 2025-10-01 16:50:56.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:50:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1359: 321 pgs: 321 active+clean; 167 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 44 KiB/s wr, 145 op/s
Oct  1 12:50:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:50:57 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3078634699' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:50:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Oct  1 12:50:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Oct  1 12:50:57 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Oct  1 12:50:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Oct  1 12:50:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Oct  1 12:50:58 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Oct  1 12:50:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:50:58 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3380660919' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:50:58 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1362: 321 pgs: 321 active+clean; 183 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 205 op/s
Oct  1 12:50:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Oct  1 12:50:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Oct  1 12:50:59 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Oct  1 12:50:59 np0005464891 nova_compute[259907]: 2025-10-01 16:50:59.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Oct  1 12:51:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Oct  1 12:51:00 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Oct  1 12:51:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:51:00 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1365: 321 pgs: 321 active+clean; 230 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.5 MiB/s wr, 354 op/s
Oct  1 12:51:01 np0005464891 nova_compute[259907]: 2025-10-01 16:51:01.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Oct  1 12:51:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Oct  1 12:51:01 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Oct  1 12:51:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:51:01 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4028298266' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:51:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:51:01 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3575658049' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:51:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:51:01 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3575658049' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:51:02 np0005464891 podman[285790]: 2025-10-01 16:51:02.027219642 +0000 UTC m=+0.131344482 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  1 12:51:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Oct  1 12:51:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Oct  1 12:51:02 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Oct  1 12:51:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:51:02 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2384878877' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:51:02 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1368: 321 pgs: 321 active+clean; 246 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 6.3 MiB/s wr, 334 op/s
Oct  1 12:51:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Oct  1 12:51:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Oct  1 12:51:03 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Oct  1 12:51:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Oct  1 12:51:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Oct  1 12:51:04 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Oct  1 12:51:04 np0005464891 nova_compute[259907]: 2025-10-01 16:51:04.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:04 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1371: 321 pgs: 321 active+clean; 272 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 5.3 MiB/s wr, 275 op/s
Oct  1 12:51:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:51:05 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/174164071' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:51:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:51:05 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/174164071' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:51:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Oct  1 12:51:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Oct  1 12:51:05 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Oct  1 12:51:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:51:05 np0005464891 podman[285814]: 2025-10-01 16:51:05.970770521 +0000 UTC m=+0.076822788 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  1 12:51:06 np0005464891 nova_compute[259907]: 2025-10-01 16:51:06.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:51:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1083474500' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:51:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:51:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1083474500' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:51:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Oct  1 12:51:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Oct  1 12:51:06 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Oct  1 12:51:06 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1374: 321 pgs: 321 active+clean; 291 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.5 MiB/s wr, 343 op/s
Oct  1 12:51:07 np0005464891 nova_compute[259907]: 2025-10-01 16:51:07.555 2 DEBUG oslo_concurrency.lockutils [None req-36fa176d-d63f-419f-889a-c9bbf4a69ec9 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquiring lock "dc697861-16c7-4baa-8c59-84deb0c0b65c" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:51:07 np0005464891 nova_compute[259907]: 2025-10-01 16:51:07.556 2 DEBUG oslo_concurrency.lockutils [None req-36fa176d-d63f-419f-889a-c9bbf4a69ec9 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "dc697861-16c7-4baa-8c59-84deb0c0b65c" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:51:07 np0005464891 nova_compute[259907]: 2025-10-01 16:51:07.572 2 INFO nova.compute.manager [None req-36fa176d-d63f-419f-889a-c9bbf4a69ec9 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Detaching volume 2f7bf579-a431-4cc1-8235-09a8fc3f51a4#033[00m
Oct  1 12:51:07 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:07.675 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:51:07 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:07.677 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 12:51:07 np0005464891 nova_compute[259907]: 2025-10-01 16:51:07.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:07 np0005464891 nova_compute[259907]: 2025-10-01 16:51:07.777 2 INFO nova.virt.block_device [None req-36fa176d-d63f-419f-889a-c9bbf4a69ec9 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Attempting to driver detach volume 2f7bf579-a431-4cc1-8235-09a8fc3f51a4 from mountpoint /dev/vdb#033[00m
Oct  1 12:51:07 np0005464891 nova_compute[259907]: 2025-10-01 16:51:07.790 2 DEBUG nova.virt.libvirt.driver [None req-36fa176d-d63f-419f-889a-c9bbf4a69ec9 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Attempting to detach device vdb from instance dc697861-16c7-4baa-8c59-84deb0c0b65c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  1 12:51:07 np0005464891 nova_compute[259907]: 2025-10-01 16:51:07.792 2 DEBUG nova.virt.libvirt.guest [None req-36fa176d-d63f-419f-889a-c9bbf4a69ec9 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:51:07 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:51:07 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-2f7bf579-a431-4cc1-8235-09a8fc3f51a4">
Oct  1 12:51:07 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:51:07 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:51:07 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:51:07 np0005464891 nova_compute[259907]:  <serial>2f7bf579-a431-4cc1-8235-09a8fc3f51a4</serial>
Oct  1 12:51:07 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:51:07 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:51:07 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:51:07 np0005464891 nova_compute[259907]: 2025-10-01 16:51:07.802 2 INFO nova.virt.libvirt.driver [None req-36fa176d-d63f-419f-889a-c9bbf4a69ec9 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Successfully detached device vdb from instance dc697861-16c7-4baa-8c59-84deb0c0b65c from the persistent domain config.#033[00m
Oct  1 12:51:07 np0005464891 nova_compute[259907]: 2025-10-01 16:51:07.802 2 DEBUG nova.virt.libvirt.driver [None req-36fa176d-d63f-419f-889a-c9bbf4a69ec9 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance dc697861-16c7-4baa-8c59-84deb0c0b65c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  1 12:51:07 np0005464891 nova_compute[259907]: 2025-10-01 16:51:07.803 2 DEBUG nova.virt.libvirt.guest [None req-36fa176d-d63f-419f-889a-c9bbf4a69ec9 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:51:07 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:51:07 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-2f7bf579-a431-4cc1-8235-09a8fc3f51a4">
Oct  1 12:51:07 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:51:07 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:51:07 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:51:07 np0005464891 nova_compute[259907]:  <serial>2f7bf579-a431-4cc1-8235-09a8fc3f51a4</serial>
Oct  1 12:51:07 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:51:07 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:51:07 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:51:07 np0005464891 nova_compute[259907]: 2025-10-01 16:51:07.907 2 DEBUG nova.virt.libvirt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Received event <DeviceRemovedEvent: 1759337467.9067826, dc697861-16c7-4baa-8c59-84deb0c0b65c => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  1 12:51:07 np0005464891 nova_compute[259907]: 2025-10-01 16:51:07.910 2 DEBUG nova.virt.libvirt.driver [None req-36fa176d-d63f-419f-889a-c9bbf4a69ec9 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance dc697861-16c7-4baa-8c59-84deb0c0b65c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  1 12:51:07 np0005464891 nova_compute[259907]: 2025-10-01 16:51:07.912 2 INFO nova.virt.libvirt.driver [None req-36fa176d-d63f-419f-889a-c9bbf4a69ec9 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Successfully detached device vdb from instance dc697861-16c7-4baa-8c59-84deb0c0b65c from the live domain config.#033[00m
Oct  1 12:51:08 np0005464891 nova_compute[259907]: 2025-10-01 16:51:08.098 2 DEBUG nova.objects.instance [None req-36fa176d-d63f-419f-889a-c9bbf4a69ec9 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lazy-loading 'flavor' on Instance uuid dc697861-16c7-4baa-8c59-84deb0c0b65c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:51:08 np0005464891 nova_compute[259907]: 2025-10-01 16:51:08.140 2 DEBUG oslo_concurrency.lockutils [None req-36fa176d-d63f-419f-889a-c9bbf4a69ec9 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "dc697861-16c7-4baa-8c59-84deb0c0b65c" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:51:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Oct  1 12:51:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Oct  1 12:51:08 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Oct  1 12:51:08 np0005464891 nova_compute[259907]: 2025-10-01 16:51:08.911 2 DEBUG oslo_concurrency.lockutils [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquiring lock "dc697861-16c7-4baa-8c59-84deb0c0b65c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:51:08 np0005464891 nova_compute[259907]: 2025-10-01 16:51:08.912 2 DEBUG oslo_concurrency.lockutils [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "dc697861-16c7-4baa-8c59-84deb0c0b65c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:51:08 np0005464891 nova_compute[259907]: 2025-10-01 16:51:08.912 2 DEBUG oslo_concurrency.lockutils [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquiring lock "dc697861-16c7-4baa-8c59-84deb0c0b65c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:51:08 np0005464891 nova_compute[259907]: 2025-10-01 16:51:08.912 2 DEBUG oslo_concurrency.lockutils [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "dc697861-16c7-4baa-8c59-84deb0c0b65c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:51:08 np0005464891 nova_compute[259907]: 2025-10-01 16:51:08.913 2 DEBUG oslo_concurrency.lockutils [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "dc697861-16c7-4baa-8c59-84deb0c0b65c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:51:08 np0005464891 nova_compute[259907]: 2025-10-01 16:51:08.914 2 INFO nova.compute.manager [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Terminating instance#033[00m
Oct  1 12:51:08 np0005464891 nova_compute[259907]: 2025-10-01 16:51:08.916 2 DEBUG nova.compute.manager [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 12:51:08 np0005464891 kernel: tapb4aee080-99 (unregistering): left promiscuous mode
Oct  1 12:51:08 np0005464891 NetworkManager[44940]: <info>  [1759337468.9763] device (tapb4aee080-99): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 12:51:08 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1376: 321 pgs: 321 active+clean; 306 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 6.7 MiB/s wr, 364 op/s
Oct  1 12:51:08 np0005464891 ovn_controller[152409]: 2025-10-01T16:51:08Z|00110|binding|INFO|Releasing lport b4aee080-9989-4dcc-af16-952142e561a9 from this chassis (sb_readonly=0)
Oct  1 12:51:08 np0005464891 ovn_controller[152409]: 2025-10-01T16:51:08Z|00111|binding|INFO|Setting lport b4aee080-9989-4dcc-af16-952142e561a9 down in Southbound
Oct  1 12:51:08 np0005464891 ovn_controller[152409]: 2025-10-01T16:51:08Z|00112|binding|INFO|Removing iface tapb4aee080-99 ovn-installed in OVS
Oct  1 12:51:08 np0005464891 nova_compute[259907]: 2025-10-01 16:51:08.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:08.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:08.999 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:26:f6 10.100.0.10'], port_security=['fa:16:3e:a7:26:f6 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'dc697861-16c7-4baa-8c59-84deb0c0b65c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9217a609-3f35-4647-87cd-e08d95dd1da1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19100b7dd5c9420db1d7f374559a9498', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd382a6a-4351-4841-beca-09ddced00c45', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3460a047-44ee-4ad2-938a-c15de55876d0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=b4aee080-9989-4dcc-af16-952142e561a9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:51:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:09.001 162546 INFO neutron.agent.ovn.metadata.agent [-] Port b4aee080-9989-4dcc-af16-952142e561a9 in datapath 9217a609-3f35-4647-87cd-e08d95dd1da1 unbound from our chassis#033[00m
Oct  1 12:51:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:09.003 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9217a609-3f35-4647-87cd-e08d95dd1da1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 12:51:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:09.005 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9ecce3-54a7-4231-81ac-dcaeedff6b97]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:09.005 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1 namespace which is not needed anymore#033[00m
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:09 np0005464891 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct  1 12:51:09 np0005464891 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 14.533s CPU time.
Oct  1 12:51:09 np0005464891 systemd-machined[214891]: Machine qemu-11-instance-0000000b terminated.
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.166 2 INFO nova.virt.libvirt.driver [-] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Instance destroyed successfully.#033[00m
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.166 2 DEBUG nova.objects.instance [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lazy-loading 'resources' on Instance uuid dc697861-16c7-4baa-8c59-84deb0c0b65c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.183 2 DEBUG nova.virt.libvirt.vif [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T16:50:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-818113866',display_name='tempest-VolumesBackupsTest-instance-818113866',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-818113866',id=11,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDgeFrb1867YBXwjKa/TQ0YYXKREXQsqF/dn32JrvKEOrj/bBiwwtISkB6YnLQq8eW7daoes7oHlqUTk/TbKbHXimSuQtQY8Q+G8dxvoBF1xsi9Pxx4AVYXydkaRNIq/EA==',key_name='tempest-keypair-213857542',keypairs=<?>,launch_index=0,launched_at=2025-10-01T16:50:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='19100b7dd5c9420db1d7f374559a9498',ramdisk_id='',reservation_id='r-9l7ddtdt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-1599024574',owner_user_name='tempest-VolumesBackupsTest-1599024574-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T16:50:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='825e1f460cae49ad9834c4d7d67e24fe',uuid=dc697861-16c7-4baa-8c59-84deb0c0b65c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b4aee080-9989-4dcc-af16-952142e561a9", "address": "fa:16:3e:a7:26:f6", "network": {"id": "9217a609-3f35-4647-87cd-e08d95dd1da1", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-994008652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19100b7dd5c9420db1d7f374559a9498", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4aee080-99", "ovs_interfaceid": "b4aee080-9989-4dcc-af16-952142e561a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.183 2 DEBUG nova.network.os_vif_util [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Converting VIF {"id": "b4aee080-9989-4dcc-af16-952142e561a9", "address": "fa:16:3e:a7:26:f6", "network": {"id": "9217a609-3f35-4647-87cd-e08d95dd1da1", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-994008652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19100b7dd5c9420db1d7f374559a9498", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4aee080-99", "ovs_interfaceid": "b4aee080-9989-4dcc-af16-952142e561a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.185 2 DEBUG nova.network.os_vif_util [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a7:26:f6,bridge_name='br-int',has_traffic_filtering=True,id=b4aee080-9989-4dcc-af16-952142e561a9,network=Network(9217a609-3f35-4647-87cd-e08d95dd1da1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4aee080-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.185 2 DEBUG os_vif [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:26:f6,bridge_name='br-int',has_traffic_filtering=True,id=b4aee080-9989-4dcc-af16-952142e561a9,network=Network(9217a609-3f35-4647-87cd-e08d95dd1da1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4aee080-99') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:09 np0005464891 neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1[284797]: [NOTICE]   (284801) : haproxy version is 2.8.14-c23fe91
Oct  1 12:51:09 np0005464891 neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1[284797]: [NOTICE]   (284801) : path to executable is /usr/sbin/haproxy
Oct  1 12:51:09 np0005464891 neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1[284797]: [WARNING]  (284801) : Exiting Master process...
Oct  1 12:51:09 np0005464891 neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1[284797]: [WARNING]  (284801) : Exiting Master process...
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.190 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4aee080-99, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:09 np0005464891 neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1[284797]: [ALERT]    (284801) : Current worker (284803) exited with code 143 (Terminated)
Oct  1 12:51:09 np0005464891 neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1[284797]: [WARNING]  (284801) : All workers exited. Exiting... (0)
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:09 np0005464891 systemd[1]: libpod-f2abd27465463b3e48f66f90b73045ba24ce79f4f2992b5beb07eae994bfec3d.scope: Deactivated successfully.
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.202 2 INFO os_vif [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:26:f6,bridge_name='br-int',has_traffic_filtering=True,id=b4aee080-9989-4dcc-af16-952142e561a9,network=Network(9217a609-3f35-4647-87cd-e08d95dd1da1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4aee080-99')#033[00m
Oct  1 12:51:09 np0005464891 podman[285864]: 2025-10-01 16:51:09.20244388 +0000 UTC m=+0.054446912 container died f2abd27465463b3e48f66f90b73045ba24ce79f4f2992b5beb07eae994bfec3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:51:09 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f2abd27465463b3e48f66f90b73045ba24ce79f4f2992b5beb07eae994bfec3d-userdata-shm.mount: Deactivated successfully.
Oct  1 12:51:09 np0005464891 systemd[1]: var-lib-containers-storage-overlay-de938dbb380afc0e8f32ff4fc3049b4f0c01366dad6acc8666ea10cfc7702995-merged.mount: Deactivated successfully.
Oct  1 12:51:09 np0005464891 podman[285864]: 2025-10-01 16:51:09.264989594 +0000 UTC m=+0.116992626 container cleanup f2abd27465463b3e48f66f90b73045ba24ce79f4f2992b5beb07eae994bfec3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  1 12:51:09 np0005464891 systemd[1]: libpod-conmon-f2abd27465463b3e48f66f90b73045ba24ce79f4f2992b5beb07eae994bfec3d.scope: Deactivated successfully.
Oct  1 12:51:09 np0005464891 podman[285919]: 2025-10-01 16:51:09.369858765 +0000 UTC m=+0.070905126 container remove f2abd27465463b3e48f66f90b73045ba24ce79f4f2992b5beb07eae994bfec3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:51:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:09.381 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[42fd2e8a-4e54-47c1-adc0-6a1a76dbcc17]: (4, ('Wed Oct  1 04:51:09 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1 (f2abd27465463b3e48f66f90b73045ba24ce79f4f2992b5beb07eae994bfec3d)\nf2abd27465463b3e48f66f90b73045ba24ce79f4f2992b5beb07eae994bfec3d\nWed Oct  1 04:51:09 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1 (f2abd27465463b3e48f66f90b73045ba24ce79f4f2992b5beb07eae994bfec3d)\nf2abd27465463b3e48f66f90b73045ba24ce79f4f2992b5beb07eae994bfec3d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:09.384 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[3dcd2ee1-3789-4e9a-bdb7-cff3c2980d72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:09.385 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9217a609-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:51:09 np0005464891 kernel: tap9217a609-30: left promiscuous mode
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:09.397 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[51e0c796-8b09-418c-81f9-b1880a8d125b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:09.436 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[f91356af-0722-4e8d-9a05-6c00b2b86b00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:09.438 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7a3e45-7672-4a6a-a43b-b1ff6e113191]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:09.463 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[2cece1e5-bd65-4886-b1ca-4f1fa5098302]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437958, 'reachable_time': 31824, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285934, 'error': None, 'target': 'ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:09 np0005464891 systemd[1]: run-netns-ovnmeta\x2d9217a609\x2d3f35\x2d4647\x2d87cd\x2de08d95dd1da1.mount: Deactivated successfully.
Oct  1 12:51:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:09.472 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9217a609-3f35-4647-87cd-e08d95dd1da1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 12:51:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:09.473 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[4efb4253-73a6-410f-884c-5a31055fda24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.694 2 INFO nova.virt.libvirt.driver [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Deleting instance files /var/lib/nova/instances/dc697861-16c7-4baa-8c59-84deb0c0b65c_del#033[00m
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.695 2 INFO nova.virt.libvirt.driver [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Deletion of /var/lib/nova/instances/dc697861-16c7-4baa-8c59-84deb0c0b65c_del complete#033[00m
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.701 2 DEBUG nova.compute.manager [req-371c02c3-69d4-492b-80a5-bb0afc7ce2a1 req-f67f1117-9b99-47d6-a651-81358114341b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Received event network-vif-unplugged-b4aee080-9989-4dcc-af16-952142e561a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.701 2 DEBUG oslo_concurrency.lockutils [req-371c02c3-69d4-492b-80a5-bb0afc7ce2a1 req-f67f1117-9b99-47d6-a651-81358114341b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "dc697861-16c7-4baa-8c59-84deb0c0b65c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.701 2 DEBUG oslo_concurrency.lockutils [req-371c02c3-69d4-492b-80a5-bb0afc7ce2a1 req-f67f1117-9b99-47d6-a651-81358114341b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "dc697861-16c7-4baa-8c59-84deb0c0b65c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.702 2 DEBUG oslo_concurrency.lockutils [req-371c02c3-69d4-492b-80a5-bb0afc7ce2a1 req-f67f1117-9b99-47d6-a651-81358114341b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "dc697861-16c7-4baa-8c59-84deb0c0b65c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.702 2 DEBUG nova.compute.manager [req-371c02c3-69d4-492b-80a5-bb0afc7ce2a1 req-f67f1117-9b99-47d6-a651-81358114341b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] No waiting events found dispatching network-vif-unplugged-b4aee080-9989-4dcc-af16-952142e561a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.702 2 DEBUG nova.compute.manager [req-371c02c3-69d4-492b-80a5-bb0afc7ce2a1 req-f67f1117-9b99-47d6-a651-81358114341b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Received event network-vif-unplugged-b4aee080-9989-4dcc-af16-952142e561a9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.749 2 INFO nova.compute.manager [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.750 2 DEBUG oslo.service.loopingcall [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.751 2 DEBUG nova.compute.manager [-] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 12:51:09 np0005464891 nova_compute[259907]: 2025-10-01 16:51:09.751 2 DEBUG nova.network.neutron [-] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 12:51:10 np0005464891 nova_compute[259907]: 2025-10-01 16:51:10.725 2 DEBUG nova.network.neutron [-] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:51:10 np0005464891 nova_compute[259907]: 2025-10-01 16:51:10.747 2 INFO nova.compute.manager [-] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Took 1.00 seconds to deallocate network for instance.#033[00m
Oct  1 12:51:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:51:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Oct  1 12:51:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Oct  1 12:51:10 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Oct  1 12:51:10 np0005464891 nova_compute[259907]: 2025-10-01 16:51:10.812 2 DEBUG oslo_concurrency.lockutils [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:51:10 np0005464891 nova_compute[259907]: 2025-10-01 16:51:10.813 2 DEBUG oslo_concurrency.lockutils [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:51:10 np0005464891 nova_compute[259907]: 2025-10-01 16:51:10.887 2 DEBUG oslo_concurrency.processutils [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:51:10 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1378: 321 pgs: 321 active+clean; 253 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.5 MiB/s wr, 169 op/s
Oct  1 12:51:11 np0005464891 podman[285936]: 2025-10-01 16:51:11.001699986 +0000 UTC m=+0.100684887 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  1 12:51:11 np0005464891 nova_compute[259907]: 2025-10-01 16:51:11.076 2 DEBUG nova.compute.manager [req-88a50429-e8e6-4a05-a598-442be3b10e15 req-0d6976dd-63f4-4f55-8879-6901f146b65a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Received event network-vif-deleted-b4aee080-9989-4dcc-af16-952142e561a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:51:11 np0005464891 nova_compute[259907]: 2025-10-01 16:51:11.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:51:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4226100083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:51:11 np0005464891 nova_compute[259907]: 2025-10-01 16:51:11.326 2 DEBUG oslo_concurrency.processutils [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:51:11 np0005464891 nova_compute[259907]: 2025-10-01 16:51:11.335 2 DEBUG nova.compute.provider_tree [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:51:11 np0005464891 nova_compute[259907]: 2025-10-01 16:51:11.352 2 DEBUG nova.scheduler.client.report [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:51:11 np0005464891 nova_compute[259907]: 2025-10-01 16:51:11.385 2 DEBUG oslo_concurrency.lockutils [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:51:11 np0005464891 nova_compute[259907]: 2025-10-01 16:51:11.414 2 INFO nova.scheduler.client.report [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Deleted allocations for instance dc697861-16c7-4baa-8c59-84deb0c0b65c#033[00m
Oct  1 12:51:11 np0005464891 nova_compute[259907]: 2025-10-01 16:51:11.514 2 DEBUG oslo_concurrency.lockutils [None req-8f91f8ec-11d0-4b7d-9a85-f18235f6ac26 825e1f460cae49ad9834c4d7d67e24fe 19100b7dd5c9420db1d7f374559a9498 - - default default] Lock "dc697861-16c7-4baa-8c59-84deb0c0b65c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:51:11 np0005464891 nova_compute[259907]: 2025-10-01 16:51:11.676 2 DEBUG oslo_concurrency.lockutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Acquiring lock "84f412f7-074e-4bf3-b06c-ff2e47c89bcb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:51:11 np0005464891 nova_compute[259907]: 2025-10-01 16:51:11.677 2 DEBUG oslo_concurrency.lockutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Lock "84f412f7-074e-4bf3-b06c-ff2e47c89bcb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:51:11 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:11.680 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:51:11 np0005464891 nova_compute[259907]: 2025-10-01 16:51:11.694 2 DEBUG nova.compute.manager [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 12:51:11 np0005464891 nova_compute[259907]: 2025-10-01 16:51:11.763 2 DEBUG oslo_concurrency.lockutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:51:11 np0005464891 nova_compute[259907]: 2025-10-01 16:51:11.764 2 DEBUG oslo_concurrency.lockutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:51:11 np0005464891 nova_compute[259907]: 2025-10-01 16:51:11.772 2 DEBUG nova.virt.hardware [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 12:51:11 np0005464891 nova_compute[259907]: 2025-10-01 16:51:11.773 2 INFO nova.compute.claims [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 12:51:11 np0005464891 nova_compute[259907]: 2025-10-01 16:51:11.795 2 DEBUG nova.compute.manager [req-15c1f590-9450-47fa-9b3b-cc6e75ab07de req-30e95eef-0ceb-4928-bfba-df7ac5e0e069 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Received event network-vif-plugged-b4aee080-9989-4dcc-af16-952142e561a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:51:11 np0005464891 nova_compute[259907]: 2025-10-01 16:51:11.795 2 DEBUG oslo_concurrency.lockutils [req-15c1f590-9450-47fa-9b3b-cc6e75ab07de req-30e95eef-0ceb-4928-bfba-df7ac5e0e069 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "dc697861-16c7-4baa-8c59-84deb0c0b65c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:51:11 np0005464891 nova_compute[259907]: 2025-10-01 16:51:11.795 2 DEBUG oslo_concurrency.lockutils [req-15c1f590-9450-47fa-9b3b-cc6e75ab07de req-30e95eef-0ceb-4928-bfba-df7ac5e0e069 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "dc697861-16c7-4baa-8c59-84deb0c0b65c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:51:11 np0005464891 nova_compute[259907]: 2025-10-01 16:51:11.795 2 DEBUG oslo_concurrency.lockutils [req-15c1f590-9450-47fa-9b3b-cc6e75ab07de req-30e95eef-0ceb-4928-bfba-df7ac5e0e069 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "dc697861-16c7-4baa-8c59-84deb0c0b65c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:51:11 np0005464891 nova_compute[259907]: 2025-10-01 16:51:11.796 2 DEBUG nova.compute.manager [req-15c1f590-9450-47fa-9b3b-cc6e75ab07de req-30e95eef-0ceb-4928-bfba-df7ac5e0e069 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] No waiting events found dispatching network-vif-plugged-b4aee080-9989-4dcc-af16-952142e561a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:51:11 np0005464891 nova_compute[259907]: 2025-10-01 16:51:11.796 2 WARNING nova.compute.manager [req-15c1f590-9450-47fa-9b3b-cc6e75ab07de req-30e95eef-0ceb-4928-bfba-df7ac5e0e069 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Received unexpected event network-vif-plugged-b4aee080-9989-4dcc-af16-952142e561a9 for instance with vm_state deleted and task_state None.#033[00m
Oct  1 12:51:11 np0005464891 nova_compute[259907]: 2025-10-01 16:51:11.863 2 DEBUG oslo_concurrency.processutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:51:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:51:12
Oct  1 12:51:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:51:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:51:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['volumes', 'vms', 'default.rgw.control', 'images', 'backups', '.mgr', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data']
Oct  1 12:51:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:51:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:51:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:51:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:51:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:51:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:51:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:51:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:51:12 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4200914070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.341 2 DEBUG oslo_concurrency.processutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.351 2 DEBUG nova.compute.provider_tree [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:51:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:51:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:51:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:51:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:51:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:51:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:51:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:51:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:51:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:51:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.373 2 DEBUG nova.scheduler.client.report [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.407 2 DEBUG oslo_concurrency.lockutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.408 2 DEBUG nova.compute.manager [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 12:51:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:12.454 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:51:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:12.454 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:51:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:12.455 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.488 2 DEBUG nova.compute.manager [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.488 2 DEBUG nova.network.neutron [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.516 2 INFO nova.virt.libvirt.driver [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.544 2 DEBUG nova.compute.manager [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.602 2 INFO nova.virt.block_device [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Booting with volume fd3b174c-670e-4d17-b8de-e44e78e6bcf0 at /dev/vda#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.696 2 DEBUG nova.policy [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '99a779b3f1b644f590f56e3904b4c777', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1e5bc249518a47fd9bc1ca87595c86c7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.781 2 DEBUG os_brick.utils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.783 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.795 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.796 741 DEBUG oslo.privsep.daemon [-] privsep: reply[01ab794f-11c8-4168-a50e-0fccebc4958a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.797 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.806 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.807 741 DEBUG oslo.privsep.daemon [-] privsep: reply[652a1cc2-5aeb-4d01-8759-69a51bfeeac8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.808 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:51:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.817 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.818 741 DEBUG oslo.privsep.daemon [-] privsep: reply[7e3fac50-137a-4f2f-99c9-f7a95ddb0e2e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.819 741 DEBUG oslo.privsep.daemon [-] privsep: reply[2598c80b-15b8-47ab-b1f7-040bb6838bd7]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.820 2 DEBUG oslo_concurrency.processutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:51:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Oct  1 12:51:12 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.857 2 DEBUG oslo_concurrency.processutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] CMD "nvme version" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.859 2 DEBUG os_brick.initiator.connectors.lightos [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.860 2 DEBUG os_brick.initiator.connectors.lightos [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.860 2 DEBUG os_brick.initiator.connectors.lightos [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.860 2 DEBUG os_brick.utils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] <== get_connector_properties: return (78ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 12:51:12 np0005464891 nova_compute[259907]: 2025-10-01 16:51:12.861 2 DEBUG nova.virt.block_device [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Updating existing volume attachment record: 30137636-525b-4bad-9f4e-6f978ec797ee _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 12:51:12 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1380: 321 pgs: 321 active+clean; 226 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.3 MiB/s wr, 221 op/s
Oct  1 12:51:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:51:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/744441371' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:51:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:51:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/744441371' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:51:13 np0005464891 nova_compute[259907]: 2025-10-01 16:51:13.380 2 DEBUG nova.network.neutron [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Successfully created port: bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 12:51:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:51:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2037875448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:51:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Oct  1 12:51:13 np0005464891 nova_compute[259907]: 2025-10-01 16:51:13.838 2 DEBUG nova.compute.manager [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 12:51:13 np0005464891 nova_compute[259907]: 2025-10-01 16:51:13.839 2 DEBUG nova.virt.libvirt.driver [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 12:51:13 np0005464891 nova_compute[259907]: 2025-10-01 16:51:13.840 2 INFO nova.virt.libvirt.driver [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Creating image(s)#033[00m
Oct  1 12:51:13 np0005464891 nova_compute[259907]: 2025-10-01 16:51:13.840 2 DEBUG nova.virt.libvirt.driver [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  1 12:51:13 np0005464891 nova_compute[259907]: 2025-10-01 16:51:13.840 2 DEBUG nova.virt.libvirt.driver [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Ensure instance console log exists: /var/lib/nova/instances/84f412f7-074e-4bf3-b06c-ff2e47c89bcb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 12:51:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Oct  1 12:51:13 np0005464891 nova_compute[259907]: 2025-10-01 16:51:13.841 2 DEBUG oslo_concurrency.lockutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:51:13 np0005464891 nova_compute[259907]: 2025-10-01 16:51:13.841 2 DEBUG oslo_concurrency.lockutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:51:13 np0005464891 nova_compute[259907]: 2025-10-01 16:51:13.842 2 DEBUG oslo_concurrency.lockutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:51:13 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Oct  1 12:51:14 np0005464891 nova_compute[259907]: 2025-10-01 16:51:14.093 2 DEBUG nova.network.neutron [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Successfully updated port: bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 12:51:14 np0005464891 nova_compute[259907]: 2025-10-01 16:51:14.107 2 DEBUG oslo_concurrency.lockutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Acquiring lock "refresh_cache-84f412f7-074e-4bf3-b06c-ff2e47c89bcb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:51:14 np0005464891 nova_compute[259907]: 2025-10-01 16:51:14.108 2 DEBUG oslo_concurrency.lockutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Acquired lock "refresh_cache-84f412f7-074e-4bf3-b06c-ff2e47c89bcb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:51:14 np0005464891 nova_compute[259907]: 2025-10-01 16:51:14.108 2 DEBUG nova.network.neutron [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 12:51:14 np0005464891 nova_compute[259907]: 2025-10-01 16:51:14.153 2 DEBUG nova.compute.manager [req-b3c1bd59-e138-4094-8e87-c4aaa1d0770e req-59bc3f42-29dd-4cfc-93db-503ac9f3c019 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Received event network-changed-bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:51:14 np0005464891 nova_compute[259907]: 2025-10-01 16:51:14.153 2 DEBUG nova.compute.manager [req-b3c1bd59-e138-4094-8e87-c4aaa1d0770e req-59bc3f42-29dd-4cfc-93db-503ac9f3c019 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Refreshing instance network info cache due to event network-changed-bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:51:14 np0005464891 nova_compute[259907]: 2025-10-01 16:51:14.154 2 DEBUG oslo_concurrency.lockutils [req-b3c1bd59-e138-4094-8e87-c4aaa1d0770e req-59bc3f42-29dd-4cfc-93db-503ac9f3c019 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-84f412f7-074e-4bf3-b06c-ff2e47c89bcb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:51:14 np0005464891 nova_compute[259907]: 2025-10-01 16:51:14.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:14 np0005464891 nova_compute[259907]: 2025-10-01 16:51:14.217 2 DEBUG nova.network.neutron [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 12:51:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Oct  1 12:51:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Oct  1 12:51:14 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Oct  1 12:51:14 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1383: 321 pgs: 321 active+clean; 226 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 5.7 KiB/s wr, 100 op/s
Oct  1 12:51:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:51:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Oct  1 12:51:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Oct  1 12:51:15 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:51:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2501603827' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:51:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:51:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2501603827' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.486 2 DEBUG nova.network.neutron [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Updating instance_info_cache with network_info: [{"id": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "address": "fa:16:3e:a7:06:4c", "network": {"id": "c857a1f9-2cbe-44e2-8b57-649872049256", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1951088954-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e5bc249518a47fd9bc1ca87595c86c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd0e0e9e-68", "ovs_interfaceid": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.509 2 DEBUG oslo_concurrency.lockutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Releasing lock "refresh_cache-84f412f7-074e-4bf3-b06c-ff2e47c89bcb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.509 2 DEBUG nova.compute.manager [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Instance network_info: |[{"id": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "address": "fa:16:3e:a7:06:4c", "network": {"id": "c857a1f9-2cbe-44e2-8b57-649872049256", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1951088954-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e5bc249518a47fd9bc1ca87595c86c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd0e0e9e-68", "ovs_interfaceid": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.510 2 DEBUG oslo_concurrency.lockutils [req-b3c1bd59-e138-4094-8e87-c4aaa1d0770e req-59bc3f42-29dd-4cfc-93db-503ac9f3c019 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-84f412f7-074e-4bf3-b06c-ff2e47c89bcb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.510 2 DEBUG nova.network.neutron [req-b3c1bd59-e138-4094-8e87-c4aaa1d0770e req-59bc3f42-29dd-4cfc-93db-503ac9f3c019 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Refreshing network info cache for port bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.513 2 DEBUG nova.virt.libvirt.driver [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Start _get_guest_xml network_info=[{"id": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "address": "fa:16:3e:a7:06:4c", "network": {"id": "c857a1f9-2cbe-44e2-8b57-649872049256", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1951088954-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e5bc249518a47fd9bc1ca87595c86c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd0e0e9e-68", "ovs_interfaceid": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'mount_device': '/dev/vda', 'attachment_id': '30137636-525b-4bad-9f4e-6f978ec797ee', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-fd3b174c-670e-4d17-b8de-e44e78e6bcf0', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'fd3b174c-670e-4d17-b8de-e44e78e6bcf0', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '84f412f7-074e-4bf3-b06c-ff2e47c89bcb', 'attached_at': '', 'detached_at': '', 'volume_id': 'fd3b174c-670e-4d17-b8de-e44e78e6bcf0', 'serial': 'fd3b174c-670e-4d17-b8de-e44e78e6bcf0'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.518 2 WARNING nova.virt.libvirt.driver [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.523 2 DEBUG nova.virt.libvirt.host [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.524 2 DEBUG nova.virt.libvirt.host [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.530 2 DEBUG nova.virt.libvirt.host [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.530 2 DEBUG nova.virt.libvirt.host [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.531 2 DEBUG nova.virt.libvirt.driver [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.531 2 DEBUG nova.virt.hardware [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.531 2 DEBUG nova.virt.hardware [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.532 2 DEBUG nova.virt.hardware [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.532 2 DEBUG nova.virt.hardware [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.532 2 DEBUG nova.virt.hardware [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.532 2 DEBUG nova.virt.hardware [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.533 2 DEBUG nova.virt.hardware [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.533 2 DEBUG nova.virt.hardware [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.533 2 DEBUG nova.virt.hardware [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.533 2 DEBUG nova.virt.hardware [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.534 2 DEBUG nova.virt.hardware [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.558 2 DEBUG nova.storage.rbd_utils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] rbd image 84f412f7-074e-4bf3-b06c-ff2e47c89bcb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:51:16 np0005464891 nova_compute[259907]: 2025-10-01 16:51:16.561 2 DEBUG oslo_concurrency.processutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:51:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:51:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4220824438' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:51:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:51:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4220824438' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:51:16 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1385: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 226 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 5.3 KiB/s wr, 140 op/s
Oct  1 12:51:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:51:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/172464347' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.060 2 DEBUG oslo_concurrency.processutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.088 2 DEBUG nova.virt.libvirt.vif [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:51:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1327694327',display_name='tempest-TestVolumeBackupRestore-server-1327694327',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1327694327',id=12,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjAHh3bskmvZWDZw2GNJhicvkzd/a2S0/OiBeUQh9JInB3OK8Kri9Il248gAmb2dBL9aD+sn4x8t6ZEsEDbfzryxCzf1QjdeyYKwdCufvtakUmsf3b7U5SyPgzMmQ7BJg==',key_name='tempest-TestVolumeBackupRestore-1725508196',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1e5bc249518a47fd9bc1ca87595c86c7',ramdisk_id='',reservation_id='r-bnofjk18',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-293844552',owner_user_name='tempest-TestVolumeBackupRestore-293844552-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:51:12Z,user_data=None,user_id='99a779b3f1b644f590f56e3904b4c777',uuid=84f412f7-074e-4bf3-b06c-ff2e47c89bcb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "address": "fa:16:3e:a7:06:4c", "network": {"id": "c857a1f9-2cbe-44e2-8b57-649872049256", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1951088954-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e5bc249518a47fd9bc1ca87595c86c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd0e0e9e-68", "ovs_interfaceid": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.089 2 DEBUG nova.network.os_vif_util [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Converting VIF {"id": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "address": "fa:16:3e:a7:06:4c", "network": {"id": "c857a1f9-2cbe-44e2-8b57-649872049256", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1951088954-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e5bc249518a47fd9bc1ca87595c86c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd0e0e9e-68", "ovs_interfaceid": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.090 2 DEBUG nova.network.os_vif_util [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:06:4c,bridge_name='br-int',has_traffic_filtering=True,id=bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17,network=Network(c857a1f9-2cbe-44e2-8b57-649872049256),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd0e0e9e-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.091 2 DEBUG nova.objects.instance [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 84f412f7-074e-4bf3-b06c-ff2e47c89bcb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.117 2 DEBUG nova.virt.libvirt.driver [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] End _get_guest_xml xml=<domain type="kvm">
Oct  1 12:51:17 np0005464891 nova_compute[259907]:  <uuid>84f412f7-074e-4bf3-b06c-ff2e47c89bcb</uuid>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:  <name>instance-0000000c</name>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <nova:name>tempest-TestVolumeBackupRestore-server-1327694327</nova:name>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 16:51:16</nova:creationTime>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 12:51:17 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:        <nova:user uuid="99a779b3f1b644f590f56e3904b4c777">tempest-TestVolumeBackupRestore-293844552-project-member</nova:user>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:        <nova:project uuid="1e5bc249518a47fd9bc1ca87595c86c7">tempest-TestVolumeBackupRestore-293844552</nova:project>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:        <nova:port uuid="bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17">
Oct  1 12:51:17 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <system>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <entry name="serial">84f412f7-074e-4bf3-b06c-ff2e47c89bcb</entry>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <entry name="uuid">84f412f7-074e-4bf3-b06c-ff2e47c89bcb</entry>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    </system>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:  <os>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:  </clock>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/84f412f7-074e-4bf3-b06c-ff2e47c89bcb_disk.config">
Oct  1 12:51:17 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:51:17 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="volumes/volume-fd3b174c-670e-4d17-b8de-e44e78e6bcf0">
Oct  1 12:51:17 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:51:17 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <serial>fd3b174c-670e-4d17-b8de-e44e78e6bcf0</serial>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:a7:06:4c"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <target dev="tapbd0e0e9e-68"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/84f412f7-074e-4bf3-b06c-ff2e47c89bcb/console.log" append="off"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    </serial>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <video>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 12:51:17 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 12:51:17 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:51:17 np0005464891 nova_compute[259907]: </domain>
Oct  1 12:51:17 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.118 2 DEBUG nova.compute.manager [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Preparing to wait for external event network-vif-plugged-bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.119 2 DEBUG oslo_concurrency.lockutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Acquiring lock "84f412f7-074e-4bf3-b06c-ff2e47c89bcb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.119 2 DEBUG oslo_concurrency.lockutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Lock "84f412f7-074e-4bf3-b06c-ff2e47c89bcb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.120 2 DEBUG oslo_concurrency.lockutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Lock "84f412f7-074e-4bf3-b06c-ff2e47c89bcb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.121 2 DEBUG nova.virt.libvirt.vif [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:51:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1327694327',display_name='tempest-TestVolumeBackupRestore-server-1327694327',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1327694327',id=12,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjAHh3bskmvZWDZw2GNJhicvkzd/a2S0/OiBeUQh9JInB3OK8Kri9Il248gAmb2dBL9aD+sn4x8t6ZEsEDbfzryxCzf1QjdeyYKwdCufvtakUmsf3b7U5SyPgzMmQ7BJg==',key_name='tempest-TestVolumeBackupRestore-1725508196',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1e5bc249518a47fd9bc1ca87595c86c7',ramdisk_id='',reservation_id='r-bnofjk18',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-293844552',owner_user_name='tempest-TestVolumeBackupRestore-293844552-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:51:12Z,user_data=None,user_id='99a779b3f1b644f590f56e3904b4c777',uuid=84f412f7-074e-4bf3-b06c-ff2e47c89bcb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "address": "fa:16:3e:a7:06:4c", "network": {"id": "c857a1f9-2cbe-44e2-8b57-649872049256", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1951088954-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"1e5bc249518a47fd9bc1ca87595c86c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd0e0e9e-68", "ovs_interfaceid": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.121 2 DEBUG nova.network.os_vif_util [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Converting VIF {"id": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "address": "fa:16:3e:a7:06:4c", "network": {"id": "c857a1f9-2cbe-44e2-8b57-649872049256", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1951088954-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e5bc249518a47fd9bc1ca87595c86c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd0e0e9e-68", "ovs_interfaceid": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.122 2 DEBUG nova.network.os_vif_util [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:06:4c,bridge_name='br-int',has_traffic_filtering=True,id=bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17,network=Network(c857a1f9-2cbe-44e2-8b57-649872049256),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd0e0e9e-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.123 2 DEBUG os_vif [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:06:4c,bridge_name='br-int',has_traffic_filtering=True,id=bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17,network=Network(c857a1f9-2cbe-44e2-8b57-649872049256),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd0e0e9e-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.124 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.125 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.128 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd0e0e9e-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.128 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbd0e0e9e-68, col_values=(('external_ids', {'iface-id': 'bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:06:4c', 'vm-uuid': '84f412f7-074e-4bf3-b06c-ff2e47c89bcb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:17 np0005464891 NetworkManager[44940]: <info>  [1759337477.1311] manager: (tapbd0e0e9e-68): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.137 2 INFO os_vif [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:06:4c,bridge_name='br-int',has_traffic_filtering=True,id=bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17,network=Network(c857a1f9-2cbe-44e2-8b57-649872049256),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd0e0e9e-68')#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.310 2 DEBUG nova.virt.libvirt.driver [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.311 2 DEBUG nova.virt.libvirt.driver [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.311 2 DEBUG nova.virt.libvirt.driver [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] No VIF found with MAC fa:16:3e:a7:06:4c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.312 2 INFO nova.virt.libvirt.driver [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Using config drive#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.337 2 DEBUG nova.storage.rbd_utils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] rbd image 84f412f7-074e-4bf3-b06c-ff2e47c89bcb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.875 2 INFO nova.virt.libvirt.driver [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Creating config drive at /var/lib/nova/instances/84f412f7-074e-4bf3-b06c-ff2e47c89bcb/disk.config#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.880 2 DEBUG oslo_concurrency.processutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/84f412f7-074e-4bf3-b06c-ff2e47c89bcb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb5qmonzv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.968 2 DEBUG nova.network.neutron [req-b3c1bd59-e138-4094-8e87-c4aaa1d0770e req-59bc3f42-29dd-4cfc-93db-503ac9f3c019 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Updated VIF entry in instance network info cache for port bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.969 2 DEBUG nova.network.neutron [req-b3c1bd59-e138-4094-8e87-c4aaa1d0770e req-59bc3f42-29dd-4cfc-93db-503ac9f3c019 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Updating instance_info_cache with network_info: [{"id": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "address": "fa:16:3e:a7:06:4c", "network": {"id": "c857a1f9-2cbe-44e2-8b57-649872049256", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1951088954-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e5bc249518a47fd9bc1ca87595c86c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd0e0e9e-68", "ovs_interfaceid": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:51:17 np0005464891 nova_compute[259907]: 2025-10-01 16:51:17.984 2 DEBUG oslo_concurrency.lockutils [req-b3c1bd59-e138-4094-8e87-c4aaa1d0770e req-59bc3f42-29dd-4cfc-93db-503ac9f3c019 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-84f412f7-074e-4bf3-b06c-ff2e47c89bcb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:51:18 np0005464891 nova_compute[259907]: 2025-10-01 16:51:18.010 2 DEBUG oslo_concurrency.processutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/84f412f7-074e-4bf3-b06c-ff2e47c89bcb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb5qmonzv" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:51:18 np0005464891 nova_compute[259907]: 2025-10-01 16:51:18.036 2 DEBUG nova.storage.rbd_utils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] rbd image 84f412f7-074e-4bf3-b06c-ff2e47c89bcb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:51:18 np0005464891 nova_compute[259907]: 2025-10-01 16:51:18.039 2 DEBUG oslo_concurrency.processutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/84f412f7-074e-4bf3-b06c-ff2e47c89bcb/disk.config 84f412f7-074e-4bf3-b06c-ff2e47c89bcb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:51:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:51:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4129131777' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:51:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:51:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4129131777' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:51:18 np0005464891 nova_compute[259907]: 2025-10-01 16:51:18.184 2 DEBUG oslo_concurrency.processutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/84f412f7-074e-4bf3-b06c-ff2e47c89bcb/disk.config 84f412f7-074e-4bf3-b06c-ff2e47c89bcb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:51:18 np0005464891 nova_compute[259907]: 2025-10-01 16:51:18.185 2 INFO nova.virt.libvirt.driver [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Deleting local config drive /var/lib/nova/instances/84f412f7-074e-4bf3-b06c-ff2e47c89bcb/disk.config because it was imported into RBD.#033[00m
Oct  1 12:51:18 np0005464891 kernel: tapbd0e0e9e-68: entered promiscuous mode
Oct  1 12:51:18 np0005464891 NetworkManager[44940]: <info>  [1759337478.2400] manager: (tapbd0e0e9e-68): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Oct  1 12:51:18 np0005464891 ovn_controller[152409]: 2025-10-01T16:51:18Z|00113|binding|INFO|Claiming lport bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 for this chassis.
Oct  1 12:51:18 np0005464891 ovn_controller[152409]: 2025-10-01T16:51:18Z|00114|binding|INFO|bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17: Claiming fa:16:3e:a7:06:4c 10.100.0.7
Oct  1 12:51:18 np0005464891 nova_compute[259907]: 2025-10-01 16:51:18.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.253 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:06:4c 10.100.0.7'], port_security=['fa:16:3e:a7:06:4c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '84f412f7-074e-4bf3-b06c-ff2e47c89bcb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c857a1f9-2cbe-44e2-8b57-649872049256', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e5bc249518a47fd9bc1ca87595c86c7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '35afd31e-9752-4e03-a4e9-f1d2f21d05e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88cf185d-0539-4a5f-bbd5-b5dab7791505, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.254 162546 INFO neutron.agent.ovn.metadata.agent [-] Port bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 in datapath c857a1f9-2cbe-44e2-8b57-649872049256 bound to our chassis#033[00m
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.256 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c857a1f9-2cbe-44e2-8b57-649872049256#033[00m
Oct  1 12:51:18 np0005464891 ovn_controller[152409]: 2025-10-01T16:51:18Z|00115|binding|INFO|Setting lport bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 ovn-installed in OVS
Oct  1 12:51:18 np0005464891 ovn_controller[152409]: 2025-10-01T16:51:18Z|00116|binding|INFO|Setting lport bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 up in Southbound
Oct  1 12:51:18 np0005464891 nova_compute[259907]: 2025-10-01 16:51:18.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:18 np0005464891 nova_compute[259907]: 2025-10-01 16:51:18.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.270 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[5e9d4747-01e2-4bac-aa32-9d74702dcea4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.270 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc857a1f9-21 in ovnmeta-c857a1f9-2cbe-44e2-8b57-649872049256 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.273 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc857a1f9-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.274 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[fc63bc26-d7fc-4cdc-90b7-a78bb1900d87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.275 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[f2935c96-9f3f-49d0-9806-90b95cbf981a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:18 np0005464891 systemd-machined[214891]: New machine qemu-12-instance-0000000c.
Oct  1 12:51:18 np0005464891 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.288 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[e658a7a7-ec89-4217-a1cb-a9e3d0437e24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:18 np0005464891 systemd-udevd[286123]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:51:18 np0005464891 NetworkManager[44940]: <info>  [1759337478.3102] device (tapbd0e0e9e-68): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:51:18 np0005464891 NetworkManager[44940]: <info>  [1759337478.3114] device (tapbd0e0e9e-68): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.314 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[516489c5-a2c6-4564-b806-1db38be60d9d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.347 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[f4fc7588-5f44-4799-92fd-bc3f97873e47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.353 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[53b1e974-5434-4de0-9450-f4fa5bdbaae5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:18 np0005464891 NetworkManager[44940]: <info>  [1759337478.3540] manager: (tapc857a1f9-20): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Oct  1 12:51:18 np0005464891 systemd-udevd[286126]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.382 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[3fba6cbf-5170-449a-9a4a-2cef2650055c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.385 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[862359e1-5208-41ca-a2c1-55b2964aa9f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:18 np0005464891 NetworkManager[44940]: <info>  [1759337478.4112] device (tapc857a1f9-20): carrier: link connected
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.422 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6ad4a4-a422-4d19-a4f2-88053b1b7b96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.441 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[1c294eda-a78e-4a76-947e-05932afae923]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc857a1f9-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:4f:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 44], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442698, 'reachable_time': 20877, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286155, 'error': None, 'target': 'ovnmeta-c857a1f9-2cbe-44e2-8b57-649872049256', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:18 np0005464891 nova_compute[259907]: 2025-10-01 16:51:18.451 2 DEBUG nova.compute.manager [req-a9e84cd7-31f5-4a78-8bfe-63de08202a2a req-7c597ebb-785c-436d-894e-67bb7983e2b6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Received event network-vif-plugged-bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:51:18 np0005464891 nova_compute[259907]: 2025-10-01 16:51:18.451 2 DEBUG oslo_concurrency.lockutils [req-a9e84cd7-31f5-4a78-8bfe-63de08202a2a req-7c597ebb-785c-436d-894e-67bb7983e2b6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "84f412f7-074e-4bf3-b06c-ff2e47c89bcb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:51:18 np0005464891 nova_compute[259907]: 2025-10-01 16:51:18.451 2 DEBUG oslo_concurrency.lockutils [req-a9e84cd7-31f5-4a78-8bfe-63de08202a2a req-7c597ebb-785c-436d-894e-67bb7983e2b6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "84f412f7-074e-4bf3-b06c-ff2e47c89bcb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:51:18 np0005464891 nova_compute[259907]: 2025-10-01 16:51:18.452 2 DEBUG oslo_concurrency.lockutils [req-a9e84cd7-31f5-4a78-8bfe-63de08202a2a req-7c597ebb-785c-436d-894e-67bb7983e2b6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "84f412f7-074e-4bf3-b06c-ff2e47c89bcb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:51:18 np0005464891 nova_compute[259907]: 2025-10-01 16:51:18.452 2 DEBUG nova.compute.manager [req-a9e84cd7-31f5-4a78-8bfe-63de08202a2a req-7c597ebb-785c-436d-894e-67bb7983e2b6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Processing event network-vif-plugged-bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.460 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[6c7188d4-1ddd-4bee-b0ee-e90bb57bcb93]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:4f98'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442698, 'tstamp': 442698}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286156, 'error': None, 'target': 'ovnmeta-c857a1f9-2cbe-44e2-8b57-649872049256', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.478 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[1abbad2e-80e4-4fa9-b8f1-0d0693ac1f0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc857a1f9-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:4f:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 44], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442698, 'reachable_time': 20877, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286157, 'error': None, 'target': 'ovnmeta-c857a1f9-2cbe-44e2-8b57-649872049256', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.509 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[0666d09b-a910-4f06-b954-a4783423212f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.563 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[daf31e19-4f5d-4f01-ac93-84eeafea6551]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.564 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc857a1f9-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.564 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.565 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc857a1f9-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:51:18 np0005464891 nova_compute[259907]: 2025-10-01 16:51:18.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:18 np0005464891 NetworkManager[44940]: <info>  [1759337478.5680] manager: (tapc857a1f9-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Oct  1 12:51:18 np0005464891 kernel: tapc857a1f9-20: entered promiscuous mode
Oct  1 12:51:18 np0005464891 nova_compute[259907]: 2025-10-01 16:51:18.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.570 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc857a1f9-20, col_values=(('external_ids', {'iface-id': '68dd5ec3-0a7d-4ed7-a1eb-999ea2f2ec94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:51:18 np0005464891 nova_compute[259907]: 2025-10-01 16:51:18.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:18 np0005464891 ovn_controller[152409]: 2025-10-01T16:51:18Z|00117|binding|INFO|Releasing lport 68dd5ec3-0a7d-4ed7-a1eb-999ea2f2ec94 from this chassis (sb_readonly=0)
Oct  1 12:51:18 np0005464891 nova_compute[259907]: 2025-10-01 16:51:18.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.600 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c857a1f9-2cbe-44e2-8b57-649872049256.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c857a1f9-2cbe-44e2-8b57-649872049256.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.600 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[9cd4499f-9a5e-41db-9ea2-d3422d1c2cb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.601 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-c857a1f9-2cbe-44e2-8b57-649872049256
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/c857a1f9-2cbe-44e2-8b57-649872049256.pid.haproxy
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID c857a1f9-2cbe-44e2-8b57-649872049256
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 12:51:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:18.602 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c857a1f9-2cbe-44e2-8b57-649872049256', 'env', 'PROCESS_TAG=haproxy-c857a1f9-2cbe-44e2-8b57-649872049256', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c857a1f9-2cbe-44e2-8b57-649872049256.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 12:51:18 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1386: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 226 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 6.0 KiB/s wr, 159 op/s
Oct  1 12:51:19 np0005464891 podman[286231]: 2025-10-01 16:51:19.009139424 +0000 UTC m=+0.073017104 container create a400efb0d2e66bd48dcc21b830c735fc4c19b29f2636541f883f7c25a02c0b3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c857a1f9-2cbe-44e2-8b57-649872049256, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:51:19 np0005464891 systemd[1]: Started libpod-conmon-a400efb0d2e66bd48dcc21b830c735fc4c19b29f2636541f883f7c25a02c0b3f.scope.
Oct  1 12:51:19 np0005464891 podman[286231]: 2025-10-01 16:51:18.97452839 +0000 UTC m=+0.038406110 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 12:51:19 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:51:19 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfcc1063f0925af29b211042d57fc56093a1abd623b7b5962c713738da70b3d6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 12:51:19 np0005464891 podman[286231]: 2025-10-01 16:51:19.095618137 +0000 UTC m=+0.159495867 container init a400efb0d2e66bd48dcc21b830c735fc4c19b29f2636541f883f7c25a02c0b3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c857a1f9-2cbe-44e2-8b57-649872049256, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:51:19 np0005464891 podman[286231]: 2025-10-01 16:51:19.101186691 +0000 UTC m=+0.165064401 container start a400efb0d2e66bd48dcc21b830c735fc4c19b29f2636541f883f7c25a02c0b3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c857a1f9-2cbe-44e2-8b57-649872049256, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3)
Oct  1 12:51:19 np0005464891 neutron-haproxy-ovnmeta-c857a1f9-2cbe-44e2-8b57-649872049256[286247]: [NOTICE]   (286251) : New worker (286253) forked
Oct  1 12:51:19 np0005464891 neutron-haproxy-ovnmeta-c857a1f9-2cbe-44e2-8b57-649872049256[286247]: [NOTICE]   (286251) : Loading success.
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.269 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337479.26841, 84f412f7-074e-4bf3-b06c-ff2e47c89bcb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.270 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] VM Started (Lifecycle Event)#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.272 2 DEBUG nova.compute.manager [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.277 2 DEBUG nova.virt.libvirt.driver [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.281 2 INFO nova.virt.libvirt.driver [-] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Instance spawned successfully.#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.281 2 DEBUG nova.virt.libvirt.driver [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.312 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.323 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.329 2 DEBUG nova.virt.libvirt.driver [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.330 2 DEBUG nova.virt.libvirt.driver [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.331 2 DEBUG nova.virt.libvirt.driver [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.331 2 DEBUG nova.virt.libvirt.driver [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.332 2 DEBUG nova.virt.libvirt.driver [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.333 2 DEBUG nova.virt.libvirt.driver [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.350 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.351 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337479.2696228, 84f412f7-074e-4bf3-b06c-ff2e47c89bcb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.352 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] VM Paused (Lifecycle Event)#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.402 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.407 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337479.2746797, 84f412f7-074e-4bf3-b06c-ff2e47c89bcb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.407 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] VM Resumed (Lifecycle Event)#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.417 2 INFO nova.compute.manager [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Took 5.58 seconds to spawn the instance on the hypervisor.#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.418 2 DEBUG nova.compute.manager [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.428 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.433 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.463 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.491 2 INFO nova.compute.manager [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Took 7.75 seconds to build instance.#033[00m
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.512 2 DEBUG oslo_concurrency.lockutils [None req-acd11927-5429-43a5-9095-00c8f26c16aa 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Lock "84f412f7-074e-4bf3-b06c-ff2e47c89bcb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:51:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:51:19 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3310657281' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:51:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:51:19 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3310657281' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:51:19 np0005464891 ovn_controller[152409]: 2025-10-01T16:51:19Z|00118|binding|INFO|Releasing lport 68dd5ec3-0a7d-4ed7-a1eb-999ea2f2ec94 from this chassis (sb_readonly=0)
Oct  1 12:51:19 np0005464891 nova_compute[259907]: 2025-10-01 16:51:19.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:20 np0005464891 ovn_controller[152409]: 2025-10-01T16:51:20Z|00119|binding|INFO|Releasing lport 68dd5ec3-0a7d-4ed7-a1eb-999ea2f2ec94 from this chassis (sb_readonly=0)
Oct  1 12:51:20 np0005464891 nova_compute[259907]: 2025-10-01 16:51:20.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:20 np0005464891 nova_compute[259907]: 2025-10-01 16:51:20.548 2 DEBUG nova.compute.manager [req-743db1d9-5598-4fed-96f4-36cb6df76ad6 req-43549f90-5e8d-476a-9182-8774ffc9e2fc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Received event network-vif-plugged-bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:51:20 np0005464891 nova_compute[259907]: 2025-10-01 16:51:20.549 2 DEBUG oslo_concurrency.lockutils [req-743db1d9-5598-4fed-96f4-36cb6df76ad6 req-43549f90-5e8d-476a-9182-8774ffc9e2fc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "84f412f7-074e-4bf3-b06c-ff2e47c89bcb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:51:20 np0005464891 nova_compute[259907]: 2025-10-01 16:51:20.549 2 DEBUG oslo_concurrency.lockutils [req-743db1d9-5598-4fed-96f4-36cb6df76ad6 req-43549f90-5e8d-476a-9182-8774ffc9e2fc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "84f412f7-074e-4bf3-b06c-ff2e47c89bcb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:51:20 np0005464891 nova_compute[259907]: 2025-10-01 16:51:20.550 2 DEBUG oslo_concurrency.lockutils [req-743db1d9-5598-4fed-96f4-36cb6df76ad6 req-43549f90-5e8d-476a-9182-8774ffc9e2fc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "84f412f7-074e-4bf3-b06c-ff2e47c89bcb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:51:20 np0005464891 nova_compute[259907]: 2025-10-01 16:51:20.550 2 DEBUG nova.compute.manager [req-743db1d9-5598-4fed-96f4-36cb6df76ad6 req-43549f90-5e8d-476a-9182-8774ffc9e2fc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] No waiting events found dispatching network-vif-plugged-bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:51:20 np0005464891 nova_compute[259907]: 2025-10-01 16:51:20.551 2 WARNING nova.compute.manager [req-743db1d9-5598-4fed-96f4-36cb6df76ad6 req-43549f90-5e8d-476a-9182-8774ffc9e2fc af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Received unexpected event network-vif-plugged-bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 for instance with vm_state active and task_state None.#033[00m
Oct  1 12:51:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:51:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Oct  1 12:51:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Oct  1 12:51:20 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Oct  1 12:51:20 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1388: 321 pgs: 321 active+clean; 227 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 786 KiB/s rd, 33 KiB/s wr, 251 op/s
Oct  1 12:51:21 np0005464891 nova_compute[259907]: 2025-10-01 16:51:21.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:51:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:51:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:51:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:51:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Oct  1 12:51:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:51:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0010391357564954093 of space, bias 1.0, pg target 0.3117407269486228 quantized to 32 (current 32)
Oct  1 12:51:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:51:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0003461242226671876 of space, bias 1.0, pg target 0.10383726680015627 quantized to 32 (current 32)
Oct  1 12:51:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:51:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  1 12:51:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:51:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:51:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:51:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:51:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:51:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:51:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:51:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:51:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:51:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:51:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:51:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:51:22 np0005464891 nova_compute[259907]: 2025-10-01 16:51:22.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:22 np0005464891 podman[286263]: 2025-10-01 16:51:22.967675777 +0000 UTC m=+0.074899755 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct  1 12:51:22 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1389: 321 pgs: 321 active+clean; 227 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 25 KiB/s wr, 246 op/s
Oct  1 12:51:23 np0005464891 nova_compute[259907]: 2025-10-01 16:51:23.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:23 np0005464891 NetworkManager[44940]: <info>  [1759337483.0838] manager: (patch-br-int-to-provnet-ee30e212-e482-494e-8716-bb3c2ef00bd5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Oct  1 12:51:23 np0005464891 NetworkManager[44940]: <info>  [1759337483.0852] manager: (patch-provnet-ee30e212-e482-494e-8716-bb3c2ef00bd5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Oct  1 12:51:23 np0005464891 nova_compute[259907]: 2025-10-01 16:51:23.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:23 np0005464891 ovn_controller[152409]: 2025-10-01T16:51:23Z|00120|binding|INFO|Releasing lport 68dd5ec3-0a7d-4ed7-a1eb-999ea2f2ec94 from this chassis (sb_readonly=0)
Oct  1 12:51:23 np0005464891 nova_compute[259907]: 2025-10-01 16:51:23.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:23 np0005464891 nova_compute[259907]: 2025-10-01 16:51:23.750 2 DEBUG nova.compute.manager [req-e0e83ccc-8d46-49f2-9538-1b46a19455b7 req-884e7e7a-f048-45fd-abe0-680a9bf773d9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Received event network-changed-bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:51:23 np0005464891 nova_compute[259907]: 2025-10-01 16:51:23.751 2 DEBUG nova.compute.manager [req-e0e83ccc-8d46-49f2-9538-1b46a19455b7 req-884e7e7a-f048-45fd-abe0-680a9bf773d9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Refreshing instance network info cache due to event network-changed-bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:51:23 np0005464891 nova_compute[259907]: 2025-10-01 16:51:23.752 2 DEBUG oslo_concurrency.lockutils [req-e0e83ccc-8d46-49f2-9538-1b46a19455b7 req-884e7e7a-f048-45fd-abe0-680a9bf773d9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-84f412f7-074e-4bf3-b06c-ff2e47c89bcb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:51:23 np0005464891 nova_compute[259907]: 2025-10-01 16:51:23.752 2 DEBUG oslo_concurrency.lockutils [req-e0e83ccc-8d46-49f2-9538-1b46a19455b7 req-884e7e7a-f048-45fd-abe0-680a9bf773d9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-84f412f7-074e-4bf3-b06c-ff2e47c89bcb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:51:23 np0005464891 nova_compute[259907]: 2025-10-01 16:51:23.752 2 DEBUG nova.network.neutron [req-e0e83ccc-8d46-49f2-9538-1b46a19455b7 req-884e7e7a-f048-45fd-abe0-680a9bf773d9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Refreshing network info cache for port bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:51:24 np0005464891 nova_compute[259907]: 2025-10-01 16:51:24.161 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759337469.160528, dc697861-16c7-4baa-8c59-84deb0c0b65c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:51:24 np0005464891 nova_compute[259907]: 2025-10-01 16:51:24.162 2 INFO nova.compute.manager [-] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] VM Stopped (Lifecycle Event)#033[00m
Oct  1 12:51:24 np0005464891 nova_compute[259907]: 2025-10-01 16:51:24.194 2 DEBUG nova.compute.manager [None req-6d909ab9-7403-4c07-b5be-6e607e58c377 - - - - - -] [instance: dc697861-16c7-4baa-8c59-84deb0c0b65c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:51:24 np0005464891 nova_compute[259907]: 2025-10-01 16:51:24.818 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:51:24 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1390: 321 pgs: 321 active+clean; 227 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 20 KiB/s wr, 174 op/s
Oct  1 12:51:25 np0005464891 nova_compute[259907]: 2025-10-01 16:51:25.556 2 DEBUG nova.network.neutron [req-e0e83ccc-8d46-49f2-9538-1b46a19455b7 req-884e7e7a-f048-45fd-abe0-680a9bf773d9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Updated VIF entry in instance network info cache for port bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:51:25 np0005464891 nova_compute[259907]: 2025-10-01 16:51:25.557 2 DEBUG nova.network.neutron [req-e0e83ccc-8d46-49f2-9538-1b46a19455b7 req-884e7e7a-f048-45fd-abe0-680a9bf773d9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Updating instance_info_cache with network_info: [{"id": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "address": "fa:16:3e:a7:06:4c", "network": {"id": "c857a1f9-2cbe-44e2-8b57-649872049256", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1951088954-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e5bc249518a47fd9bc1ca87595c86c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd0e0e9e-68", "ovs_interfaceid": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:51:25 np0005464891 nova_compute[259907]: 2025-10-01 16:51:25.586 2 DEBUG oslo_concurrency.lockutils [req-e0e83ccc-8d46-49f2-9538-1b46a19455b7 req-884e7e7a-f048-45fd-abe0-680a9bf773d9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-84f412f7-074e-4bf3-b06c-ff2e47c89bcb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:51:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:51:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Oct  1 12:51:25 np0005464891 nova_compute[259907]: 2025-10-01 16:51:25.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:51:25 np0005464891 nova_compute[259907]: 2025-10-01 16:51:25.804 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 12:51:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Oct  1 12:51:25 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Oct  1 12:51:25 np0005464891 nova_compute[259907]: 2025-10-01 16:51:25.837 2 DEBUG nova.compute.manager [req-fc6d6c03-0e0a-4d82-b3f5-5b8e5e7ef1bd req-3ec32e21-d600-47f7-a6aa-bb4ccf908855 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Received event network-changed-bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:51:25 np0005464891 nova_compute[259907]: 2025-10-01 16:51:25.838 2 DEBUG nova.compute.manager [req-fc6d6c03-0e0a-4d82-b3f5-5b8e5e7ef1bd req-3ec32e21-d600-47f7-a6aa-bb4ccf908855 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Refreshing instance network info cache due to event network-changed-bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:51:25 np0005464891 nova_compute[259907]: 2025-10-01 16:51:25.838 2 DEBUG oslo_concurrency.lockutils [req-fc6d6c03-0e0a-4d82-b3f5-5b8e5e7ef1bd req-3ec32e21-d600-47f7-a6aa-bb4ccf908855 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-84f412f7-074e-4bf3-b06c-ff2e47c89bcb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:51:25 np0005464891 nova_compute[259907]: 2025-10-01 16:51:25.838 2 DEBUG oslo_concurrency.lockutils [req-fc6d6c03-0e0a-4d82-b3f5-5b8e5e7ef1bd req-3ec32e21-d600-47f7-a6aa-bb4ccf908855 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-84f412f7-074e-4bf3-b06c-ff2e47c89bcb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:51:25 np0005464891 nova_compute[259907]: 2025-10-01 16:51:25.839 2 DEBUG nova.network.neutron [req-fc6d6c03-0e0a-4d82-b3f5-5b8e5e7ef1bd req-3ec32e21-d600-47f7-a6aa-bb4ccf908855 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Refreshing network info cache for port bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:51:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:51:26 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3341966226' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:51:26 np0005464891 nova_compute[259907]: 2025-10-01 16:51:26.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:26 np0005464891 nova_compute[259907]: 2025-10-01 16:51:26.828 2 DEBUG nova.network.neutron [req-fc6d6c03-0e0a-4d82-b3f5-5b8e5e7ef1bd req-3ec32e21-d600-47f7-a6aa-bb4ccf908855 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Updated VIF entry in instance network info cache for port bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:51:26 np0005464891 nova_compute[259907]: 2025-10-01 16:51:26.829 2 DEBUG nova.network.neutron [req-fc6d6c03-0e0a-4d82-b3f5-5b8e5e7ef1bd req-3ec32e21-d600-47f7-a6aa-bb4ccf908855 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Updating instance_info_cache with network_info: [{"id": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "address": "fa:16:3e:a7:06:4c", "network": {"id": "c857a1f9-2cbe-44e2-8b57-649872049256", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1951088954-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e5bc249518a47fd9bc1ca87595c86c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd0e0e9e-68", "ovs_interfaceid": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:51:26 np0005464891 nova_compute[259907]: 2025-10-01 16:51:26.844 2 DEBUG oslo_concurrency.lockutils [req-fc6d6c03-0e0a-4d82-b3f5-5b8e5e7ef1bd req-3ec32e21-d600-47f7-a6aa-bb4ccf908855 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-84f412f7-074e-4bf3-b06c-ff2e47c89bcb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:51:26 np0005464891 nova_compute[259907]: 2025-10-01 16:51:26.845 2 DEBUG nova.compute.manager [req-fc6d6c03-0e0a-4d82-b3f5-5b8e5e7ef1bd req-3ec32e21-d600-47f7-a6aa-bb4ccf908855 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Received event network-changed-bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:51:26 np0005464891 nova_compute[259907]: 2025-10-01 16:51:26.846 2 DEBUG nova.compute.manager [req-fc6d6c03-0e0a-4d82-b3f5-5b8e5e7ef1bd req-3ec32e21-d600-47f7-a6aa-bb4ccf908855 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Refreshing instance network info cache due to event network-changed-bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:51:26 np0005464891 nova_compute[259907]: 2025-10-01 16:51:26.846 2 DEBUG oslo_concurrency.lockutils [req-fc6d6c03-0e0a-4d82-b3f5-5b8e5e7ef1bd req-3ec32e21-d600-47f7-a6aa-bb4ccf908855 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-84f412f7-074e-4bf3-b06c-ff2e47c89bcb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:51:26 np0005464891 nova_compute[259907]: 2025-10-01 16:51:26.846 2 DEBUG oslo_concurrency.lockutils [req-fc6d6c03-0e0a-4d82-b3f5-5b8e5e7ef1bd req-3ec32e21-d600-47f7-a6aa-bb4ccf908855 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-84f412f7-074e-4bf3-b06c-ff2e47c89bcb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:51:26 np0005464891 nova_compute[259907]: 2025-10-01 16:51:26.846 2 DEBUG nova.network.neutron [req-fc6d6c03-0e0a-4d82-b3f5-5b8e5e7ef1bd req-3ec32e21-d600-47f7-a6aa-bb4ccf908855 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Refreshing network info cache for port bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:51:26 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1392: 321 pgs: 321 active+clean; 227 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 21 KiB/s wr, 176 op/s
Oct  1 12:51:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Oct  1 12:51:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Oct  1 12:51:27 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Oct  1 12:51:27 np0005464891 nova_compute[259907]: 2025-10-01 16:51:27.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:27 np0005464891 nova_compute[259907]: 2025-10-01 16:51:27.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:51:27 np0005464891 nova_compute[259907]: 2025-10-01 16:51:27.805 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 12:51:27 np0005464891 nova_compute[259907]: 2025-10-01 16:51:27.806 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 12:51:27 np0005464891 nova_compute[259907]: 2025-10-01 16:51:27.941 2 DEBUG nova.network.neutron [req-fc6d6c03-0e0a-4d82-b3f5-5b8e5e7ef1bd req-3ec32e21-d600-47f7-a6aa-bb4ccf908855 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Updated VIF entry in instance network info cache for port bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:51:27 np0005464891 nova_compute[259907]: 2025-10-01 16:51:27.942 2 DEBUG nova.network.neutron [req-fc6d6c03-0e0a-4d82-b3f5-5b8e5e7ef1bd req-3ec32e21-d600-47f7-a6aa-bb4ccf908855 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Updating instance_info_cache with network_info: [{"id": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "address": "fa:16:3e:a7:06:4c", "network": {"id": "c857a1f9-2cbe-44e2-8b57-649872049256", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1951088954-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e5bc249518a47fd9bc1ca87595c86c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd0e0e9e-68", "ovs_interfaceid": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:51:27 np0005464891 nova_compute[259907]: 2025-10-01 16:51:27.964 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "refresh_cache-84f412f7-074e-4bf3-b06c-ff2e47c89bcb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:51:27 np0005464891 nova_compute[259907]: 2025-10-01 16:51:27.966 2 DEBUG oslo_concurrency.lockutils [req-fc6d6c03-0e0a-4d82-b3f5-5b8e5e7ef1bd req-3ec32e21-d600-47f7-a6aa-bb4ccf908855 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-84f412f7-074e-4bf3-b06c-ff2e47c89bcb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:51:27 np0005464891 nova_compute[259907]: 2025-10-01 16:51:27.967 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquired lock "refresh_cache-84f412f7-074e-4bf3-b06c-ff2e47c89bcb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:51:27 np0005464891 nova_compute[259907]: 2025-10-01 16:51:27.967 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  1 12:51:27 np0005464891 nova_compute[259907]: 2025-10-01 16:51:27.967 2 DEBUG nova.objects.instance [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 84f412f7-074e-4bf3-b06c-ff2e47c89bcb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:51:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Oct  1 12:51:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Oct  1 12:51:28 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Oct  1 12:51:28 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1395: 321 pgs: 321 active+clean; 227 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 KiB/s wr, 72 op/s
Oct  1 12:51:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Oct  1 12:51:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Oct  1 12:51:29 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Oct  1 12:51:29 np0005464891 nova_compute[259907]: 2025-10-01 16:51:29.312 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Updating instance_info_cache with network_info: [{"id": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "address": "fa:16:3e:a7:06:4c", "network": {"id": "c857a1f9-2cbe-44e2-8b57-649872049256", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1951088954-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e5bc249518a47fd9bc1ca87595c86c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd0e0e9e-68", "ovs_interfaceid": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:51:29 np0005464891 nova_compute[259907]: 2025-10-01 16:51:29.328 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Releasing lock "refresh_cache-84f412f7-074e-4bf3-b06c-ff2e47c89bcb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:51:29 np0005464891 nova_compute[259907]: 2025-10-01 16:51:29.328 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  1 12:51:29 np0005464891 nova_compute[259907]: 2025-10-01 16:51:29.328 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:51:29 np0005464891 nova_compute[259907]: 2025-10-01 16:51:29.329 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:51:29 np0005464891 nova_compute[259907]: 2025-10-01 16:51:29.346 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:51:29 np0005464891 nova_compute[259907]: 2025-10-01 16:51:29.347 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:51:29 np0005464891 nova_compute[259907]: 2025-10-01 16:51:29.347 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:51:29 np0005464891 nova_compute[259907]: 2025-10-01 16:51:29.347 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 12:51:29 np0005464891 nova_compute[259907]: 2025-10-01 16:51:29.347 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:51:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:51:29 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3111648843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:51:29 np0005464891 nova_compute[259907]: 2025-10-01 16:51:29.758 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:51:29 np0005464891 nova_compute[259907]: 2025-10-01 16:51:29.827 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:51:29 np0005464891 nova_compute[259907]: 2025-10-01 16:51:29.828 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:51:30 np0005464891 nova_compute[259907]: 2025-10-01 16:51:30.010 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:51:30 np0005464891 nova_compute[259907]: 2025-10-01 16:51:30.011 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4357MB free_disk=59.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 12:51:30 np0005464891 nova_compute[259907]: 2025-10-01 16:51:30.011 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:51:30 np0005464891 nova_compute[259907]: 2025-10-01 16:51:30.012 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:51:30 np0005464891 nova_compute[259907]: 2025-10-01 16:51:30.075 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance 84f412f7-074e-4bf3-b06c-ff2e47c89bcb actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 12:51:30 np0005464891 nova_compute[259907]: 2025-10-01 16:51:30.075 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 12:51:30 np0005464891 nova_compute[259907]: 2025-10-01 16:51:30.075 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 12:51:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Oct  1 12:51:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Oct  1 12:51:30 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Oct  1 12:51:30 np0005464891 nova_compute[259907]: 2025-10-01 16:51:30.114 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:51:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:51:30 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2618720964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:51:30 np0005464891 nova_compute[259907]: 2025-10-01 16:51:30.622 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:51:30 np0005464891 nova_compute[259907]: 2025-10-01 16:51:30.629 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:51:30 np0005464891 nova_compute[259907]: 2025-10-01 16:51:30.652 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:51:30 np0005464891 nova_compute[259907]: 2025-10-01 16:51:30.681 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 12:51:30 np0005464891 nova_compute[259907]: 2025-10-01 16:51:30.682 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:51:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:51:30 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1398: 321 pgs: 321 active+clean; 227 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 5.7 KiB/s wr, 72 op/s
Oct  1 12:51:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Oct  1 12:51:31 np0005464891 nova_compute[259907]: 2025-10-01 16:51:31.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Oct  1 12:51:31 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Oct  1 12:51:31 np0005464891 nova_compute[259907]: 2025-10-01 16:51:31.158 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:51:31 np0005464891 nova_compute[259907]: 2025-10-01 16:51:31.158 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:51:31 np0005464891 nova_compute[259907]: 2025-10-01 16:51:31.800 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:51:31 np0005464891 nova_compute[259907]: 2025-10-01 16:51:31.819 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:51:32 np0005464891 ovn_controller[152409]: 2025-10-01T16:51:32Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a7:06:4c 10.100.0.7
Oct  1 12:51:32 np0005464891 ovn_controller[152409]: 2025-10-01T16:51:32Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a7:06:4c 10.100.0.7
Oct  1 12:51:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Oct  1 12:51:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Oct  1 12:51:32 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Oct  1 12:51:32 np0005464891 nova_compute[259907]: 2025-10-01 16:51:32.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:51:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/295885070' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:51:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:51:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/295885070' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:51:32 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1401: 321 pgs: 321 active+clean; 238 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 375 KiB/s rd, 2.0 MiB/s wr, 147 op/s
Oct  1 12:51:32 np0005464891 podman[286329]: 2025-10-01 16:51:32.992577493 +0000 UTC m=+0.094673441 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  1 12:51:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:51:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4058574069' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:51:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:51:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4058574069' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:51:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:51:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4101914198' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:51:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:51:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4101914198' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:51:34 np0005464891 nova_compute[259907]: 2025-10-01 16:51:34.803 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:51:34 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1402: 321 pgs: 321 active+clean; 253 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 467 KiB/s rd, 3.3 MiB/s wr, 197 op/s
Oct  1 12:51:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:51:36 np0005464891 nova_compute[259907]: 2025-10-01 16:51:36.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:51:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1221352361' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:51:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:51:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1221352361' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:51:36 np0005464891 podman[286356]: 2025-10-01 16:51:36.986539502 +0000 UTC m=+0.093842118 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd)
Oct  1 12:51:36 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1403: 321 pgs: 321 active+clean; 269 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 640 KiB/s rd, 3.7 MiB/s wr, 229 op/s
Oct  1 12:51:37 np0005464891 nova_compute[259907]: 2025-10-01 16:51:37.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.603 2 DEBUG nova.compute.manager [req-92ddecfa-ae31-4d6f-af49-b5964aca7da4 req-594695b2-d18a-46cd-8f1f-88c66f88e100 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Received event network-changed-bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.604 2 DEBUG nova.compute.manager [req-92ddecfa-ae31-4d6f-af49-b5964aca7da4 req-594695b2-d18a-46cd-8f1f-88c66f88e100 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Refreshing instance network info cache due to event network-changed-bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.604 2 DEBUG oslo_concurrency.lockutils [req-92ddecfa-ae31-4d6f-af49-b5964aca7da4 req-594695b2-d18a-46cd-8f1f-88c66f88e100 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-84f412f7-074e-4bf3-b06c-ff2e47c89bcb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.604 2 DEBUG oslo_concurrency.lockutils [req-92ddecfa-ae31-4d6f-af49-b5964aca7da4 req-594695b2-d18a-46cd-8f1f-88c66f88e100 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-84f412f7-074e-4bf3-b06c-ff2e47c89bcb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.604 2 DEBUG nova.network.neutron [req-92ddecfa-ae31-4d6f-af49-b5964aca7da4 req-594695b2-d18a-46cd-8f1f-88c66f88e100 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Refreshing network info cache for port bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:51:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:51:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1235372505' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:51:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:51:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1235372505' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.723 2 DEBUG oslo_concurrency.lockutils [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Acquiring lock "84f412f7-074e-4bf3-b06c-ff2e47c89bcb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.723 2 DEBUG oslo_concurrency.lockutils [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Lock "84f412f7-074e-4bf3-b06c-ff2e47c89bcb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.724 2 DEBUG oslo_concurrency.lockutils [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Acquiring lock "84f412f7-074e-4bf3-b06c-ff2e47c89bcb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.724 2 DEBUG oslo_concurrency.lockutils [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Lock "84f412f7-074e-4bf3-b06c-ff2e47c89bcb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.724 2 DEBUG oslo_concurrency.lockutils [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Lock "84f412f7-074e-4bf3-b06c-ff2e47c89bcb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.725 2 INFO nova.compute.manager [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Terminating instance#033[00m
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.726 2 DEBUG nova.compute.manager [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 12:51:38 np0005464891 kernel: tapbd0e0e9e-68 (unregistering): left promiscuous mode
Oct  1 12:51:38 np0005464891 NetworkManager[44940]: <info>  [1759337498.7860] device (tapbd0e0e9e-68): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 12:51:38 np0005464891 ovn_controller[152409]: 2025-10-01T16:51:38Z|00121|binding|INFO|Releasing lport bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 from this chassis (sb_readonly=0)
Oct  1 12:51:38 np0005464891 ovn_controller[152409]: 2025-10-01T16:51:38Z|00122|binding|INFO|Setting lport bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 down in Southbound
Oct  1 12:51:38 np0005464891 ovn_controller[152409]: 2025-10-01T16:51:38Z|00123|binding|INFO|Removing iface tapbd0e0e9e-68 ovn-installed in OVS
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:38 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:38.804 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:06:4c 10.100.0.7'], port_security=['fa:16:3e:a7:06:4c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '84f412f7-074e-4bf3-b06c-ff2e47c89bcb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c857a1f9-2cbe-44e2-8b57-649872049256', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e5bc249518a47fd9bc1ca87595c86c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '35afd31e-9752-4e03-a4e9-f1d2f21d05e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88cf185d-0539-4a5f-bbd5-b5dab7791505, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:51:38 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:38.805 162546 INFO neutron.agent.ovn.metadata.agent [-] Port bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 in datapath c857a1f9-2cbe-44e2-8b57-649872049256 unbound from our chassis#033[00m
Oct  1 12:51:38 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:38.806 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c857a1f9-2cbe-44e2-8b57-649872049256, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 12:51:38 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:38.808 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[68214df8-c1ec-4fb2-a77f-66b8d39b406d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:38 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:38.808 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c857a1f9-2cbe-44e2-8b57-649872049256 namespace which is not needed anymore#033[00m
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:38 np0005464891 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Oct  1 12:51:38 np0005464891 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 14.165s CPU time.
Oct  1 12:51:38 np0005464891 systemd-machined[214891]: Machine qemu-12-instance-0000000c terminated.
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:38 np0005464891 neutron-haproxy-ovnmeta-c857a1f9-2cbe-44e2-8b57-649872049256[286247]: [NOTICE]   (286251) : haproxy version is 2.8.14-c23fe91
Oct  1 12:51:38 np0005464891 neutron-haproxy-ovnmeta-c857a1f9-2cbe-44e2-8b57-649872049256[286247]: [NOTICE]   (286251) : path to executable is /usr/sbin/haproxy
Oct  1 12:51:38 np0005464891 neutron-haproxy-ovnmeta-c857a1f9-2cbe-44e2-8b57-649872049256[286247]: [WARNING]  (286251) : Exiting Master process...
Oct  1 12:51:38 np0005464891 neutron-haproxy-ovnmeta-c857a1f9-2cbe-44e2-8b57-649872049256[286247]: [ALERT]    (286251) : Current worker (286253) exited with code 143 (Terminated)
Oct  1 12:51:38 np0005464891 neutron-haproxy-ovnmeta-c857a1f9-2cbe-44e2-8b57-649872049256[286247]: [WARNING]  (286251) : All workers exited. Exiting... (0)
Oct  1 12:51:38 np0005464891 systemd[1]: libpod-a400efb0d2e66bd48dcc21b830c735fc4c19b29f2636541f883f7c25a02c0b3f.scope: Deactivated successfully.
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:38 np0005464891 podman[286399]: 2025-10-01 16:51:38.960140512 +0000 UTC m=+0.048274581 container died a400efb0d2e66bd48dcc21b830c735fc4c19b29f2636541f883f7c25a02c0b3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c857a1f9-2cbe-44e2-8b57-649872049256, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.962 2 INFO nova.virt.libvirt.driver [-] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Instance destroyed successfully.#033[00m
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.962 2 DEBUG nova.objects.instance [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Lazy-loading 'resources' on Instance uuid 84f412f7-074e-4bf3-b06c-ff2e47c89bcb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.977 2 DEBUG nova.virt.libvirt.vif [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T16:51:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1327694327',display_name='tempest-TestVolumeBackupRestore-server-1327694327',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1327694327',id=12,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjAHh3bskmvZWDZw2GNJhicvkzd/a2S0/OiBeUQh9JInB3OK8Kri9Il248gAmb2dBL9aD+sn4x8t6ZEsEDbfzryxCzf1QjdeyYKwdCufvtakUmsf3b7U5SyPgzMmQ7BJg==',key_name='tempest-TestVolumeBackupRestore-1725508196',keypairs=<?>,launch_index=0,launched_at=2025-10-01T16:51:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1e5bc249518a47fd9bc1ca87595c86c7',ramdisk_id='',reservation_id='r-bnofjk18',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBackupRestore-293844552',owner_user_name='tempest-TestVolumeBackupRestore-293844552-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T16:51:19Z,user_data=None,user_id='99a779b3f1b644f590f56e3904b4c777',uuid=84f412f7-074e-4bf3-b06c-ff2e47c89bcb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "address": "fa:16:3e:a7:06:4c", "network": {"id": "c857a1f9-2cbe-44e2-8b57-649872049256", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1951088954-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e5bc249518a47fd9bc1ca87595c86c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd0e0e9e-68", "ovs_interfaceid": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.977 2 DEBUG nova.network.os_vif_util [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Converting VIF {"id": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "address": "fa:16:3e:a7:06:4c", "network": {"id": "c857a1f9-2cbe-44e2-8b57-649872049256", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1951088954-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e5bc249518a47fd9bc1ca87595c86c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd0e0e9e-68", "ovs_interfaceid": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.979 2 DEBUG nova.network.os_vif_util [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a7:06:4c,bridge_name='br-int',has_traffic_filtering=True,id=bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17,network=Network(c857a1f9-2cbe-44e2-8b57-649872049256),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd0e0e9e-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.980 2 DEBUG os_vif [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:06:4c,bridge_name='br-int',has_traffic_filtering=True,id=bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17,network=Network(c857a1f9-2cbe-44e2-8b57-649872049256),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd0e0e9e-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.982 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd0e0e9e-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:38 np0005464891 nova_compute[259907]: 2025-10-01 16:51:38.988 2 INFO os_vif [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:06:4c,bridge_name='br-int',has_traffic_filtering=True,id=bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17,network=Network(c857a1f9-2cbe-44e2-8b57-649872049256),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd0e0e9e-68')#033[00m
Oct  1 12:51:38 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1404: 321 pgs: 321 active+clean; 269 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 575 KiB/s rd, 3.2 MiB/s wr, 217 op/s
Oct  1 12:51:39 np0005464891 systemd[1]: var-lib-containers-storage-overlay-dfcc1063f0925af29b211042d57fc56093a1abd623b7b5962c713738da70b3d6-merged.mount: Deactivated successfully.
Oct  1 12:51:39 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a400efb0d2e66bd48dcc21b830c735fc4c19b29f2636541f883f7c25a02c0b3f-userdata-shm.mount: Deactivated successfully.
Oct  1 12:51:39 np0005464891 podman[286399]: 2025-10-01 16:51:39.013809532 +0000 UTC m=+0.101943641 container cleanup a400efb0d2e66bd48dcc21b830c735fc4c19b29f2636541f883f7c25a02c0b3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c857a1f9-2cbe-44e2-8b57-649872049256, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 12:51:39 np0005464891 systemd[1]: libpod-conmon-a400efb0d2e66bd48dcc21b830c735fc4c19b29f2636541f883f7c25a02c0b3f.scope: Deactivated successfully.
Oct  1 12:51:39 np0005464891 podman[286454]: 2025-10-01 16:51:39.097805127 +0000 UTC m=+0.049697061 container remove a400efb0d2e66bd48dcc21b830c735fc4c19b29f2636541f883f7c25a02c0b3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c857a1f9-2cbe-44e2-8b57-649872049256, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  1 12:51:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:39.104 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[4df32eaa-947f-4e5b-8f61-f02d474926c2]: (4, ('Wed Oct  1 04:51:38 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c857a1f9-2cbe-44e2-8b57-649872049256 (a400efb0d2e66bd48dcc21b830c735fc4c19b29f2636541f883f7c25a02c0b3f)\na400efb0d2e66bd48dcc21b830c735fc4c19b29f2636541f883f7c25a02c0b3f\nWed Oct  1 04:51:39 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c857a1f9-2cbe-44e2-8b57-649872049256 (a400efb0d2e66bd48dcc21b830c735fc4c19b29f2636541f883f7c25a02c0b3f)\na400efb0d2e66bd48dcc21b830c735fc4c19b29f2636541f883f7c25a02c0b3f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:39.105 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[1f74b763-5c7e-450e-8c54-db5d27c9c684]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:39.107 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc857a1f9-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:51:39 np0005464891 kernel: tapc857a1f9-20: left promiscuous mode
Oct  1 12:51:39 np0005464891 nova_compute[259907]: 2025-10-01 16:51:39.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:39 np0005464891 nova_compute[259907]: 2025-10-01 16:51:39.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:39.133 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[3e022e0e-a2cc-4795-b7be-ae24f81c8801]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:39.166 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[d2548aa9-b3cf-49b9-aa84-bfe6587de18c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:39.168 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[d313747a-7924-471e-abd1-5c18e69c6624]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:39 np0005464891 nova_compute[259907]: 2025-10-01 16:51:39.195 2 INFO nova.virt.libvirt.driver [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Deleting instance files /var/lib/nova/instances/84f412f7-074e-4bf3-b06c-ff2e47c89bcb_del#033[00m
Oct  1 12:51:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:39.194 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[59fc266d-a13c-40ab-b0ad-68f5dcb3a95e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442691, 'reachable_time': 30323, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286472, 'error': None, 'target': 'ovnmeta-c857a1f9-2cbe-44e2-8b57-649872049256', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:39 np0005464891 nova_compute[259907]: 2025-10-01 16:51:39.196 2 INFO nova.virt.libvirt.driver [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Deletion of /var/lib/nova/instances/84f412f7-074e-4bf3-b06c-ff2e47c89bcb_del complete#033[00m
Oct  1 12:51:39 np0005464891 systemd[1]: run-netns-ovnmeta\x2dc857a1f9\x2d2cbe\x2d44e2\x2d8b57\x2d649872049256.mount: Deactivated successfully.
Oct  1 12:51:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:39.202 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c857a1f9-2cbe-44e2-8b57-649872049256 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 12:51:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:51:39.202 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[31d501b1-5914-40ff-870c-341117225a19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:51:39 np0005464891 nova_compute[259907]: 2025-10-01 16:51:39.203 2 DEBUG nova.compute.manager [req-95609d9e-8122-4a4b-bb64-f74abd8cfaa5 req-c146832f-17be-4f30-9082-463173d6eafe af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Received event network-vif-unplugged-bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  1 12:51:39 np0005464891 nova_compute[259907]: 2025-10-01 16:51:39.204 2 DEBUG oslo_concurrency.lockutils [req-95609d9e-8122-4a4b-bb64-f74abd8cfaa5 req-c146832f-17be-4f30-9082-463173d6eafe af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "84f412f7-074e-4bf3-b06c-ff2e47c89bcb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:51:39 np0005464891 nova_compute[259907]: 2025-10-01 16:51:39.205 2 DEBUG oslo_concurrency.lockutils [req-95609d9e-8122-4a4b-bb64-f74abd8cfaa5 req-c146832f-17be-4f30-9082-463173d6eafe af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "84f412f7-074e-4bf3-b06c-ff2e47c89bcb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:51:39 np0005464891 nova_compute[259907]: 2025-10-01 16:51:39.205 2 DEBUG oslo_concurrency.lockutils [req-95609d9e-8122-4a4b-bb64-f74abd8cfaa5 req-c146832f-17be-4f30-9082-463173d6eafe af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "84f412f7-074e-4bf3-b06c-ff2e47c89bcb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:51:39 np0005464891 nova_compute[259907]: 2025-10-01 16:51:39.206 2 DEBUG nova.compute.manager [req-95609d9e-8122-4a4b-bb64-f74abd8cfaa5 req-c146832f-17be-4f30-9082-463173d6eafe af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] No waiting events found dispatching network-vif-unplugged-bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  1 12:51:39 np0005464891 nova_compute[259907]: 2025-10-01 16:51:39.206 2 DEBUG nova.compute.manager [req-95609d9e-8122-4a4b-bb64-f74abd8cfaa5 req-c146832f-17be-4f30-9082-463173d6eafe af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Received event network-vif-unplugged-bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  1 12:51:39 np0005464891 nova_compute[259907]: 2025-10-01 16:51:39.246 2 INFO nova.compute.manager [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Took 0.52 seconds to destroy the instance on the hypervisor.
Oct  1 12:51:39 np0005464891 nova_compute[259907]: 2025-10-01 16:51:39.247 2 DEBUG oslo.service.loopingcall [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  1 12:51:39 np0005464891 nova_compute[259907]: 2025-10-01 16:51:39.247 2 DEBUG nova.compute.manager [-] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  1 12:51:39 np0005464891 nova_compute[259907]: 2025-10-01 16:51:39.247 2 DEBUG nova.network.neutron [-] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  1 12:51:40 np0005464891 nova_compute[259907]: 2025-10-01 16:51:40.218 2 DEBUG nova.network.neutron [-] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  1 12:51:40 np0005464891 nova_compute[259907]: 2025-10-01 16:51:40.234 2 INFO nova.compute.manager [-] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Took 0.99 seconds to deallocate network for instance.
Oct  1 12:51:40 np0005464891 nova_compute[259907]: 2025-10-01 16:51:40.392 2 DEBUG nova.network.neutron [req-92ddecfa-ae31-4d6f-af49-b5964aca7da4 req-594695b2-d18a-46cd-8f1f-88c66f88e100 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Updated VIF entry in instance network info cache for port bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  1 12:51:40 np0005464891 nova_compute[259907]: 2025-10-01 16:51:40.393 2 DEBUG nova.network.neutron [req-92ddecfa-ae31-4d6f-af49-b5964aca7da4 req-594695b2-d18a-46cd-8f1f-88c66f88e100 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Updating instance_info_cache with network_info: [{"id": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "address": "fa:16:3e:a7:06:4c", "network": {"id": "c857a1f9-2cbe-44e2-8b57-649872049256", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1951088954-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e5bc249518a47fd9bc1ca87595c86c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd0e0e9e-68", "ovs_interfaceid": "bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  1 12:51:40 np0005464891 nova_compute[259907]: 2025-10-01 16:51:40.413 2 DEBUG oslo_concurrency.lockutils [req-92ddecfa-ae31-4d6f-af49-b5964aca7da4 req-594695b2-d18a-46cd-8f1f-88c66f88e100 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-84f412f7-074e-4bf3-b06c-ff2e47c89bcb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  1 12:51:40 np0005464891 nova_compute[259907]: 2025-10-01 16:51:40.426 2 INFO nova.compute.manager [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Took 0.19 seconds to detach 1 volumes for instance.
Oct  1 12:51:40 np0005464891 nova_compute[259907]: 2025-10-01 16:51:40.466 2 DEBUG oslo_concurrency.lockutils [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:51:40 np0005464891 nova_compute[259907]: 2025-10-01 16:51:40.466 2 DEBUG oslo_concurrency.lockutils [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:51:40 np0005464891 nova_compute[259907]: 2025-10-01 16:51:40.524 2 DEBUG oslo_concurrency.processutils [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 12:51:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:51:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2398145469' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:51:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:51:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Oct  1 12:51:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Oct  1 12:51:40 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Oct  1 12:51:40 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1406: 321 pgs: 321 active+clean; 269 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 534 KiB/s rd, 2.9 MiB/s wr, 214 op/s
Oct  1 12:51:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:51:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1879422948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:51:41 np0005464891 nova_compute[259907]: 2025-10-01 16:51:41.056 2 DEBUG oslo_concurrency.processutils [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 12:51:41 np0005464891 nova_compute[259907]: 2025-10-01 16:51:41.062 2 DEBUG nova.compute.provider_tree [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  1 12:51:41 np0005464891 nova_compute[259907]: 2025-10-01 16:51:41.078 2 DEBUG nova.scheduler.client.report [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  1 12:51:41 np0005464891 nova_compute[259907]: 2025-10-01 16:51:41.103 2 DEBUG oslo_concurrency.lockutils [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:51:41 np0005464891 nova_compute[259907]: 2025-10-01 16:51:41.134 2 INFO nova.scheduler.client.report [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Deleted allocations for instance 84f412f7-074e-4bf3-b06c-ff2e47c89bcb
Oct  1 12:51:41 np0005464891 nova_compute[259907]: 2025-10-01 16:51:41.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:51:41 np0005464891 nova_compute[259907]: 2025-10-01 16:51:41.200 2 DEBUG oslo_concurrency.lockutils [None req-6927904b-2a50-463b-8c18-574104049a82 99a779b3f1b644f590f56e3904b4c777 1e5bc249518a47fd9bc1ca87595c86c7 - - default default] Lock "84f412f7-074e-4bf3-b06c-ff2e47c89bcb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:51:41 np0005464891 nova_compute[259907]: 2025-10-01 16:51:41.277 2 DEBUG nova.compute.manager [req-f384ae40-661b-4681-a108-5b637688da05 req-e6b3e9c7-c003-4863-8b74-16c405754ade af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Received event network-vif-plugged-bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  1 12:51:41 np0005464891 nova_compute[259907]: 2025-10-01 16:51:41.277 2 DEBUG oslo_concurrency.lockutils [req-f384ae40-661b-4681-a108-5b637688da05 req-e6b3e9c7-c003-4863-8b74-16c405754ade af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "84f412f7-074e-4bf3-b06c-ff2e47c89bcb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:51:41 np0005464891 nova_compute[259907]: 2025-10-01 16:51:41.278 2 DEBUG oslo_concurrency.lockutils [req-f384ae40-661b-4681-a108-5b637688da05 req-e6b3e9c7-c003-4863-8b74-16c405754ade af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "84f412f7-074e-4bf3-b06c-ff2e47c89bcb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:51:41 np0005464891 nova_compute[259907]: 2025-10-01 16:51:41.278 2 DEBUG oslo_concurrency.lockutils [req-f384ae40-661b-4681-a108-5b637688da05 req-e6b3e9c7-c003-4863-8b74-16c405754ade af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "84f412f7-074e-4bf3-b06c-ff2e47c89bcb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:51:41 np0005464891 nova_compute[259907]: 2025-10-01 16:51:41.278 2 DEBUG nova.compute.manager [req-f384ae40-661b-4681-a108-5b637688da05 req-e6b3e9c7-c003-4863-8b74-16c405754ade af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] No waiting events found dispatching network-vif-plugged-bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  1 12:51:41 np0005464891 nova_compute[259907]: 2025-10-01 16:51:41.278 2 WARNING nova.compute.manager [req-f384ae40-661b-4681-a108-5b637688da05 req-e6b3e9c7-c003-4863-8b74-16c405754ade af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Received unexpected event network-vif-plugged-bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 for instance with vm_state deleted and task_state None.
Oct  1 12:51:41 np0005464891 nova_compute[259907]: 2025-10-01 16:51:41.278 2 DEBUG nova.compute.manager [req-f384ae40-661b-4681-a108-5b637688da05 req-e6b3e9c7-c003-4863-8b74-16c405754ade af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Received event network-vif-deleted-bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  1 12:51:41 np0005464891 nova_compute[259907]: 2025-10-01 16:51:41.279 2 INFO nova.compute.manager [req-f384ae40-661b-4681-a108-5b637688da05 req-e6b3e9c7-c003-4863-8b74-16c405754ade af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Neutron deleted interface bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17; detaching it from the instance and deleting it from the info cache
Oct  1 12:51:41 np0005464891 nova_compute[259907]: 2025-10-01 16:51:41.279 2 DEBUG nova.network.neutron [req-f384ae40-661b-4681-a108-5b637688da05 req-e6b3e9c7-c003-4863-8b74-16c405754ade af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Oct  1 12:51:41 np0005464891 nova_compute[259907]: 2025-10-01 16:51:41.282 2 DEBUG nova.compute.manager [req-f384ae40-661b-4681-a108-5b637688da05 req-e6b3e9c7-c003-4863-8b74-16c405754ade af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Detach interface failed, port_id=bd0e0e9e-68cd-4c71-8f9a-eae55ba44b17, reason: Instance 84f412f7-074e-4bf3-b06c-ff2e47c89bcb could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct  1 12:51:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Oct  1 12:51:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Oct  1 12:51:41 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Oct  1 12:51:41 np0005464891 podman[286496]: 2025-10-01 16:51:41.969645765 +0000 UTC m=+0.076116659 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  1 12:51:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:51:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:51:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:51:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:51:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:51:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:51:42 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1408: 321 pgs: 321 active+clean; 269 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 296 KiB/s rd, 848 KiB/s wr, 154 op/s
Oct  1 12:51:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:51:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/952245761' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:51:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Oct  1 12:51:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Oct  1 12:51:43 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Oct  1 12:51:43 np0005464891 nova_compute[259907]: 2025-10-01 16:51:43.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:51:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:51:44 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2902333613' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:51:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:51:44 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2902333613' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:51:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:51:44 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1457799134' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:51:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:51:44 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1457799134' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:51:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Oct  1 12:51:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Oct  1 12:51:44 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Oct  1 12:51:44 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1411: 321 pgs: 321 active+clean; 203 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 15 KiB/s wr, 164 op/s
Oct  1 12:51:45 np0005464891 nova_compute[259907]: 2025-10-01 16:51:45.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:51:45.438189) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337505438269, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2629, "num_deletes": 292, "total_data_size": 3589780, "memory_usage": 3647376, "flush_reason": "Manual Compaction"}
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337505458217, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3514306, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26209, "largest_seqno": 28837, "table_properties": {"data_size": 3502066, "index_size": 8030, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 27807, "raw_average_key_size": 22, "raw_value_size": 3476996, "raw_average_value_size": 2801, "num_data_blocks": 344, "num_entries": 1241, "num_filter_entries": 1241, "num_deletions": 292, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759337363, "oldest_key_time": 1759337363, "file_creation_time": 1759337505, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 20069 microseconds, and 9587 cpu microseconds.
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:51:45.458267) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3514306 bytes OK
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:51:45.458292) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:51:45.462291) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:51:45.462315) EVENT_LOG_v1 {"time_micros": 1759337505462307, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:51:45.462339) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3578033, prev total WAL file size 3578074, number of live WAL files 2.
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:51:45.464070) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3431KB)], [59(7211KB)]
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337505464164, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10899301, "oldest_snapshot_seqno": -1}
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5659 keys, 9108823 bytes, temperature: kUnknown
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337505543421, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9108823, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9067767, "index_size": 25778, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14213, "raw_key_size": 141413, "raw_average_key_size": 24, "raw_value_size": 8962663, "raw_average_value_size": 1583, "num_data_blocks": 1046, "num_entries": 5659, "num_filter_entries": 5659, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759337505, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:51:45.544115) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9108823 bytes
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:51:45.546554) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.3 rd, 114.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 7.0 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(5.7) write-amplify(2.6) OK, records in: 6222, records dropped: 563 output_compression: NoCompression
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:51:45.546572) EVENT_LOG_v1 {"time_micros": 1759337505546562, "job": 32, "event": "compaction_finished", "compaction_time_micros": 79380, "compaction_time_cpu_micros": 42055, "output_level": 6, "num_output_files": 1, "total_output_size": 9108823, "num_input_records": 6222, "num_output_records": 5659, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337505547654, "job": 32, "event": "table_file_deletion", "file_number": 61}
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337505549304, "job": 32, "event": "table_file_deletion", "file_number": 59}
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:51:45.463837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:51:45.549336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:51:45.549348) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:51:45.549350) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:51:45.549351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:51:45.549353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:51:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:51:46 np0005464891 nova_compute[259907]: 2025-10-01 16:51:46.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:46 np0005464891 nova_compute[259907]: 2025-10-01 16:51:46.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:46 np0005464891 nova_compute[259907]: 2025-10-01 16:51:46.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:51:46 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/107346790' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:51:46 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1413: 321 pgs: 321 active+clean; 146 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 13 KiB/s wr, 150 op/s
Oct  1 12:51:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Oct  1 12:51:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Oct  1 12:51:48 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Oct  1 12:51:48 np0005464891 nova_compute[259907]: 2025-10-01 16:51:48.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:48 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1415: 321 pgs: 321 active+clean; 88 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 130 KiB/s rd, 11 KiB/s wr, 190 op/s
Oct  1 12:51:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Oct  1 12:51:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Oct  1 12:51:49 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Oct  1 12:51:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:51:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Oct  1 12:51:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Oct  1 12:51:50 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Oct  1 12:51:50 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1418: 321 pgs: 321 active+clean; 88 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 120 KiB/s rd, 6.3 KiB/s wr, 162 op/s
Oct  1 12:51:51 np0005464891 nova_compute[259907]: 2025-10-01 16:51:51.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Oct  1 12:51:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Oct  1 12:51:52 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Oct  1 12:51:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:51:52 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:51:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:51:52 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:51:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:51:52 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1420: 321 pgs: 321 active+clean; 88 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 3.9 KiB/s wr, 93 op/s
Oct  1 12:51:52 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:51:52 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 6ae1e2c2-5b40-4173-9676-b2fc4e7f41b2 does not exist
Oct  1 12:51:53 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 2d8341a3-127f-4d26-9b38-3be7e132e6ff does not exist
Oct  1 12:51:53 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev d70519c8-7597-4320-a7f3-c98bb39eafd4 does not exist
Oct  1 12:51:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:51:53 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:51:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:51:53 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:51:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:51:53 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:51:53 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:51:53 np0005464891 podman[286673]: 2025-10-01 16:51:53.269589688 +0000 UTC m=+0.120646616 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  1 12:51:53 np0005464891 podman[286806]: 2025-10-01 16:51:53.818117898 +0000 UTC m=+0.029390822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:51:53 np0005464891 nova_compute[259907]: 2025-10-01 16:51:53.960 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759337498.9595351, 84f412f7-074e-4bf3-b06c-ff2e47c89bcb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:51:53 np0005464891 nova_compute[259907]: 2025-10-01 16:51:53.960 2 INFO nova.compute.manager [-] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] VM Stopped (Lifecycle Event)#033[00m
Oct  1 12:51:53 np0005464891 nova_compute[259907]: 2025-10-01 16:51:53.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:53 np0005464891 podman[286806]: 2025-10-01 16:51:53.99670057 +0000 UTC m=+0.207973504 container create 6256eec720ff3c068a20a5274e9bf936002adee124de969593421ac18b2aae85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 12:51:54 np0005464891 nova_compute[259907]: 2025-10-01 16:51:54.000 2 DEBUG nova.compute.manager [None req-ec9bb0ab-8e05-48f0-bbeb-fd545a0b1c2a - - - - - -] [instance: 84f412f7-074e-4bf3-b06c-ff2e47c89bcb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:51:54 np0005464891 systemd[1]: Started libpod-conmon-6256eec720ff3c068a20a5274e9bf936002adee124de969593421ac18b2aae85.scope.
Oct  1 12:51:54 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:51:54 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:51:54 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:51:54 np0005464891 podman[286806]: 2025-10-01 16:51:54.557335134 +0000 UTC m=+0.768608108 container init 6256eec720ff3c068a20a5274e9bf936002adee124de969593421ac18b2aae85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:51:54 np0005464891 podman[286806]: 2025-10-01 16:51:54.569816178 +0000 UTC m=+0.781089112 container start 6256eec720ff3c068a20a5274e9bf936002adee124de969593421ac18b2aae85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:51:54 np0005464891 goofy_jemison[286822]: 167 167
Oct  1 12:51:54 np0005464891 systemd[1]: libpod-6256eec720ff3c068a20a5274e9bf936002adee124de969593421ac18b2aae85.scope: Deactivated successfully.
Oct  1 12:51:54 np0005464891 podman[286806]: 2025-10-01 16:51:54.806679077 +0000 UTC m=+1.017952081 container attach 6256eec720ff3c068a20a5274e9bf936002adee124de969593421ac18b2aae85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:51:54 np0005464891 podman[286806]: 2025-10-01 16:51:54.807506849 +0000 UTC m=+1.018779793 container died 6256eec720ff3c068a20a5274e9bf936002adee124de969593421ac18b2aae85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:51:54 np0005464891 systemd[1]: var-lib-containers-storage-overlay-97a84d2093702d40ddfb3eb6cf8cbd4e6633106a3e8b2dfeeffd2b5c83dc24da-merged.mount: Deactivated successfully.
Oct  1 12:51:54 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1421: 321 pgs: 321 active+clean; 88 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 5.7 KiB/s wr, 124 op/s
Oct  1 12:51:55 np0005464891 podman[286806]: 2025-10-01 16:51:55.333301312 +0000 UTC m=+1.544574206 container remove 6256eec720ff3c068a20a5274e9bf936002adee124de969593421ac18b2aae85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 12:51:55 np0005464891 systemd[1]: libpod-conmon-6256eec720ff3c068a20a5274e9bf936002adee124de969593421ac18b2aae85.scope: Deactivated successfully.
Oct  1 12:51:55 np0005464891 podman[286845]: 2025-10-01 16:51:55.55053836 +0000 UTC m=+0.026431079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:51:55 np0005464891 podman[286845]: 2025-10-01 16:51:55.719736484 +0000 UTC m=+0.195629173 container create cd8c38c4154b93940bb15f9b6131af1f8575c824ce4cd143ce35915561309816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_borg, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:51:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:51:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Oct  1 12:51:55 np0005464891 systemd[1]: Started libpod-conmon-cd8c38c4154b93940bb15f9b6131af1f8575c824ce4cd143ce35915561309816.scope.
Oct  1 12:51:55 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:51:55 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e898d7a11d067b8238fcd874c3cc24a1290bf79d2e4e193b653bfa474a3b1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:51:55 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e898d7a11d067b8238fcd874c3cc24a1290bf79d2e4e193b653bfa474a3b1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:51:55 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e898d7a11d067b8238fcd874c3cc24a1290bf79d2e4e193b653bfa474a3b1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:51:55 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e898d7a11d067b8238fcd874c3cc24a1290bf79d2e4e193b653bfa474a3b1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:51:55 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e898d7a11d067b8238fcd874c3cc24a1290bf79d2e4e193b653bfa474a3b1c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:51:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Oct  1 12:51:55 np0005464891 podman[286845]: 2025-10-01 16:51:55.939487391 +0000 UTC m=+0.415380170 container init cd8c38c4154b93940bb15f9b6131af1f8575c824ce4cd143ce35915561309816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_borg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:51:55 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Oct  1 12:51:55 np0005464891 podman[286845]: 2025-10-01 16:51:55.952663695 +0000 UTC m=+0.428556424 container start cd8c38c4154b93940bb15f9b6131af1f8575c824ce4cd143ce35915561309816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_borg, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 12:51:56 np0005464891 podman[286845]: 2025-10-01 16:51:56.01380999 +0000 UTC m=+0.489702709 container attach cd8c38c4154b93940bb15f9b6131af1f8575c824ce4cd143ce35915561309816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 12:51:56 np0005464891 nova_compute[259907]: 2025-10-01 16:51:56.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:51:56 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2677310306' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:51:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:51:56 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2677310306' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:51:56 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1423: 321 pgs: 321 active+clean; 88 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 2.9 KiB/s wr, 72 op/s
Oct  1 12:51:57 np0005464891 naughty_borg[286861]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:51:57 np0005464891 naughty_borg[286861]: --> relative data size: 1.0
Oct  1 12:51:57 np0005464891 naughty_borg[286861]: --> All data devices are unavailable
Oct  1 12:51:57 np0005464891 systemd[1]: libpod-cd8c38c4154b93940bb15f9b6131af1f8575c824ce4cd143ce35915561309816.scope: Deactivated successfully.
Oct  1 12:51:57 np0005464891 systemd[1]: libpod-cd8c38c4154b93940bb15f9b6131af1f8575c824ce4cd143ce35915561309816.scope: Consumed 1.192s CPU time.
Oct  1 12:51:57 np0005464891 podman[286845]: 2025-10-01 16:51:57.211076831 +0000 UTC m=+1.686969560 container died cd8c38c4154b93940bb15f9b6131af1f8575c824ce4cd143ce35915561309816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_borg, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:51:57 np0005464891 systemd[1]: var-lib-containers-storage-overlay-13e898d7a11d067b8238fcd874c3cc24a1290bf79d2e4e193b653bfa474a3b1c-merged.mount: Deactivated successfully.
Oct  1 12:51:57 np0005464891 podman[286845]: 2025-10-01 16:51:57.487427349 +0000 UTC m=+1.963320038 container remove cd8c38c4154b93940bb15f9b6131af1f8575c824ce4cd143ce35915561309816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_borg, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:51:57 np0005464891 systemd[1]: libpod-conmon-cd8c38c4154b93940bb15f9b6131af1f8575c824ce4cd143ce35915561309816.scope: Deactivated successfully.
Oct  1 12:51:58 np0005464891 podman[287045]: 2025-10-01 16:51:58.345043388 +0000 UTC m=+0.080734196 container create cfd6c89503689c617e4d0917f46703561efd34c4d0f8e23643dbe3f41936a30d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  1 12:51:58 np0005464891 podman[287045]: 2025-10-01 16:51:58.292120569 +0000 UTC m=+0.027811397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:51:58 np0005464891 systemd[1]: Started libpod-conmon-cfd6c89503689c617e4d0917f46703561efd34c4d0f8e23643dbe3f41936a30d.scope.
Oct  1 12:51:58 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:51:58 np0005464891 podman[287045]: 2025-10-01 16:51:58.475674178 +0000 UTC m=+0.211365036 container init cfd6c89503689c617e4d0917f46703561efd34c4d0f8e23643dbe3f41936a30d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 12:51:58 np0005464891 podman[287045]: 2025-10-01 16:51:58.48587833 +0000 UTC m=+0.221569178 container start cfd6c89503689c617e4d0917f46703561efd34c4d0f8e23643dbe3f41936a30d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:51:58 np0005464891 zen_margulis[287061]: 167 167
Oct  1 12:51:58 np0005464891 systemd[1]: libpod-cfd6c89503689c617e4d0917f46703561efd34c4d0f8e23643dbe3f41936a30d.scope: Deactivated successfully.
Oct  1 12:51:58 np0005464891 podman[287045]: 2025-10-01 16:51:58.512388281 +0000 UTC m=+0.248079139 container attach cfd6c89503689c617e4d0917f46703561efd34c4d0f8e23643dbe3f41936a30d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  1 12:51:58 np0005464891 podman[287045]: 2025-10-01 16:51:58.512985827 +0000 UTC m=+0.248676705 container died cfd6c89503689c617e4d0917f46703561efd34c4d0f8e23643dbe3f41936a30d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_margulis, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 12:51:58 np0005464891 systemd[1]: var-lib-containers-storage-overlay-28245ba0ed216b1dcd4d3c439ea5bbb3b59d209cef8c41051b043d7b26e28037-merged.mount: Deactivated successfully.
Oct  1 12:51:58 np0005464891 podman[287045]: 2025-10-01 16:51:58.712139047 +0000 UTC m=+0.447829895 container remove cfd6c89503689c617e4d0917f46703561efd34c4d0f8e23643dbe3f41936a30d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  1 12:51:58 np0005464891 systemd[1]: libpod-conmon-cfd6c89503689c617e4d0917f46703561efd34c4d0f8e23643dbe3f41936a30d.scope: Deactivated successfully.
Oct  1 12:51:58 np0005464891 nova_compute[259907]: 2025-10-01 16:51:58.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:51:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1424: 321 pgs: 321 active+clean; 88 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.2 KiB/s wr, 58 op/s
Oct  1 12:51:59 np0005464891 podman[287087]: 2025-10-01 16:51:59.023137269 +0000 UTC m=+0.122620511 container create 2669e0cc5422a96255269265e6eb4e489874a597bf8a744340b0771e22a9bbd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_morse, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 12:51:59 np0005464891 podman[287087]: 2025-10-01 16:51:58.934489415 +0000 UTC m=+0.033972697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:51:59 np0005464891 systemd[1]: Started libpod-conmon-2669e0cc5422a96255269265e6eb4e489874a597bf8a744340b0771e22a9bbd6.scope.
Oct  1 12:51:59 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:51:59 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c44ba30068d87728cc7a7d37e26f7341c19706405bef08acaee03a8c7525d7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:51:59 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c44ba30068d87728cc7a7d37e26f7341c19706405bef08acaee03a8c7525d7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:51:59 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c44ba30068d87728cc7a7d37e26f7341c19706405bef08acaee03a8c7525d7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:51:59 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c44ba30068d87728cc7a7d37e26f7341c19706405bef08acaee03a8c7525d7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:51:59 np0005464891 podman[287087]: 2025-10-01 16:51:59.207406848 +0000 UTC m=+0.306890170 container init 2669e0cc5422a96255269265e6eb4e489874a597bf8a744340b0771e22a9bbd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_morse, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:51:59 np0005464891 podman[287087]: 2025-10-01 16:51:59.217393753 +0000 UTC m=+0.316876995 container start 2669e0cc5422a96255269265e6eb4e489874a597bf8a744340b0771e22a9bbd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_morse, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:51:59 np0005464891 podman[287087]: 2025-10-01 16:51:59.246136876 +0000 UTC m=+0.345620208 container attach 2669e0cc5422a96255269265e6eb4e489874a597bf8a744340b0771e22a9bbd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_morse, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]: {
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:    "0": [
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:        {
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "devices": [
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "/dev/loop3"
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            ],
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "lv_name": "ceph_lv0",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "lv_size": "21470642176",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "name": "ceph_lv0",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "tags": {
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.cluster_name": "ceph",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.crush_device_class": "",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.encrypted": "0",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.osd_id": "0",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.type": "block",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.vdo": "0"
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            },
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "type": "block",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "vg_name": "ceph_vg0"
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:        }
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:    ],
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:    "1": [
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:        {
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "devices": [
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "/dev/loop4"
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            ],
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "lv_name": "ceph_lv1",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "lv_size": "21470642176",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "name": "ceph_lv1",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "tags": {
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.cluster_name": "ceph",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.crush_device_class": "",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.encrypted": "0",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.osd_id": "1",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.type": "block",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.vdo": "0"
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            },
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "type": "block",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "vg_name": "ceph_vg1"
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:        }
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:    ],
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:    "2": [
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:        {
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "devices": [
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "/dev/loop5"
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            ],
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "lv_name": "ceph_lv2",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "lv_size": "21470642176",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "name": "ceph_lv2",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "tags": {
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.cluster_name": "ceph",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.crush_device_class": "",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.encrypted": "0",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.osd_id": "2",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.type": "block",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:                "ceph.vdo": "0"
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            },
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "type": "block",
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:            "vg_name": "ceph_vg2"
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:        }
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]:    ]
Oct  1 12:52:00 np0005464891 stupefied_morse[287103]: }
Oct  1 12:52:00 np0005464891 systemd[1]: libpod-2669e0cc5422a96255269265e6eb4e489874a597bf8a744340b0771e22a9bbd6.scope: Deactivated successfully.
Oct  1 12:52:00 np0005464891 podman[287087]: 2025-10-01 16:52:00.085134832 +0000 UTC m=+1.184618084 container died 2669e0cc5422a96255269265e6eb4e489874a597bf8a744340b0771e22a9bbd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:52:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:52:00 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2758481163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:52:00 np0005464891 systemd[1]: var-lib-containers-storage-overlay-5c44ba30068d87728cc7a7d37e26f7341c19706405bef08acaee03a8c7525d7e-merged.mount: Deactivated successfully.
Oct  1 12:52:00 np0005464891 podman[287087]: 2025-10-01 16:52:00.500410418 +0000 UTC m=+1.599893700 container remove 2669e0cc5422a96255269265e6eb4e489874a597bf8a744340b0771e22a9bbd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_morse, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:52:00 np0005464891 systemd[1]: libpod-conmon-2669e0cc5422a96255269265e6eb4e489874a597bf8a744340b0771e22a9bbd6.scope: Deactivated successfully.
Oct  1 12:52:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:52:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Oct  1 12:52:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Oct  1 12:52:00 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Oct  1 12:52:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:52:00 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3245064310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:52:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1426: 321 pgs: 321 active+clean; 88 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.0 KiB/s wr, 58 op/s
Oct  1 12:52:01 np0005464891 nova_compute[259907]: 2025-10-01 16:52:01.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:01 np0005464891 podman[287268]: 2025-10-01 16:52:01.489950963 +0000 UTC m=+0.125858749 container create 3ae2a5a3d43995bdf03b25eb74e29ed7afb929b239ea697da0ee9379d5da8f57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 12:52:01 np0005464891 podman[287268]: 2025-10-01 16:52:01.409650541 +0000 UTC m=+0.045558417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:52:01 np0005464891 systemd[1]: Started libpod-conmon-3ae2a5a3d43995bdf03b25eb74e29ed7afb929b239ea697da0ee9379d5da8f57.scope.
Oct  1 12:52:01 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:52:01 np0005464891 podman[287268]: 2025-10-01 16:52:01.670326806 +0000 UTC m=+0.306234682 container init 3ae2a5a3d43995bdf03b25eb74e29ed7afb929b239ea697da0ee9379d5da8f57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:52:01 np0005464891 podman[287268]: 2025-10-01 16:52:01.676851195 +0000 UTC m=+0.312759001 container start 3ae2a5a3d43995bdf03b25eb74e29ed7afb929b239ea697da0ee9379d5da8f57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_torvalds, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 12:52:01 np0005464891 recursing_torvalds[287284]: 167 167
Oct  1 12:52:01 np0005464891 systemd[1]: libpod-3ae2a5a3d43995bdf03b25eb74e29ed7afb929b239ea697da0ee9379d5da8f57.scope: Deactivated successfully.
Oct  1 12:52:01 np0005464891 podman[287268]: 2025-10-01 16:52:01.764933144 +0000 UTC m=+0.400841180 container attach 3ae2a5a3d43995bdf03b25eb74e29ed7afb929b239ea697da0ee9379d5da8f57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_torvalds, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  1 12:52:01 np0005464891 podman[287268]: 2025-10-01 16:52:01.765578101 +0000 UTC m=+0.401485967 container died 3ae2a5a3d43995bdf03b25eb74e29ed7afb929b239ea697da0ee9379d5da8f57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:52:01 np0005464891 systemd[1]: var-lib-containers-storage-overlay-d443ef38a4b5eab8e28f44f19bc93b390c6a4c4c3d668cc1bb374c8694e47694-merged.mount: Deactivated successfully.
Oct  1 12:52:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Oct  1 12:52:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Oct  1 12:52:02 np0005464891 podman[287268]: 2025-10-01 16:52:02.04165277 +0000 UTC m=+0.677560586 container remove 3ae2a5a3d43995bdf03b25eb74e29ed7afb929b239ea697da0ee9379d5da8f57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 12:52:02 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Oct  1 12:52:02 np0005464891 systemd[1]: libpod-conmon-3ae2a5a3d43995bdf03b25eb74e29ed7afb929b239ea697da0ee9379d5da8f57.scope: Deactivated successfully.
Oct  1 12:52:02 np0005464891 podman[287310]: 2025-10-01 16:52:02.270725525 +0000 UTC m=+0.069129696 container create 6e506635078eb97734cab16673a63d7f8cad1b548ca9161df3f386a993129bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 12:52:02 np0005464891 podman[287310]: 2025-10-01 16:52:02.230421244 +0000 UTC m=+0.028825385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:52:02 np0005464891 systemd[1]: Started libpod-conmon-6e506635078eb97734cab16673a63d7f8cad1b548ca9161df3f386a993129bff.scope.
Oct  1 12:52:02 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:52:02 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1d1bc8064d5ec8867a446217ae8e52ecf84884e9ddf3f6fbfa207e359c75ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:52:02 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1d1bc8064d5ec8867a446217ae8e52ecf84884e9ddf3f6fbfa207e359c75ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:52:02 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1d1bc8064d5ec8867a446217ae8e52ecf84884e9ddf3f6fbfa207e359c75ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:52:02 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1d1bc8064d5ec8867a446217ae8e52ecf84884e9ddf3f6fbfa207e359c75ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:52:02 np0005464891 podman[287310]: 2025-10-01 16:52:02.577205413 +0000 UTC m=+0.375609574 container init 6e506635078eb97734cab16673a63d7f8cad1b548ca9161df3f386a993129bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:52:02 np0005464891 podman[287310]: 2025-10-01 16:52:02.586789627 +0000 UTC m=+0.385193798 container start 6e506635078eb97734cab16673a63d7f8cad1b548ca9161df3f386a993129bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jang, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 12:52:02 np0005464891 podman[287310]: 2025-10-01 16:52:02.612112465 +0000 UTC m=+0.410516626 container attach 6e506635078eb97734cab16673a63d7f8cad1b548ca9161df3f386a993129bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:52:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1428: 321 pgs: 321 active+clean; 88 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 2.6 KiB/s wr, 21 op/s
Oct  1 12:52:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Oct  1 12:52:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Oct  1 12:52:03 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Oct  1 12:52:03 np0005464891 interesting_jang[287327]: {
Oct  1 12:52:03 np0005464891 interesting_jang[287327]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:52:03 np0005464891 interesting_jang[287327]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:52:03 np0005464891 interesting_jang[287327]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:52:03 np0005464891 interesting_jang[287327]:        "osd_id": 2,
Oct  1 12:52:03 np0005464891 interesting_jang[287327]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:52:03 np0005464891 interesting_jang[287327]:        "type": "bluestore"
Oct  1 12:52:03 np0005464891 interesting_jang[287327]:    },
Oct  1 12:52:03 np0005464891 interesting_jang[287327]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:52:03 np0005464891 interesting_jang[287327]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:52:03 np0005464891 interesting_jang[287327]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:52:03 np0005464891 interesting_jang[287327]:        "osd_id": 0,
Oct  1 12:52:03 np0005464891 interesting_jang[287327]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:52:03 np0005464891 interesting_jang[287327]:        "type": "bluestore"
Oct  1 12:52:03 np0005464891 interesting_jang[287327]:    },
Oct  1 12:52:03 np0005464891 interesting_jang[287327]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:52:03 np0005464891 interesting_jang[287327]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:52:03 np0005464891 interesting_jang[287327]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:52:03 np0005464891 interesting_jang[287327]:        "osd_id": 1,
Oct  1 12:52:03 np0005464891 interesting_jang[287327]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:52:03 np0005464891 interesting_jang[287327]:        "type": "bluestore"
Oct  1 12:52:03 np0005464891 interesting_jang[287327]:    }
Oct  1 12:52:03 np0005464891 interesting_jang[287327]: }
Oct  1 12:52:03 np0005464891 systemd[1]: libpod-6e506635078eb97734cab16673a63d7f8cad1b548ca9161df3f386a993129bff.scope: Deactivated successfully.
Oct  1 12:52:03 np0005464891 systemd[1]: libpod-6e506635078eb97734cab16673a63d7f8cad1b548ca9161df3f386a993129bff.scope: Consumed 1.072s CPU time.
Oct  1 12:52:03 np0005464891 podman[287360]: 2025-10-01 16:52:03.706906972 +0000 UTC m=+0.034390969 container died 6e506635078eb97734cab16673a63d7f8cad1b548ca9161df3f386a993129bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jang, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:52:03 np0005464891 systemd[1]: var-lib-containers-storage-overlay-db1d1bc8064d5ec8867a446217ae8e52ecf84884e9ddf3f6fbfa207e359c75ad-merged.mount: Deactivated successfully.
Oct  1 12:52:03 np0005464891 podman[287361]: 2025-10-01 16:52:03.855284092 +0000 UTC m=+0.167509879 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  1 12:52:03 np0005464891 podman[287360]: 2025-10-01 16:52:03.872403494 +0000 UTC m=+0.199887401 container remove 6e506635078eb97734cab16673a63d7f8cad1b548ca9161df3f386a993129bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  1 12:52:03 np0005464891 systemd[1]: libpod-conmon-6e506635078eb97734cab16673a63d7f8cad1b548ca9161df3f386a993129bff.scope: Deactivated successfully.
Oct  1 12:52:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:52:03 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:52:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:52:03 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:52:03 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 90365052-c02b-4573-952b-0d50b53cdb17 does not exist
Oct  1 12:52:03 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev a099d336-da01-4b68-8713-a3a6bae593fd does not exist
Oct  1 12:52:03 np0005464891 nova_compute[259907]: 2025-10-01 16:52:03.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:04 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:52:04 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:52:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1430: 321 pgs: 321 active+clean; 88 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.0 KiB/s wr, 41 op/s
Oct  1 12:52:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Oct  1 12:52:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Oct  1 12:52:05 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Oct  1 12:52:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:52:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Oct  1 12:52:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Oct  1 12:52:05 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Oct  1 12:52:06 np0005464891 nova_compute[259907]: 2025-10-01 16:52:06.055 2 DEBUG oslo_concurrency.lockutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Acquiring lock "5076fb4d-3680-4a43-b137-762db8ee9de6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:52:06 np0005464891 nova_compute[259907]: 2025-10-01 16:52:06.056 2 DEBUG oslo_concurrency.lockutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lock "5076fb4d-3680-4a43-b137-762db8ee9de6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:52:06 np0005464891 nova_compute[259907]: 2025-10-01 16:52:06.119 2 DEBUG nova.compute.manager [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 12:52:06 np0005464891 nova_compute[259907]: 2025-10-01 16:52:06.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:06 np0005464891 nova_compute[259907]: 2025-10-01 16:52:06.335 2 DEBUG oslo_concurrency.lockutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:52:06 np0005464891 nova_compute[259907]: 2025-10-01 16:52:06.336 2 DEBUG oslo_concurrency.lockutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:52:06 np0005464891 nova_compute[259907]: 2025-10-01 16:52:06.349 2 DEBUG nova.virt.hardware [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 12:52:06 np0005464891 nova_compute[259907]: 2025-10-01 16:52:06.350 2 INFO nova.compute.claims [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 12:52:06 np0005464891 nova_compute[259907]: 2025-10-01 16:52:06.472 2 DEBUG oslo_concurrency.processutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:52:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:52:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1425868736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:52:06 np0005464891 nova_compute[259907]: 2025-10-01 16:52:06.929 2 DEBUG oslo_concurrency.processutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:52:06 np0005464891 nova_compute[259907]: 2025-10-01 16:52:06.934 2 DEBUG nova.compute.provider_tree [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:52:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Oct  1 12:52:06 np0005464891 nova_compute[259907]: 2025-10-01 16:52:06.960 2 DEBUG nova.scheduler.client.report [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:52:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1433: 321 pgs: 321 active+clean; 88 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 6.2 KiB/s wr, 87 op/s
Oct  1 12:52:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Oct  1 12:52:07 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Oct  1 12:52:07 np0005464891 nova_compute[259907]: 2025-10-01 16:52:07.095 2 DEBUG oslo_concurrency.lockutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:52:07 np0005464891 nova_compute[259907]: 2025-10-01 16:52:07.096 2 DEBUG nova.compute.manager [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 12:52:07 np0005464891 nova_compute[259907]: 2025-10-01 16:52:07.153 2 DEBUG nova.compute.manager [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 12:52:07 np0005464891 nova_compute[259907]: 2025-10-01 16:52:07.153 2 DEBUG nova.network.neutron [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 12:52:07 np0005464891 nova_compute[259907]: 2025-10-01 16:52:07.187 2 INFO nova.virt.libvirt.driver [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 12:52:07 np0005464891 nova_compute[259907]: 2025-10-01 16:52:07.247 2 DEBUG nova.compute.manager [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 12:52:07 np0005464891 nova_compute[259907]: 2025-10-01 16:52:07.390 2 DEBUG nova.compute.manager [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 12:52:07 np0005464891 nova_compute[259907]: 2025-10-01 16:52:07.392 2 DEBUG nova.virt.libvirt.driver [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 12:52:07 np0005464891 nova_compute[259907]: 2025-10-01 16:52:07.393 2 INFO nova.virt.libvirt.driver [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Creating image(s)#033[00m
Oct  1 12:52:07 np0005464891 nova_compute[259907]: 2025-10-01 16:52:07.418 2 DEBUG nova.storage.rbd_utils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] rbd image 5076fb4d-3680-4a43-b137-762db8ee9de6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:52:07 np0005464891 nova_compute[259907]: 2025-10-01 16:52:07.441 2 DEBUG nova.storage.rbd_utils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] rbd image 5076fb4d-3680-4a43-b137-762db8ee9de6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:52:07 np0005464891 nova_compute[259907]: 2025-10-01 16:52:07.468 2 DEBUG nova.storage.rbd_utils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] rbd image 5076fb4d-3680-4a43-b137-762db8ee9de6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:52:07 np0005464891 nova_compute[259907]: 2025-10-01 16:52:07.471 2 DEBUG oslo_concurrency.processutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:52:07 np0005464891 nova_compute[259907]: 2025-10-01 16:52:07.498 2 DEBUG nova.policy [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a7aa882d4d1e40a9aeef4f8bbd50372a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '471bace20aee4e2a82d226b5f69cdfd8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 12:52:07 np0005464891 nova_compute[259907]: 2025-10-01 16:52:07.547 2 DEBUG oslo_concurrency.processutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:52:07 np0005464891 nova_compute[259907]: 2025-10-01 16:52:07.548 2 DEBUG oslo_concurrency.lockutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Acquiring lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:52:07 np0005464891 nova_compute[259907]: 2025-10-01 16:52:07.549 2 DEBUG oslo_concurrency.lockutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:52:07 np0005464891 nova_compute[259907]: 2025-10-01 16:52:07.549 2 DEBUG oslo_concurrency.lockutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:52:07 np0005464891 nova_compute[259907]: 2025-10-01 16:52:07.566 2 DEBUG nova.storage.rbd_utils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] rbd image 5076fb4d-3680-4a43-b137-762db8ee9de6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:52:07 np0005464891 nova_compute[259907]: 2025-10-01 16:52:07.569 2 DEBUG oslo_concurrency.processutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa 5076fb4d-3680-4a43-b137-762db8ee9de6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:52:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:52:07 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1302408175' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:52:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:52:07 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1302408175' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:52:07 np0005464891 nova_compute[259907]: 2025-10-01 16:52:07.818 2 DEBUG oslo_concurrency.processutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa 5076fb4d-3680-4a43-b137-762db8ee9de6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:52:07 np0005464891 nova_compute[259907]: 2025-10-01 16:52:07.901 2 DEBUG nova.storage.rbd_utils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] resizing rbd image 5076fb4d-3680-4a43-b137-762db8ee9de6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  1 12:52:07 np0005464891 podman[287591]: 2025-10-01 16:52:07.944817687 +0000 UTC m=+0.059957854 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  1 12:52:08 np0005464891 nova_compute[259907]: 2025-10-01 16:52:08.000 2 DEBUG nova.objects.instance [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lazy-loading 'migration_context' on Instance uuid 5076fb4d-3680-4a43-b137-762db8ee9de6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:52:08 np0005464891 nova_compute[259907]: 2025-10-01 16:52:08.029 2 DEBUG nova.virt.libvirt.driver [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  1 12:52:08 np0005464891 nova_compute[259907]: 2025-10-01 16:52:08.030 2 DEBUG nova.virt.libvirt.driver [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Ensure instance console log exists: /var/lib/nova/instances/5076fb4d-3680-4a43-b137-762db8ee9de6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 12:52:08 np0005464891 nova_compute[259907]: 2025-10-01 16:52:08.031 2 DEBUG oslo_concurrency.lockutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:52:08 np0005464891 nova_compute[259907]: 2025-10-01 16:52:08.031 2 DEBUG oslo_concurrency.lockutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:52:08 np0005464891 nova_compute[259907]: 2025-10-01 16:52:08.031 2 DEBUG oslo_concurrency.lockutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:52:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:52:08 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1527640104' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:52:08 np0005464891 nova_compute[259907]: 2025-10-01 16:52:08.763 2 DEBUG nova.network.neutron [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Successfully created port: f842220e-e045-41b3-a476-251d10fab2e1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 12:52:08 np0005464891 nova_compute[259907]: 2025-10-01 16:52:08.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1435: 321 pgs: 321 active+clean; 99 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 636 KiB/s wr, 74 op/s
Oct  1 12:52:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Oct  1 12:52:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Oct  1 12:52:09 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Oct  1 12:52:09 np0005464891 nova_compute[259907]: 2025-10-01 16:52:09.621 2 DEBUG nova.network.neutron [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Successfully updated port: f842220e-e045-41b3-a476-251d10fab2e1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 12:52:09 np0005464891 nova_compute[259907]: 2025-10-01 16:52:09.637 2 DEBUG oslo_concurrency.lockutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Acquiring lock "refresh_cache-5076fb4d-3680-4a43-b137-762db8ee9de6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:52:09 np0005464891 nova_compute[259907]: 2025-10-01 16:52:09.637 2 DEBUG oslo_concurrency.lockutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Acquired lock "refresh_cache-5076fb4d-3680-4a43-b137-762db8ee9de6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:52:09 np0005464891 nova_compute[259907]: 2025-10-01 16:52:09.637 2 DEBUG nova.network.neutron [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 12:52:09 np0005464891 nova_compute[259907]: 2025-10-01 16:52:09.740 2 DEBUG nova.compute.manager [req-02bff5ae-a280-413c-b499-e50e8fa5a803 req-0e4b6115-2136-4171-90bd-ee8277ea9eda af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Received event network-changed-f842220e-e045-41b3-a476-251d10fab2e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:52:09 np0005464891 nova_compute[259907]: 2025-10-01 16:52:09.741 2 DEBUG nova.compute.manager [req-02bff5ae-a280-413c-b499-e50e8fa5a803 req-0e4b6115-2136-4171-90bd-ee8277ea9eda af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Refreshing instance network info cache due to event network-changed-f842220e-e045-41b3-a476-251d10fab2e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:52:09 np0005464891 nova_compute[259907]: 2025-10-01 16:52:09.742 2 DEBUG oslo_concurrency.lockutils [req-02bff5ae-a280-413c-b499-e50e8fa5a803 req-0e4b6115-2136-4171-90bd-ee8277ea9eda af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-5076fb4d-3680-4a43-b137-762db8ee9de6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:52:09 np0005464891 nova_compute[259907]: 2025-10-01 16:52:09.829 2 DEBUG nova.network.neutron [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 12:52:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Oct  1 12:52:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Oct  1 12:52:10 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Oct  1 12:52:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:52:10 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/588792588' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:52:10 np0005464891 nova_compute[259907]: 2025-10-01 16:52:10.867 2 DEBUG nova.network.neutron [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Updating instance_info_cache with network_info: [{"id": "f842220e-e045-41b3-a476-251d10fab2e1", "address": "fa:16:3e:26:23:21", "network": {"id": "077f8413-89f8-4043-83f9-97c1e959d04f", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-295665754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "471bace20aee4e2a82d226b5f69cdfd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf842220e-e0", "ovs_interfaceid": "f842220e-e045-41b3-a476-251d10fab2e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:52:10 np0005464891 nova_compute[259907]: 2025-10-01 16:52:10.894 2 DEBUG oslo_concurrency.lockutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Releasing lock "refresh_cache-5076fb4d-3680-4a43-b137-762db8ee9de6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:52:10 np0005464891 nova_compute[259907]: 2025-10-01 16:52:10.895 2 DEBUG nova.compute.manager [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Instance network_info: |[{"id": "f842220e-e045-41b3-a476-251d10fab2e1", "address": "fa:16:3e:26:23:21", "network": {"id": "077f8413-89f8-4043-83f9-97c1e959d04f", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-295665754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "471bace20aee4e2a82d226b5f69cdfd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf842220e-e0", "ovs_interfaceid": "f842220e-e045-41b3-a476-251d10fab2e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 12:52:10 np0005464891 nova_compute[259907]: 2025-10-01 16:52:10.895 2 DEBUG oslo_concurrency.lockutils [req-02bff5ae-a280-413c-b499-e50e8fa5a803 req-0e4b6115-2136-4171-90bd-ee8277ea9eda af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-5076fb4d-3680-4a43-b137-762db8ee9de6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:52:10 np0005464891 nova_compute[259907]: 2025-10-01 16:52:10.895 2 DEBUG nova.network.neutron [req-02bff5ae-a280-413c-b499-e50e8fa5a803 req-0e4b6115-2136-4171-90bd-ee8277ea9eda af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Refreshing network info cache for port f842220e-e045-41b3-a476-251d10fab2e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:52:10 np0005464891 nova_compute[259907]: 2025-10-01 16:52:10.899 2 DEBUG nova.virt.libvirt.driver [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Start _get_guest_xml network_info=[{"id": "f842220e-e045-41b3-a476-251d10fab2e1", "address": "fa:16:3e:26:23:21", "network": {"id": "077f8413-89f8-4043-83f9-97c1e959d04f", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-295665754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "471bace20aee4e2a82d226b5f69cdfd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf842220e-e0", "ovs_interfaceid": "f842220e-e045-41b3-a476-251d10fab2e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'image_id': 'f01c1e7c-fea3-4433-a44a-d71153552c78'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 12:52:10 np0005464891 nova_compute[259907]: 2025-10-01 16:52:10.905 2 WARNING nova.virt.libvirt.driver [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:52:10 np0005464891 nova_compute[259907]: 2025-10-01 16:52:10.916 2 DEBUG nova.virt.libvirt.host [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 12:52:10 np0005464891 nova_compute[259907]: 2025-10-01 16:52:10.917 2 DEBUG nova.virt.libvirt.host [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 12:52:10 np0005464891 nova_compute[259907]: 2025-10-01 16:52:10.921 2 DEBUG nova.virt.libvirt.host [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 12:52:10 np0005464891 nova_compute[259907]: 2025-10-01 16:52:10.921 2 DEBUG nova.virt.libvirt.host [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 12:52:10 np0005464891 nova_compute[259907]: 2025-10-01 16:52:10.922 2 DEBUG nova.virt.libvirt.driver [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 12:52:10 np0005464891 nova_compute[259907]: 2025-10-01 16:52:10.922 2 DEBUG nova.virt.hardware [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 12:52:10 np0005464891 nova_compute[259907]: 2025-10-01 16:52:10.923 2 DEBUG nova.virt.hardware [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 12:52:10 np0005464891 nova_compute[259907]: 2025-10-01 16:52:10.923 2 DEBUG nova.virt.hardware [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 12:52:10 np0005464891 nova_compute[259907]: 2025-10-01 16:52:10.924 2 DEBUG nova.virt.hardware [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 12:52:10 np0005464891 nova_compute[259907]: 2025-10-01 16:52:10.924 2 DEBUG nova.virt.hardware [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 12:52:10 np0005464891 nova_compute[259907]: 2025-10-01 16:52:10.924 2 DEBUG nova.virt.hardware [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 12:52:10 np0005464891 nova_compute[259907]: 2025-10-01 16:52:10.925 2 DEBUG nova.virt.hardware [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 12:52:10 np0005464891 nova_compute[259907]: 2025-10-01 16:52:10.925 2 DEBUG nova.virt.hardware [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 12:52:10 np0005464891 nova_compute[259907]: 2025-10-01 16:52:10.925 2 DEBUG nova.virt.hardware [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 12:52:10 np0005464891 nova_compute[259907]: 2025-10-01 16:52:10.925 2 DEBUG nova.virt.hardware [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 12:52:10 np0005464891 nova_compute[259907]: 2025-10-01 16:52:10.926 2 DEBUG nova.virt.hardware [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 12:52:10 np0005464891 nova_compute[259907]: 2025-10-01 16:52:10.930 2 DEBUG oslo_concurrency.processutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:52:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:52:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1438: 321 pgs: 321 active+clean; 134 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 4.2 MiB/s wr, 173 op/s
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Oct  1 12:52:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Oct  1 12:52:11 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Oct  1 12:52:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:52:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/879209862' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.414 2 DEBUG oslo_concurrency.processutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.444 2 DEBUG nova.storage.rbd_utils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] rbd image 5076fb4d-3680-4a43-b137-762db8ee9de6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.451 2 DEBUG oslo_concurrency.processutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:52:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:52:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3520874002' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.881 2 DEBUG oslo_concurrency.processutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.883 2 DEBUG nova.virt.libvirt.vif [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:52:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-935559608',display_name='tempest-VolumesExtendAttachedTest-instance-935559608',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-935559608',id=13,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMBKSGuNZ8Zgd5elFxHZA0Lxnv6DxiDS0oT75+y2FS5fbNAyTR80lPH+T6Uxmfz/PNJJ1He3Xp3l5520kqNVdYDjlXhExX0PrjfyD6Z59A8kiEgxGP1TRUqgjtHAmnVanw==',key_name='tempest-keypair-1602620396',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='471bace20aee4e2a82d226b5f69cdfd8',ramdisk_id='',reservation_id='r-jqmeyndu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-92058092',owner_user_name='tempest-VolumesExtendAttachedTest-92058092-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:52:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a7aa882d4d1e40a9aeef4f8bbd50372a',uuid=5076fb4d-3680-4a43-b137-762db8ee9de6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f842220e-e045-41b3-a476-251d10fab2e1", "address": "fa:16:3e:26:23:21", "network": {"id": "077f8413-89f8-4043-83f9-97c1e959d04f", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-295665754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "471bace20aee4e2a82d226b5f69cdfd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf842220e-e0", "ovs_interfaceid": "f842220e-e045-41b3-a476-251d10fab2e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.883 2 DEBUG nova.network.os_vif_util [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Converting VIF {"id": "f842220e-e045-41b3-a476-251d10fab2e1", "address": "fa:16:3e:26:23:21", "network": {"id": "077f8413-89f8-4043-83f9-97c1e959d04f", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-295665754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "471bace20aee4e2a82d226b5f69cdfd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf842220e-e0", "ovs_interfaceid": "f842220e-e045-41b3-a476-251d10fab2e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.884 2 DEBUG nova.network.os_vif_util [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:23:21,bridge_name='br-int',has_traffic_filtering=True,id=f842220e-e045-41b3-a476-251d10fab2e1,network=Network(077f8413-89f8-4043-83f9-97c1e959d04f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf842220e-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.886 2 DEBUG nova.objects.instance [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5076fb4d-3680-4a43-b137-762db8ee9de6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.902 2 DEBUG nova.virt.libvirt.driver [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] End _get_guest_xml xml=<domain type="kvm">
Oct  1 12:52:11 np0005464891 nova_compute[259907]:  <uuid>5076fb4d-3680-4a43-b137-762db8ee9de6</uuid>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:  <name>instance-0000000d</name>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <nova:name>tempest-VolumesExtendAttachedTest-instance-935559608</nova:name>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 16:52:10</nova:creationTime>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 12:52:11 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:        <nova:user uuid="a7aa882d4d1e40a9aeef4f8bbd50372a">tempest-VolumesExtendAttachedTest-92058092-project-member</nova:user>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:        <nova:project uuid="471bace20aee4e2a82d226b5f69cdfd8">tempest-VolumesExtendAttachedTest-92058092</nova:project>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <nova:root type="image" uuid="f01c1e7c-fea3-4433-a44a-d71153552c78"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:        <nova:port uuid="f842220e-e045-41b3-a476-251d10fab2e1">
Oct  1 12:52:11 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <system>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <entry name="serial">5076fb4d-3680-4a43-b137-762db8ee9de6</entry>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <entry name="uuid">5076fb4d-3680-4a43-b137-762db8ee9de6</entry>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    </system>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:  <os>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:  </clock>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/5076fb4d-3680-4a43-b137-762db8ee9de6_disk">
Oct  1 12:52:11 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:52:11 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/5076fb4d-3680-4a43-b137-762db8ee9de6_disk.config">
Oct  1 12:52:11 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:52:11 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:26:23:21"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <target dev="tapf842220e-e0"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/5076fb4d-3680-4a43-b137-762db8ee9de6/console.log" append="off"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    </serial>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <video>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 12:52:11 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 12:52:11 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:52:11 np0005464891 nova_compute[259907]: </domain>
Oct  1 12:52:11 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.904 2 DEBUG nova.compute.manager [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Preparing to wait for external event network-vif-plugged-f842220e-e045-41b3-a476-251d10fab2e1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.904 2 DEBUG oslo_concurrency.lockutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Acquiring lock "5076fb4d-3680-4a43-b137-762db8ee9de6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.904 2 DEBUG oslo_concurrency.lockutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lock "5076fb4d-3680-4a43-b137-762db8ee9de6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.905 2 DEBUG oslo_concurrency.lockutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lock "5076fb4d-3680-4a43-b137-762db8ee9de6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.905 2 DEBUG nova.virt.libvirt.vif [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:52:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-935559608',display_name='tempest-VolumesExtendAttachedTest-instance-935559608',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-935559608',id=13,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMBKSGuNZ8Zgd5elFxHZA0Lxnv6DxiDS0oT75+y2FS5fbNAyTR80lPH+T6Uxmfz/PNJJ1He3Xp3l5520kqNVdYDjlXhExX0PrjfyD6Z59A8kiEgxGP1TRUqgjtHAmnVanw==',key_name='tempest-keypair-1602620396',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='471bace20aee4e2a82d226b5f69cdfd8',ramdisk_id='',reservation_id='r-jqmeyndu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-92058092',owner_user_name='tempest-VolumesExtendAttachedTest-92058092-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:52:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a7aa882d4d1e40a9aeef4f8bbd50372a',uuid=5076fb4d-3680-4a43-b137-762db8ee9de6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f842220e-e045-41b3-a476-251d10fab2e1", "address": "fa:16:3e:26:23:21", "network": {"id": "077f8413-89f8-4043-83f9-97c1e959d04f", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-295665754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "471bace20aee4e2a82d226b5f69cdfd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf842220e-e0", "ovs_interfaceid": "f842220e-e045-41b3-a476-251d10fab2e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.905 2 DEBUG nova.network.os_vif_util [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Converting VIF {"id": "f842220e-e045-41b3-a476-251d10fab2e1", "address": "fa:16:3e:26:23:21", "network": {"id": "077f8413-89f8-4043-83f9-97c1e959d04f", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-295665754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "471bace20aee4e2a82d226b5f69cdfd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf842220e-e0", "ovs_interfaceid": "f842220e-e045-41b3-a476-251d10fab2e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.906 2 DEBUG nova.network.os_vif_util [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:23:21,bridge_name='br-int',has_traffic_filtering=True,id=f842220e-e045-41b3-a476-251d10fab2e1,network=Network(077f8413-89f8-4043-83f9-97c1e959d04f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf842220e-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.906 2 DEBUG os_vif [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:23:21,bridge_name='br-int',has_traffic_filtering=True,id=f842220e-e045-41b3-a476-251d10fab2e1,network=Network(077f8413-89f8-4043-83f9-97c1e959d04f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf842220e-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.907 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.908 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.909 2 DEBUG nova.network.neutron [req-02bff5ae-a280-413c-b499-e50e8fa5a803 req-0e4b6115-2136-4171-90bd-ee8277ea9eda af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Updated VIF entry in instance network info cache for port f842220e-e045-41b3-a476-251d10fab2e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.909 2 DEBUG nova.network.neutron [req-02bff5ae-a280-413c-b499-e50e8fa5a803 req-0e4b6115-2136-4171-90bd-ee8277ea9eda af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Updating instance_info_cache with network_info: [{"id": "f842220e-e045-41b3-a476-251d10fab2e1", "address": "fa:16:3e:26:23:21", "network": {"id": "077f8413-89f8-4043-83f9-97c1e959d04f", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-295665754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "471bace20aee4e2a82d226b5f69cdfd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf842220e-e0", "ovs_interfaceid": "f842220e-e045-41b3-a476-251d10fab2e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.913 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf842220e-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.914 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf842220e-e0, col_values=(('external_ids', {'iface-id': 'f842220e-e045-41b3-a476-251d10fab2e1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:23:21', 'vm-uuid': '5076fb4d-3680-4a43-b137-762db8ee9de6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:11 np0005464891 NetworkManager[44940]: <info>  [1759337531.9179] manager: (tapf842220e-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.924 2 INFO os_vif [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:23:21,bridge_name='br-int',has_traffic_filtering=True,id=f842220e-e045-41b3-a476-251d10fab2e1,network=Network(077f8413-89f8-4043-83f9-97c1e959d04f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf842220e-e0')#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.925 2 DEBUG oslo_concurrency.lockutils [req-02bff5ae-a280-413c-b499-e50e8fa5a803 req-0e4b6115-2136-4171-90bd-ee8277ea9eda af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-5076fb4d-3680-4a43-b137-762db8ee9de6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.979 2 DEBUG nova.virt.libvirt.driver [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.980 2 DEBUG nova.virt.libvirt.driver [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.980 2 DEBUG nova.virt.libvirt.driver [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] No VIF found with MAC fa:16:3e:26:23:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:52:11 np0005464891 nova_compute[259907]: 2025-10-01 16:52:11.981 2 INFO nova.virt.libvirt.driver [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Using config drive#033[00m
Oct  1 12:52:12 np0005464891 nova_compute[259907]: 2025-10-01 16:52:12.005 2 DEBUG nova.storage.rbd_utils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] rbd image 5076fb4d-3680-4a43-b137-762db8ee9de6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:52:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:52:12
Oct  1 12:52:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:52:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:52:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', '.rgw.root', 'backups', 'default.rgw.log', 'images', '.mgr']
Oct  1 12:52:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:52:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:52:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:52:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:52:12 np0005464891 rsyslogd[1011]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 12:52:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:52:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:52:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:52:12 np0005464891 rsyslogd[1011]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 12:52:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Oct  1 12:52:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Oct  1 12:52:12 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Oct  1 12:52:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:52:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:52:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:52:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:52:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:52:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:52:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:52:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:52:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:52:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:52:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:12.455 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:52:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:12.455 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:52:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:12.456 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:52:12 np0005464891 nova_compute[259907]: 2025-10-01 16:52:12.706 2 INFO nova.virt.libvirt.driver [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Creating config drive at /var/lib/nova/instances/5076fb4d-3680-4a43-b137-762db8ee9de6/disk.config#033[00m
Oct  1 12:52:12 np0005464891 nova_compute[259907]: 2025-10-01 16:52:12.714 2 DEBUG oslo_concurrency.processutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5076fb4d-3680-4a43-b137-762db8ee9de6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj4se7lfm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:52:12 np0005464891 nova_compute[259907]: 2025-10-01 16:52:12.859 2 DEBUG oslo_concurrency.processutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5076fb4d-3680-4a43-b137-762db8ee9de6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj4se7lfm" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:52:12 np0005464891 nova_compute[259907]: 2025-10-01 16:52:12.907 2 DEBUG nova.storage.rbd_utils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] rbd image 5076fb4d-3680-4a43-b137-762db8ee9de6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:52:12 np0005464891 nova_compute[259907]: 2025-10-01 16:52:12.913 2 DEBUG oslo_concurrency.processutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5076fb4d-3680-4a43-b137-762db8ee9de6/disk.config 5076fb4d-3680-4a43-b137-762db8ee9de6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:52:12 np0005464891 podman[287745]: 2025-10-01 16:52:12.990001333 +0000 UTC m=+0.086340031 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  1 12:52:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1441: 321 pgs: 321 active+clean; 134 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 4.4 MiB/s wr, 207 op/s
Oct  1 12:52:13 np0005464891 nova_compute[259907]: 2025-10-01 16:52:13.255 2 DEBUG oslo_concurrency.processutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5076fb4d-3680-4a43-b137-762db8ee9de6/disk.config 5076fb4d-3680-4a43-b137-762db8ee9de6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:52:13 np0005464891 nova_compute[259907]: 2025-10-01 16:52:13.256 2 INFO nova.virt.libvirt.driver [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Deleting local config drive /var/lib/nova/instances/5076fb4d-3680-4a43-b137-762db8ee9de6/disk.config because it was imported into RBD.#033[00m
Oct  1 12:52:13 np0005464891 kernel: tapf842220e-e0: entered promiscuous mode
Oct  1 12:52:13 np0005464891 NetworkManager[44940]: <info>  [1759337533.3305] manager: (tapf842220e-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Oct  1 12:52:13 np0005464891 nova_compute[259907]: 2025-10-01 16:52:13.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:13 np0005464891 ovn_controller[152409]: 2025-10-01T16:52:13Z|00124|binding|INFO|Claiming lport f842220e-e045-41b3-a476-251d10fab2e1 for this chassis.
Oct  1 12:52:13 np0005464891 ovn_controller[152409]: 2025-10-01T16:52:13Z|00125|binding|INFO|f842220e-e045-41b3-a476-251d10fab2e1: Claiming fa:16:3e:26:23:21 10.100.0.12
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.351 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:23:21 10.100.0.12'], port_security=['fa:16:3e:26:23:21 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5076fb4d-3680-4a43-b137-762db8ee9de6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-077f8413-89f8-4043-83f9-97c1e959d04f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '471bace20aee4e2a82d226b5f69cdfd8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a76c9ec2-ad66-423d-8fe0-3d505aabf592', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1ca07cc-10ad-454f-ae6c-4a35cf8c54ca, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=f842220e-e045-41b3-a476-251d10fab2e1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.353 162546 INFO neutron.agent.ovn.metadata.agent [-] Port f842220e-e045-41b3-a476-251d10fab2e1 in datapath 077f8413-89f8-4043-83f9-97c1e959d04f bound to our chassis#033[00m
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.356 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 077f8413-89f8-4043-83f9-97c1e959d04f#033[00m
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.375 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[684ed3ef-448e-4236-a870-2201f6894652]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.376 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap077f8413-81 in ovnmeta-077f8413-89f8-4043-83f9-97c1e959d04f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 12:52:13 np0005464891 systemd-udevd[287815]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.378 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap077f8413-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.378 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[9d14d1ad-d613-435f-bd31-67fb015accb0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.379 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[36011ba0-5c66-4f4d-a0ec-bd718cb235c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:13 np0005464891 systemd-machined[214891]: New machine qemu-13-instance-0000000d.
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.393 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[26dfa551-dff8-4479-abe9-f43e32c61900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:13 np0005464891 NetworkManager[44940]: <info>  [1759337533.3948] device (tapf842220e-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:52:13 np0005464891 NetworkManager[44940]: <info>  [1759337533.3961] device (tapf842220e-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 12:52:13 np0005464891 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.420 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[fd365bc8-ace3-4fa2-b557-68769cc1c0a8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:13 np0005464891 nova_compute[259907]: 2025-10-01 16:52:13.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:13 np0005464891 ovn_controller[152409]: 2025-10-01T16:52:13Z|00126|binding|INFO|Setting lport f842220e-e045-41b3-a476-251d10fab2e1 ovn-installed in OVS
Oct  1 12:52:13 np0005464891 ovn_controller[152409]: 2025-10-01T16:52:13Z|00127|binding|INFO|Setting lport f842220e-e045-41b3-a476-251d10fab2e1 up in Southbound
Oct  1 12:52:13 np0005464891 nova_compute[259907]: 2025-10-01 16:52:13.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.454 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[ad8674d3-827e-4f85-883f-7a4b802c0a3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.460 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[4886db13-07e4-400f-a4e5-251ba805ad09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:13 np0005464891 NetworkManager[44940]: <info>  [1759337533.4618] manager: (tap077f8413-80): new Veth device (/org/freedesktop/NetworkManager/Devices/77)
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.497 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[c7c6c3fe-292e-470b-b06b-d51216b6cb6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.501 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[f2804c76-1873-4793-bae6-7073da8053f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:13 np0005464891 NetworkManager[44940]: <info>  [1759337533.5267] device (tap077f8413-80): carrier: link connected
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.531 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[0bb1d381-d26e-4973-a4ef-1a2c79e78eb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.564 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[567e5ce1-df0e-4a61-84a6-e550406892a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap077f8413-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:5c:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448209, 'reachable_time': 18118, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287848, 'error': None, 'target': 'ovnmeta-077f8413-89f8-4043-83f9-97c1e959d04f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.584 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[3b5c2386-352e-48e7-85ac-5edca882cf10]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe99:5c27'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448209, 'tstamp': 448209}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287849, 'error': None, 'target': 'ovnmeta-077f8413-89f8-4043-83f9-97c1e959d04f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.606 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[c4d0779c-2e21-413f-bfdd-7252b226b95e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap077f8413-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:5c:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448209, 'reachable_time': 18118, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287850, 'error': None, 'target': 'ovnmeta-077f8413-89f8-4043-83f9-97c1e959d04f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.640 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[77940d93-1734-4d0d-83f5-bd1156dee6c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.718 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[1dcb27ef-1889-444d-af2c-42d48affaf8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.720 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap077f8413-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.720 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.720 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap077f8413-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:52:13 np0005464891 nova_compute[259907]: 2025-10-01 16:52:13.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:13 np0005464891 kernel: tap077f8413-80: entered promiscuous mode
Oct  1 12:52:13 np0005464891 NetworkManager[44940]: <info>  [1759337533.7236] manager: (tap077f8413-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Oct  1 12:52:13 np0005464891 nova_compute[259907]: 2025-10-01 16:52:13.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.725 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap077f8413-80, col_values=(('external_ids', {'iface-id': '3bab54fe-c610-441d-9d95-22bd293c6a2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:52:13 np0005464891 nova_compute[259907]: 2025-10-01 16:52:13.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:13 np0005464891 ovn_controller[152409]: 2025-10-01T16:52:13Z|00128|binding|INFO|Releasing lport 3bab54fe-c610-441d-9d95-22bd293c6a2a from this chassis (sb_readonly=0)
Oct  1 12:52:13 np0005464891 nova_compute[259907]: 2025-10-01 16:52:13.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.747 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/077f8413-89f8-4043-83f9-97c1e959d04f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/077f8413-89f8-4043-83f9-97c1e959d04f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.748 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[fad4c50b-f7c3-4d9b-9fc2-34b9cd18b4f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.748 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-077f8413-89f8-4043-83f9-97c1e959d04f
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/077f8413-89f8-4043-83f9-97c1e959d04f.pid.haproxy
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID 077f8413-89f8-4043-83f9-97c1e959d04f
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 12:52:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:13.749 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-077f8413-89f8-4043-83f9-97c1e959d04f', 'env', 'PROCESS_TAG=haproxy-077f8413-89f8-4043-83f9-97c1e959d04f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/077f8413-89f8-4043-83f9-97c1e959d04f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.145 2 DEBUG nova.compute.manager [req-e174942e-18bd-4626-a714-ca2fb9c8496d req-336b2c82-e142-4ff5-8dc1-27c680397e0a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Received event network-vif-plugged-f842220e-e045-41b3-a476-251d10fab2e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.147 2 DEBUG oslo_concurrency.lockutils [req-e174942e-18bd-4626-a714-ca2fb9c8496d req-336b2c82-e142-4ff5-8dc1-27c680397e0a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "5076fb4d-3680-4a43-b137-762db8ee9de6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.147 2 DEBUG oslo_concurrency.lockutils [req-e174942e-18bd-4626-a714-ca2fb9c8496d req-336b2c82-e142-4ff5-8dc1-27c680397e0a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "5076fb4d-3680-4a43-b137-762db8ee9de6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.147 2 DEBUG oslo_concurrency.lockutils [req-e174942e-18bd-4626-a714-ca2fb9c8496d req-336b2c82-e142-4ff5-8dc1-27c680397e0a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "5076fb4d-3680-4a43-b137-762db8ee9de6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.148 2 DEBUG nova.compute.manager [req-e174942e-18bd-4626-a714-ca2fb9c8496d req-336b2c82-e142-4ff5-8dc1-27c680397e0a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Processing event network-vif-plugged-f842220e-e045-41b3-a476-251d10fab2e1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 12:52:14 np0005464891 podman[287924]: 2025-10-01 16:52:14.213478256 +0000 UTC m=+0.060756895 container create 6e205f7ff5b145fc190282f7a62b093cb59141cce8a824c5887754e7f8eebab3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-077f8413-89f8-4043-83f9-97c1e959d04f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:52:14 np0005464891 systemd[1]: Started libpod-conmon-6e205f7ff5b145fc190282f7a62b093cb59141cce8a824c5887754e7f8eebab3.scope.
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:14 np0005464891 podman[287924]: 2025-10-01 16:52:14.186810541 +0000 UTC m=+0.034089180 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 12:52:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:14.282 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:52:14 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:52:14 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26e6aef1525df43c1e9c6155538f01c46e682439cc5874930662430ae3abb423/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 12:52:14 np0005464891 podman[287924]: 2025-10-01 16:52:14.326510232 +0000 UTC m=+0.173788871 container init 6e205f7ff5b145fc190282f7a62b093cb59141cce8a824c5887754e7f8eebab3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-077f8413-89f8-4043-83f9-97c1e959d04f, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 12:52:14 np0005464891 podman[287924]: 2025-10-01 16:52:14.331708105 +0000 UTC m=+0.178986724 container start 6e205f7ff5b145fc190282f7a62b093cb59141cce8a824c5887754e7f8eebab3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-077f8413-89f8-4043-83f9-97c1e959d04f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:52:14 np0005464891 neutron-haproxy-ovnmeta-077f8413-89f8-4043-83f9-97c1e959d04f[287940]: [NOTICE]   (287944) : New worker (287946) forked
Oct  1 12:52:14 np0005464891 neutron-haproxy-ovnmeta-077f8413-89f8-4043-83f9-97c1e959d04f[287940]: [NOTICE]   (287944) : Loading success.
Oct  1 12:52:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:14.396 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 12:52:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.422 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337534.4223313, 5076fb4d-3680-4a43-b137-762db8ee9de6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.423 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] VM Started (Lifecycle Event)#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.425 2 DEBUG nova.compute.manager [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.436 2 DEBUG nova.virt.libvirt.driver [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.440 2 INFO nova.virt.libvirt.driver [-] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Instance spawned successfully.#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.440 2 DEBUG nova.virt.libvirt.driver [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.454 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.457 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:52:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.527 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.527 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337534.423297, 5076fb4d-3680-4a43-b137-762db8ee9de6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.528 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] VM Paused (Lifecycle Event)#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.532 2 DEBUG nova.virt.libvirt.driver [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.532 2 DEBUG nova.virt.libvirt.driver [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.533 2 DEBUG nova.virt.libvirt.driver [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.533 2 DEBUG nova.virt.libvirt.driver [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.534 2 DEBUG nova.virt.libvirt.driver [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.534 2 DEBUG nova.virt.libvirt.driver [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:52:14 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.600 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:52:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:52:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1592925260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.604 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337534.428374, 5076fb4d-3680-4a43-b137-762db8ee9de6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.605 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] VM Resumed (Lifecycle Event)#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.678 2 INFO nova.compute.manager [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Took 7.29 seconds to spawn the instance on the hypervisor.#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.680 2 DEBUG nova.compute.manager [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.684 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.693 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.772 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.808 2 INFO nova.compute.manager [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Took 8.59 seconds to build instance.#033[00m
Oct  1 12:52:14 np0005464891 nova_compute[259907]: 2025-10-01 16:52:14.834 2 DEBUG oslo_concurrency.lockutils [None req-ae71dc49-4de7-4c68-97d2-a6d5eb850c39 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lock "5076fb4d-3680-4a43-b137-762db8ee9de6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:52:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1443: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 134 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 141 KiB/s rd, 44 KiB/s wr, 155 op/s
Oct  1 12:52:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:52:15 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1389151728' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:52:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:52:15 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1389151728' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:52:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Oct  1 12:52:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Oct  1 12:52:15 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Oct  1 12:52:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:52:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Oct  1 12:52:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Oct  1 12:52:15 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Oct  1 12:52:16 np0005464891 nova_compute[259907]: 2025-10-01 16:52:16.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:16 np0005464891 nova_compute[259907]: 2025-10-01 16:52:16.251 2 DEBUG nova.compute.manager [req-78af7d44-d78e-44ee-ac19-600d757f579a req-1d749694-a52e-4f1b-9875-ed92b086c0cd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Received event network-vif-plugged-f842220e-e045-41b3-a476-251d10fab2e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:52:16 np0005464891 nova_compute[259907]: 2025-10-01 16:52:16.252 2 DEBUG oslo_concurrency.lockutils [req-78af7d44-d78e-44ee-ac19-600d757f579a req-1d749694-a52e-4f1b-9875-ed92b086c0cd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "5076fb4d-3680-4a43-b137-762db8ee9de6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:52:16 np0005464891 nova_compute[259907]: 2025-10-01 16:52:16.252 2 DEBUG oslo_concurrency.lockutils [req-78af7d44-d78e-44ee-ac19-600d757f579a req-1d749694-a52e-4f1b-9875-ed92b086c0cd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "5076fb4d-3680-4a43-b137-762db8ee9de6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:52:16 np0005464891 nova_compute[259907]: 2025-10-01 16:52:16.252 2 DEBUG oslo_concurrency.lockutils [req-78af7d44-d78e-44ee-ac19-600d757f579a req-1d749694-a52e-4f1b-9875-ed92b086c0cd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "5076fb4d-3680-4a43-b137-762db8ee9de6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:52:16 np0005464891 nova_compute[259907]: 2025-10-01 16:52:16.252 2 DEBUG nova.compute.manager [req-78af7d44-d78e-44ee-ac19-600d757f579a req-1d749694-a52e-4f1b-9875-ed92b086c0cd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] No waiting events found dispatching network-vif-plugged-f842220e-e045-41b3-a476-251d10fab2e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:52:16 np0005464891 nova_compute[259907]: 2025-10-01 16:52:16.253 2 WARNING nova.compute.manager [req-78af7d44-d78e-44ee-ac19-600d757f579a req-1d749694-a52e-4f1b-9875-ed92b086c0cd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Received unexpected event network-vif-plugged-f842220e-e045-41b3-a476-251d10fab2e1 for instance with vm_state active and task_state None.#033[00m
Oct  1 12:52:16 np0005464891 nova_compute[259907]: 2025-10-01 16:52:16.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:16 np0005464891 nova_compute[259907]: 2025-10-01 16:52:16.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:16 np0005464891 NetworkManager[44940]: <info>  [1759337536.9386] manager: (patch-br-int-to-provnet-ee30e212-e482-494e-8716-bb3c2ef00bd5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Oct  1 12:52:16 np0005464891 NetworkManager[44940]: <info>  [1759337536.9413] manager: (patch-provnet-ee30e212-e482-494e-8716-bb3c2ef00bd5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Oct  1 12:52:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1446: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 134 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 44 KiB/s wr, 156 op/s
Oct  1 12:52:17 np0005464891 nova_compute[259907]: 2025-10-01 16:52:17.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:17 np0005464891 ovn_controller[152409]: 2025-10-01T16:52:17Z|00129|binding|INFO|Releasing lport 3bab54fe-c610-441d-9d95-22bd293c6a2a from this chassis (sb_readonly=0)
Oct  1 12:52:17 np0005464891 nova_compute[259907]: 2025-10-01 16:52:17.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:17 np0005464891 nova_compute[259907]: 2025-10-01 16:52:17.141 2 DEBUG nova.compute.manager [req-ccfed0b7-4e8f-4fc7-9d2e-2c5e5312d930 req-a79fe169-dbc8-421f-9fac-7988961f757f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Received event network-changed-f842220e-e045-41b3-a476-251d10fab2e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:52:17 np0005464891 nova_compute[259907]: 2025-10-01 16:52:17.141 2 DEBUG nova.compute.manager [req-ccfed0b7-4e8f-4fc7-9d2e-2c5e5312d930 req-a79fe169-dbc8-421f-9fac-7988961f757f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Refreshing instance network info cache due to event network-changed-f842220e-e045-41b3-a476-251d10fab2e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:52:17 np0005464891 nova_compute[259907]: 2025-10-01 16:52:17.142 2 DEBUG oslo_concurrency.lockutils [req-ccfed0b7-4e8f-4fc7-9d2e-2c5e5312d930 req-a79fe169-dbc8-421f-9fac-7988961f757f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-5076fb4d-3680-4a43-b137-762db8ee9de6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:52:17 np0005464891 nova_compute[259907]: 2025-10-01 16:52:17.142 2 DEBUG oslo_concurrency.lockutils [req-ccfed0b7-4e8f-4fc7-9d2e-2c5e5312d930 req-a79fe169-dbc8-421f-9fac-7988961f757f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-5076fb4d-3680-4a43-b137-762db8ee9de6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:52:17 np0005464891 nova_compute[259907]: 2025-10-01 16:52:17.142 2 DEBUG nova.network.neutron [req-ccfed0b7-4e8f-4fc7-9d2e-2c5e5312d930 req-a79fe169-dbc8-421f-9fac-7988961f757f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Refreshing network info cache for port f842220e-e045-41b3-a476-251d10fab2e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:52:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:52:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2442430015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:52:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:52:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3678589730' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:52:18 np0005464891 nova_compute[259907]: 2025-10-01 16:52:18.468 2 DEBUG nova.network.neutron [req-ccfed0b7-4e8f-4fc7-9d2e-2c5e5312d930 req-a79fe169-dbc8-421f-9fac-7988961f757f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Updated VIF entry in instance network info cache for port f842220e-e045-41b3-a476-251d10fab2e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:52:18 np0005464891 nova_compute[259907]: 2025-10-01 16:52:18.469 2 DEBUG nova.network.neutron [req-ccfed0b7-4e8f-4fc7-9d2e-2c5e5312d930 req-a79fe169-dbc8-421f-9fac-7988961f757f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Updating instance_info_cache with network_info: [{"id": "f842220e-e045-41b3-a476-251d10fab2e1", "address": "fa:16:3e:26:23:21", "network": {"id": "077f8413-89f8-4043-83f9-97c1e959d04f", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-295665754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "471bace20aee4e2a82d226b5f69cdfd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf842220e-e0", "ovs_interfaceid": "f842220e-e045-41b3-a476-251d10fab2e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:52:18 np0005464891 nova_compute[259907]: 2025-10-01 16:52:18.502 2 DEBUG oslo_concurrency.lockutils [req-ccfed0b7-4e8f-4fc7-9d2e-2c5e5312d930 req-a79fe169-dbc8-421f-9fac-7988961f757f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-5076fb4d-3680-4a43-b137-762db8ee9de6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:52:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Oct  1 12:52:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Oct  1 12:52:18 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Oct  1 12:52:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1448: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 134 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 46 KiB/s wr, 224 op/s
Oct  1 12:52:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Oct  1 12:52:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Oct  1 12:52:19 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Oct  1 12:52:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:52:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2771497360' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:52:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:52:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2771497360' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:52:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:52:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1450: 321 pgs: 321 active+clean; 134 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 6.8 KiB/s wr, 282 op/s
Oct  1 12:52:21 np0005464891 nova_compute[259907]: 2025-10-01 16:52:21.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Oct  1 12:52:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Oct  1 12:52:21 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Oct  1 12:52:21 np0005464891 nova_compute[259907]: 2025-10-01 16:52:21.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:52:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3167285301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:52:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:52:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:52:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:52:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:52:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003487950323956502 of space, bias 1.0, pg target 0.10463850971869505 quantized to 32 (current 32)
Oct  1 12:52:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:52:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003480955346096243 of space, bias 1.0, pg target 0.10442866038288728 quantized to 32 (current 32)
Oct  1 12:52:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:52:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.1446327407696817e-06 of space, bias 1.0, pg target 0.0003433898222309045 quantized to 32 (current 32)
Oct  1 12:52:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:52:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  1 12:52:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:52:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:52:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:52:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:52:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:52:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:52:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:52:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:52:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:52:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:52:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:52:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:52:22 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:22.399 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:52:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Oct  1 12:52:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Oct  1 12:52:22 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Oct  1 12:52:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1453: 321 pgs: 321 active+clean; 134 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 6.5 KiB/s wr, 306 op/s
Oct  1 12:52:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Oct  1 12:52:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Oct  1 12:52:23 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Oct  1 12:52:23 np0005464891 podman[287956]: 2025-10-01 16:52:23.959596809 +0000 UTC m=+0.064518670 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  1 12:52:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1455: 321 pgs: 321 active+clean; 134 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 8.9 KiB/s wr, 352 op/s
Oct  1 12:52:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:52:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3642284014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:52:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Oct  1 12:52:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Oct  1 12:52:25 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
Oct  1 12:52:25 np0005464891 nova_compute[259907]: 2025-10-01 16:52:25.800 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:52:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:52:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Oct  1 12:52:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Oct  1 12:52:25 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Oct  1 12:52:26 np0005464891 nova_compute[259907]: 2025-10-01 16:52:26.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:26 np0005464891 nova_compute[259907]: 2025-10-01 16:52:26.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:52:26 np0005464891 nova_compute[259907]: 2025-10-01 16:52:26.804 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 12:52:26 np0005464891 nova_compute[259907]: 2025-10-01 16:52:26.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Oct  1 12:52:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Oct  1 12:52:26 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Oct  1 12:52:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1459: 321 pgs: 321 active+clean; 134 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 4.7 KiB/s wr, 136 op/s
Oct  1 12:52:27 np0005464891 ovn_controller[152409]: 2025-10-01T16:52:27Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:23:21 10.100.0.12
Oct  1 12:52:27 np0005464891 ovn_controller[152409]: 2025-10-01T16:52:27Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:23:21 10.100.0.12
Oct  1 12:52:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Oct  1 12:52:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Oct  1 12:52:27 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Oct  1 12:52:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:52:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2418121820' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:52:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:52:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2418121820' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:52:28 np0005464891 nova_compute[259907]: 2025-10-01 16:52:28.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:52:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Oct  1 12:52:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Oct  1 12:52:29 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Oct  1 12:52:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1462: 321 pgs: 321 active+clean; 137 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 526 KiB/s wr, 110 op/s
Oct  1 12:52:29 np0005464891 nova_compute[259907]: 2025-10-01 16:52:29.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:52:29 np0005464891 nova_compute[259907]: 2025-10-01 16:52:29.805 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 12:52:29 np0005464891 nova_compute[259907]: 2025-10-01 16:52:29.805 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 12:52:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Oct  1 12:52:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Oct  1 12:52:30 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Oct  1 12:52:30 np0005464891 nova_compute[259907]: 2025-10-01 16:52:30.068 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "refresh_cache-5076fb4d-3680-4a43-b137-762db8ee9de6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:52:30 np0005464891 nova_compute[259907]: 2025-10-01 16:52:30.069 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquired lock "refresh_cache-5076fb4d-3680-4a43-b137-762db8ee9de6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:52:30 np0005464891 nova_compute[259907]: 2025-10-01 16:52:30.069 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  1 12:52:30 np0005464891 nova_compute[259907]: 2025-10-01 16:52:30.069 2 DEBUG nova.objects.instance [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5076fb4d-3680-4a43-b137-762db8ee9de6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:52:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:52:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Oct  1 12:52:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Oct  1 12:52:30 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Oct  1 12:52:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1465: 321 pgs: 321 active+clean; 167 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 6.4 MiB/s wr, 425 op/s
Oct  1 12:52:31 np0005464891 nova_compute[259907]: 2025-10-01 16:52:31.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:31 np0005464891 nova_compute[259907]: 2025-10-01 16:52:31.475 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Updating instance_info_cache with network_info: [{"id": "f842220e-e045-41b3-a476-251d10fab2e1", "address": "fa:16:3e:26:23:21", "network": {"id": "077f8413-89f8-4043-83f9-97c1e959d04f", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-295665754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "471bace20aee4e2a82d226b5f69cdfd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf842220e-e0", "ovs_interfaceid": "f842220e-e045-41b3-a476-251d10fab2e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:52:31 np0005464891 nova_compute[259907]: 2025-10-01 16:52:31.492 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Releasing lock "refresh_cache-5076fb4d-3680-4a43-b137-762db8ee9de6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:52:31 np0005464891 nova_compute[259907]: 2025-10-01 16:52:31.493 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  1 12:52:31 np0005464891 nova_compute[259907]: 2025-10-01 16:52:31.493 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:52:31 np0005464891 nova_compute[259907]: 2025-10-01 16:52:31.494 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:52:31 np0005464891 nova_compute[259907]: 2025-10-01 16:52:31.494 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:52:31 np0005464891 nova_compute[259907]: 2025-10-01 16:52:31.519 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:52:31 np0005464891 nova_compute[259907]: 2025-10-01 16:52:31.520 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:52:31 np0005464891 nova_compute[259907]: 2025-10-01 16:52:31.520 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:52:31 np0005464891 nova_compute[259907]: 2025-10-01 16:52:31.520 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 12:52:31 np0005464891 nova_compute[259907]: 2025-10-01 16:52:31.521 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:52:31 np0005464891 nova_compute[259907]: 2025-10-01 16:52:31.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:52:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3126279503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:52:32 np0005464891 nova_compute[259907]: 2025-10-01 16:52:32.002 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:52:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Oct  1 12:52:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Oct  1 12:52:32 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Oct  1 12:52:32 np0005464891 nova_compute[259907]: 2025-10-01 16:52:32.244 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:52:32 np0005464891 nova_compute[259907]: 2025-10-01 16:52:32.244 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:52:32 np0005464891 nova_compute[259907]: 2025-10-01 16:52:32.412 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:52:32 np0005464891 nova_compute[259907]: 2025-10-01 16:52:32.413 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4325MB free_disk=59.9428596496582GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 12:52:32 np0005464891 nova_compute[259907]: 2025-10-01 16:52:32.413 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:52:32 np0005464891 nova_compute[259907]: 2025-10-01 16:52:32.414 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:52:32 np0005464891 nova_compute[259907]: 2025-10-01 16:52:32.521 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance 5076fb4d-3680-4a43-b137-762db8ee9de6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 12:52:32 np0005464891 nova_compute[259907]: 2025-10-01 16:52:32.522 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 12:52:32 np0005464891 nova_compute[259907]: 2025-10-01 16:52:32.523 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 12:52:32 np0005464891 nova_compute[259907]: 2025-10-01 16:52:32.550 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Refreshing inventories for resource provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  1 12:52:32 np0005464891 nova_compute[259907]: 2025-10-01 16:52:32.572 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Updating ProviderTree inventory for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  1 12:52:32 np0005464891 nova_compute[259907]: 2025-10-01 16:52:32.573 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Updating inventory in ProviderTree for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  1 12:52:32 np0005464891 nova_compute[259907]: 2025-10-01 16:52:32.599 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Refreshing aggregate associations for resource provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  1 12:52:32 np0005464891 nova_compute[259907]: 2025-10-01 16:52:32.638 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Refreshing trait associations for resource provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8, traits: HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSSE3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_AVX2,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  1 12:52:32 np0005464891 nova_compute[259907]: 2025-10-01 16:52:32.690 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:52:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1467: 321 pgs: 321 active+clean; 167 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 6.0 MiB/s wr, 393 op/s
Oct  1 12:52:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Oct  1 12:52:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Oct  1 12:52:33 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Oct  1 12:52:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:52:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1187072463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:52:33 np0005464891 nova_compute[259907]: 2025-10-01 16:52:33.199 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:52:33 np0005464891 nova_compute[259907]: 2025-10-01 16:52:33.206 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:52:33 np0005464891 nova_compute[259907]: 2025-10-01 16:52:33.222 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:52:33 np0005464891 nova_compute[259907]: 2025-10-01 16:52:33.245 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 12:52:33 np0005464891 nova_compute[259907]: 2025-10-01 16:52:33.246 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:52:33 np0005464891 nova_compute[259907]: 2025-10-01 16:52:33.557 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:52:34 np0005464891 nova_compute[259907]: 2025-10-01 16:52:34.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:52:34 np0005464891 podman[288021]: 2025-10-01 16:52:34.990049026 +0000 UTC m=+0.102804539 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:52:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1469: 321 pgs: 321 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 33 KiB/s wr, 100 op/s
Oct  1 12:52:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:52:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Oct  1 12:52:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Oct  1 12:52:36 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Oct  1 12:52:36 np0005464891 nova_compute[259907]: 2025-10-01 16:52:36.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:52:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/854175788' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:52:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:52:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/854175788' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:52:36 np0005464891 nova_compute[259907]: 2025-10-01 16:52:36.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1471: 321 pgs: 321 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 28 KiB/s wr, 84 op/s
Oct  1 12:52:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Oct  1 12:52:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Oct  1 12:52:37 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Oct  1 12:52:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:52:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2036503091' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:52:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Oct  1 12:52:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Oct  1 12:52:38 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Oct  1 12:52:38 np0005464891 podman[288048]: 2025-10-01 16:52:38.964679394 +0000 UTC m=+0.073633622 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Oct  1 12:52:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1474: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 27 KiB/s wr, 79 op/s
Oct  1 12:52:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e330 do_prune osdmap full prune enabled
Oct  1 12:52:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e331 e331: 3 total, 3 up, 3 in
Oct  1 12:52:39 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e331: 3 total, 3 up, 3 in
Oct  1 12:52:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e331 do_prune osdmap full prune enabled
Oct  1 12:52:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e332 e332: 3 total, 3 up, 3 in
Oct  1 12:52:40 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e332: 3 total, 3 up, 3 in
Oct  1 12:52:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1477: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 6.5 KiB/s wr, 136 op/s
Oct  1 12:52:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:52:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e332 do_prune osdmap full prune enabled
Oct  1 12:52:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e333 e333: 3 total, 3 up, 3 in
Oct  1 12:52:41 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e333: 3 total, 3 up, 3 in
Oct  1 12:52:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:52:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/617647740' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:52:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:52:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/617647740' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:52:41 np0005464891 nova_compute[259907]: 2025-10-01 16:52:41.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:41 np0005464891 nova_compute[259907]: 2025-10-01 16:52:41.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:52:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:52:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:52:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:52:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:52:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:52:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e333 do_prune osdmap full prune enabled
Oct  1 12:52:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e334 e334: 3 total, 3 up, 3 in
Oct  1 12:52:42 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e334: 3 total, 3 up, 3 in
Oct  1 12:52:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:52:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/47629806' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:52:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:52:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/47629806' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:52:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1480: 321 pgs: 321 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 6.0 KiB/s wr, 102 op/s
Oct  1 12:52:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:52:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2160832941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:52:43 np0005464891 podman[288068]: 2025-10-01 16:52:43.957314869 +0000 UTC m=+0.067784073 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  1 12:52:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e334 do_prune osdmap full prune enabled
Oct  1 12:52:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e335 e335: 3 total, 3 up, 3 in
Oct  1 12:52:44 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e335: 3 total, 3 up, 3 in
Oct  1 12:52:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1482: 321 pgs: 321 active+clean; 167 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 6.6 KiB/s wr, 154 op/s
Oct  1 12:52:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:52:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2931911636' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:52:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:52:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2931911636' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:52:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e335 do_prune osdmap full prune enabled
Oct  1 12:52:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e336 e336: 3 total, 3 up, 3 in
Oct  1 12:52:45 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e336: 3 total, 3 up, 3 in
Oct  1 12:52:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:52:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e336 do_prune osdmap full prune enabled
Oct  1 12:52:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e337 e337: 3 total, 3 up, 3 in
Oct  1 12:52:46 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e337: 3 total, 3 up, 3 in
Oct  1 12:52:46 np0005464891 nova_compute[259907]: 2025-10-01 16:52:46.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:46 np0005464891 nova_compute[259907]: 2025-10-01 16:52:46.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:46 np0005464891 ovn_controller[152409]: 2025-10-01T16:52:46Z|00130|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory
Oct  1 12:52:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1485: 321 pgs: 321 active+clean; 167 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 6.7 KiB/s wr, 155 op/s
Oct  1 12:52:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e337 do_prune osdmap full prune enabled
Oct  1 12:52:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e338 e338: 3 total, 3 up, 3 in
Oct  1 12:52:47 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e338: 3 total, 3 up, 3 in
Oct  1 12:52:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:52:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2120907849' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:52:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:52:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2120907849' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:52:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1487: 321 pgs: 321 active+clean; 167 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 124 KiB/s rd, 7.6 KiB/s wr, 165 op/s
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.181 2 DEBUG oslo_concurrency.lockutils [None req-64e46f9e-9cfa-4be0-a0f4-a9471fc88a51 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Acquiring lock "5076fb4d-3680-4a43-b137-762db8ee9de6" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.182 2 DEBUG oslo_concurrency.lockutils [None req-64e46f9e-9cfa-4be0-a0f4-a9471fc88a51 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lock "5076fb4d-3680-4a43-b137-762db8ee9de6" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.200 2 DEBUG nova.objects.instance [None req-64e46f9e-9cfa-4be0-a0f4-a9471fc88a51 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lazy-loading 'flavor' on Instance uuid 5076fb4d-3680-4a43-b137-762db8ee9de6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.223 2 INFO nova.virt.libvirt.driver [None req-64e46f9e-9cfa-4be0-a0f4-a9471fc88a51 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Ignoring supplied device name: /dev/vdb#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.258 2 DEBUG oslo_concurrency.lockutils [None req-64e46f9e-9cfa-4be0-a0f4-a9471fc88a51 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lock "5076fb4d-3680-4a43-b137-762db8ee9de6" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.648 2 DEBUG oslo_concurrency.lockutils [None req-64e46f9e-9cfa-4be0-a0f4-a9471fc88a51 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Acquiring lock "5076fb4d-3680-4a43-b137-762db8ee9de6" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.649 2 DEBUG oslo_concurrency.lockutils [None req-64e46f9e-9cfa-4be0-a0f4-a9471fc88a51 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lock "5076fb4d-3680-4a43-b137-762db8ee9de6" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.649 2 INFO nova.compute.manager [None req-64e46f9e-9cfa-4be0-a0f4-a9471fc88a51 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Attaching volume f800e7c1-fdee-454a-a810-e7f3d43f1df4 to /dev/vdb#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.874 2 DEBUG os_brick.utils [None req-64e46f9e-9cfa-4be0-a0f4-a9471fc88a51 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.876 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.889 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.889 741 DEBUG oslo.privsep.daemon [-] privsep: reply[3a9dd865-47cd-4b78-95b5-2f42cf9d4778]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.891 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.922 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.922 741 DEBUG oslo.privsep.daemon [-] privsep: reply[31725642-3b59-4dc0-9134-8bc47907a81b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.924 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.933 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.934 741 DEBUG oslo.privsep.daemon [-] privsep: reply[7e44d655-a46e-49b1-9bab-e7d045e2ff9b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.936 741 DEBUG oslo.privsep.daemon [-] privsep: reply[2ddfb5b8-2969-4723-884c-27d758e4b39f]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.936 2 DEBUG oslo_concurrency.processutils [None req-64e46f9e-9cfa-4be0-a0f4-a9471fc88a51 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.963 2 DEBUG oslo_concurrency.processutils [None req-64e46f9e-9cfa-4be0-a0f4-a9471fc88a51 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.968 2 DEBUG os_brick.initiator.connectors.lightos [None req-64e46f9e-9cfa-4be0-a0f4-a9471fc88a51 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.968 2 DEBUG os_brick.initiator.connectors.lightos [None req-64e46f9e-9cfa-4be0-a0f4-a9471fc88a51 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.969 2 DEBUG os_brick.initiator.connectors.lightos [None req-64e46f9e-9cfa-4be0-a0f4-a9471fc88a51 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.970 2 DEBUG os_brick.utils [None req-64e46f9e-9cfa-4be0-a0f4-a9471fc88a51 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] <== get_connector_properties: return (94ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 12:52:49 np0005464891 nova_compute[259907]: 2025-10-01 16:52:49.970 2 DEBUG nova.virt.block_device [None req-64e46f9e-9cfa-4be0-a0f4-a9471fc88a51 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Updating existing volume attachment record: b172427d-6cb8-4a34-8404-ca2c393d0b73 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 12:52:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:52:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2384934938' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:52:50 np0005464891 nova_compute[259907]: 2025-10-01 16:52:50.982 2 DEBUG nova.objects.instance [None req-64e46f9e-9cfa-4be0-a0f4-a9471fc88a51 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lazy-loading 'flavor' on Instance uuid 5076fb4d-3680-4a43-b137-762db8ee9de6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:52:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1488: 321 pgs: 321 active+clean; 167 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 5.3 KiB/s wr, 61 op/s
Oct  1 12:52:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:52:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e338 do_prune osdmap full prune enabled
Oct  1 12:52:51 np0005464891 nova_compute[259907]: 2025-10-01 16:52:51.168 2 DEBUG nova.virt.libvirt.driver [None req-64e46f9e-9cfa-4be0-a0f4-a9471fc88a51 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Attempting to attach volume f800e7c1-fdee-454a-a810-e7f3d43f1df4 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  1 12:52:51 np0005464891 nova_compute[259907]: 2025-10-01 16:52:51.171 2 DEBUG nova.virt.libvirt.guest [None req-64e46f9e-9cfa-4be0-a0f4-a9471fc88a51 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] attach device xml: <disk type="network" device="disk">
Oct  1 12:52:51 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:52:51 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-f800e7c1-fdee-454a-a810-e7f3d43f1df4">
Oct  1 12:52:51 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:52:51 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:52:51 np0005464891 nova_compute[259907]:  <auth username="openstack">
Oct  1 12:52:51 np0005464891 nova_compute[259907]:    <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:52:51 np0005464891 nova_compute[259907]:  </auth>
Oct  1 12:52:51 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:52:51 np0005464891 nova_compute[259907]:  <serial>f800e7c1-fdee-454a-a810-e7f3d43f1df4</serial>
Oct  1 12:52:51 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:52:51 np0005464891 nova_compute[259907]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  1 12:52:51 np0005464891 nova_compute[259907]: 2025-10-01 16:52:51.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e339 e339: 3 total, 3 up, 3 in
Oct  1 12:52:51 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e339: 3 total, 3 up, 3 in
Oct  1 12:52:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:52:51 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1517545820' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:52:51 np0005464891 nova_compute[259907]: 2025-10-01 16:52:51.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:52 np0005464891 nova_compute[259907]: 2025-10-01 16:52:52.224 2 DEBUG nova.virt.libvirt.driver [None req-64e46f9e-9cfa-4be0-a0f4-a9471fc88a51 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:52:52 np0005464891 nova_compute[259907]: 2025-10-01 16:52:52.225 2 DEBUG nova.virt.libvirt.driver [None req-64e46f9e-9cfa-4be0-a0f4-a9471fc88a51 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:52:52 np0005464891 nova_compute[259907]: 2025-10-01 16:52:52.226 2 DEBUG nova.virt.libvirt.driver [None req-64e46f9e-9cfa-4be0-a0f4-a9471fc88a51 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:52:52 np0005464891 nova_compute[259907]: 2025-10-01 16:52:52.226 2 DEBUG nova.virt.libvirt.driver [None req-64e46f9e-9cfa-4be0-a0f4-a9471fc88a51 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] No VIF found with MAC fa:16:3e:26:23:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:52:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1490: 321 pgs: 321 active+clean; 167 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 4.7 KiB/s wr, 53 op/s
Oct  1 12:52:53 np0005464891 nova_compute[259907]: 2025-10-01 16:52:53.396 2 DEBUG oslo_concurrency.lockutils [None req-64e46f9e-9cfa-4be0-a0f4-a9471fc88a51 a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lock "5076fb4d-3680-4a43-b137-762db8ee9de6" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:52:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e339 do_prune osdmap full prune enabled
Oct  1 12:52:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e340 e340: 3 total, 3 up, 3 in
Oct  1 12:52:53 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e340: 3 total, 3 up, 3 in
Oct  1 12:52:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e340 do_prune osdmap full prune enabled
Oct  1 12:52:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e341 e341: 3 total, 3 up, 3 in
Oct  1 12:52:54 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e341: 3 total, 3 up, 3 in
Oct  1 12:52:54 np0005464891 podman[288115]: 2025-10-01 16:52:54.942293713 +0000 UTC m=+0.052644163 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  1 12:52:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1493: 321 pgs: 321 active+clean; 167 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 10 KiB/s wr, 90 op/s
Oct  1 12:52:55 np0005464891 nova_compute[259907]: 2025-10-01 16:52:55.732 2 DEBUG nova.compute.manager [req-e94bb6e6-7489-4b89-887a-1eafb4d608b5 req-41dd8cb0-de64-40e2-aa2a-f564c5c4b5f1 4a6d461bdac245229e2e40492503a6e4 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Received event volume-extended-f800e7c1-fdee-454a-a810-e7f3d43f1df4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:52:55 np0005464891 nova_compute[259907]: 2025-10-01 16:52:55.747 2 DEBUG nova.compute.manager [req-e94bb6e6-7489-4b89-887a-1eafb4d608b5 req-41dd8cb0-de64-40e2-aa2a-f564c5c4b5f1 4a6d461bdac245229e2e40492503a6e4 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Handling volume-extended event for volume f800e7c1-fdee-454a-a810-e7f3d43f1df4 extend_volume /usr/lib/python3.9/site-packages/nova/compute/manager.py:10896#033[00m
Oct  1 12:52:55 np0005464891 nova_compute[259907]: 2025-10-01 16:52:55.760 2 INFO nova.compute.manager [req-e94bb6e6-7489-4b89-887a-1eafb4d608b5 req-41dd8cb0-de64-40e2-aa2a-f564c5c4b5f1 4a6d461bdac245229e2e40492503a6e4 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Cinder extended volume f800e7c1-fdee-454a-a810-e7f3d43f1df4; extending it to detect new size#033[00m
Oct  1 12:52:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e341 do_prune osdmap full prune enabled
Oct  1 12:52:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e342 e342: 3 total, 3 up, 3 in
Oct  1 12:52:55 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e342: 3 total, 3 up, 3 in
Oct  1 12:52:55 np0005464891 nova_compute[259907]: 2025-10-01 16:52:55.915 2 DEBUG nova.virt.libvirt.driver [req-e94bb6e6-7489-4b89-887a-1eafb4d608b5 req-41dd8cb0-de64-40e2-aa2a-f564c5c4b5f1 4a6d461bdac245229e2e40492503a6e4 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Resizing target device vdb to 2147483648 _resize_attached_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2756#033[00m
Oct  1 12:52:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:52:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e342 do_prune osdmap full prune enabled
Oct  1 12:52:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e343 e343: 3 total, 3 up, 3 in
Oct  1 12:52:56 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e343: 3 total, 3 up, 3 in
Oct  1 12:52:56 np0005464891 nova_compute[259907]: 2025-10-01 16:52:56.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:52:56 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/589283492' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:52:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:52:56 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/589283492' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:52:56 np0005464891 nova_compute[259907]: 2025-10-01 16:52:56.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:57 np0005464891 nova_compute[259907]: 2025-10-01 16:52:57.012 2 DEBUG oslo_concurrency.lockutils [None req-ceaba222-027f-4efd-adcd-2723ed0e6afd a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Acquiring lock "5076fb4d-3680-4a43-b137-762db8ee9de6" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:52:57 np0005464891 nova_compute[259907]: 2025-10-01 16:52:57.013 2 DEBUG oslo_concurrency.lockutils [None req-ceaba222-027f-4efd-adcd-2723ed0e6afd a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lock "5076fb4d-3680-4a43-b137-762db8ee9de6" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:52:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1496: 321 pgs: 321 active+clean; 167 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 9.0 KiB/s wr, 69 op/s
Oct  1 12:52:57 np0005464891 nova_compute[259907]: 2025-10-01 16:52:57.039 2 INFO nova.compute.manager [None req-ceaba222-027f-4efd-adcd-2723ed0e6afd a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Detaching volume f800e7c1-fdee-454a-a810-e7f3d43f1df4#033[00m
Oct  1 12:52:57 np0005464891 nova_compute[259907]: 2025-10-01 16:52:57.206 2 INFO nova.virt.block_device [None req-ceaba222-027f-4efd-adcd-2723ed0e6afd a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Attempting to driver detach volume f800e7c1-fdee-454a-a810-e7f3d43f1df4 from mountpoint /dev/vdb#033[00m
Oct  1 12:52:57 np0005464891 nova_compute[259907]: 2025-10-01 16:52:57.217 2 DEBUG nova.virt.libvirt.driver [None req-ceaba222-027f-4efd-adcd-2723ed0e6afd a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Attempting to detach device vdb from instance 5076fb4d-3680-4a43-b137-762db8ee9de6 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  1 12:52:57 np0005464891 nova_compute[259907]: 2025-10-01 16:52:57.218 2 DEBUG nova.virt.libvirt.guest [None req-ceaba222-027f-4efd-adcd-2723ed0e6afd a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:52:57 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:52:57 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-f800e7c1-fdee-454a-a810-e7f3d43f1df4">
Oct  1 12:52:57 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:52:57 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:52:57 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:52:57 np0005464891 nova_compute[259907]:  <serial>f800e7c1-fdee-454a-a810-e7f3d43f1df4</serial>
Oct  1 12:52:57 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:52:57 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:52:57 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:52:57 np0005464891 nova_compute[259907]: 2025-10-01 16:52:57.227 2 INFO nova.virt.libvirt.driver [None req-ceaba222-027f-4efd-adcd-2723ed0e6afd a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Successfully detached device vdb from instance 5076fb4d-3680-4a43-b137-762db8ee9de6 from the persistent domain config.#033[00m
Oct  1 12:52:57 np0005464891 nova_compute[259907]: 2025-10-01 16:52:57.228 2 DEBUG nova.virt.libvirt.driver [None req-ceaba222-027f-4efd-adcd-2723ed0e6afd a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 5076fb4d-3680-4a43-b137-762db8ee9de6 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  1 12:52:57 np0005464891 nova_compute[259907]: 2025-10-01 16:52:57.228 2 DEBUG nova.virt.libvirt.guest [None req-ceaba222-027f-4efd-adcd-2723ed0e6afd a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:52:57 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:52:57 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-f800e7c1-fdee-454a-a810-e7f3d43f1df4">
Oct  1 12:52:57 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:52:57 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:52:57 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:52:57 np0005464891 nova_compute[259907]:  <serial>f800e7c1-fdee-454a-a810-e7f3d43f1df4</serial>
Oct  1 12:52:57 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:52:57 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:52:57 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:52:57 np0005464891 nova_compute[259907]: 2025-10-01 16:52:57.358 2 DEBUG nova.virt.libvirt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Received event <DeviceRemovedEvent: 1759337577.3583798, 5076fb4d-3680-4a43-b137-762db8ee9de6 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  1 12:52:57 np0005464891 nova_compute[259907]: 2025-10-01 16:52:57.361 2 DEBUG nova.virt.libvirt.driver [None req-ceaba222-027f-4efd-adcd-2723ed0e6afd a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 5076fb4d-3680-4a43-b137-762db8ee9de6 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  1 12:52:57 np0005464891 nova_compute[259907]: 2025-10-01 16:52:57.364 2 INFO nova.virt.libvirt.driver [None req-ceaba222-027f-4efd-adcd-2723ed0e6afd a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Successfully detached device vdb from instance 5076fb4d-3680-4a43-b137-762db8ee9de6 from the live domain config.#033[00m
Oct  1 12:52:57 np0005464891 nova_compute[259907]: 2025-10-01 16:52:57.627 2 DEBUG nova.objects.instance [None req-ceaba222-027f-4efd-adcd-2723ed0e6afd a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lazy-loading 'flavor' on Instance uuid 5076fb4d-3680-4a43-b137-762db8ee9de6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:52:57 np0005464891 nova_compute[259907]: 2025-10-01 16:52:57.719 2 DEBUG oslo_concurrency.lockutils [None req-ceaba222-027f-4efd-adcd-2723ed0e6afd a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lock "5076fb4d-3680-4a43-b137-762db8ee9de6" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:52:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e343 do_prune osdmap full prune enabled
Oct  1 12:52:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e344 e344: 3 total, 3 up, 3 in
Oct  1 12:52:57 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e344: 3 total, 3 up, 3 in
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.390 2 DEBUG oslo_concurrency.lockutils [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Acquiring lock "5076fb4d-3680-4a43-b137-762db8ee9de6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.391 2 DEBUG oslo_concurrency.lockutils [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lock "5076fb4d-3680-4a43-b137-762db8ee9de6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.391 2 DEBUG oslo_concurrency.lockutils [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Acquiring lock "5076fb4d-3680-4a43-b137-762db8ee9de6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.391 2 DEBUG oslo_concurrency.lockutils [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lock "5076fb4d-3680-4a43-b137-762db8ee9de6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.392 2 DEBUG oslo_concurrency.lockutils [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lock "5076fb4d-3680-4a43-b137-762db8ee9de6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.393 2 INFO nova.compute.manager [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Terminating instance#033[00m
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.394 2 DEBUG nova.compute.manager [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 12:52:58 np0005464891 kernel: tapf842220e-e0 (unregistering): left promiscuous mode
Oct  1 12:52:58 np0005464891 NetworkManager[44940]: <info>  [1759337578.4517] device (tapf842220e-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 12:52:58 np0005464891 ovn_controller[152409]: 2025-10-01T16:52:58Z|00131|binding|INFO|Releasing lport f842220e-e045-41b3-a476-251d10fab2e1 from this chassis (sb_readonly=0)
Oct  1 12:52:58 np0005464891 ovn_controller[152409]: 2025-10-01T16:52:58Z|00132|binding|INFO|Setting lport f842220e-e045-41b3-a476-251d10fab2e1 down in Southbound
Oct  1 12:52:58 np0005464891 ovn_controller[152409]: 2025-10-01T16:52:58Z|00133|binding|INFO|Removing iface tapf842220e-e0 ovn-installed in OVS
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:58 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:58.517 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:23:21 10.100.0.12'], port_security=['fa:16:3e:26:23:21 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5076fb4d-3680-4a43-b137-762db8ee9de6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-077f8413-89f8-4043-83f9-97c1e959d04f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '471bace20aee4e2a82d226b5f69cdfd8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a76c9ec2-ad66-423d-8fe0-3d505aabf592', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1ca07cc-10ad-454f-ae6c-4a35cf8c54ca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=f842220e-e045-41b3-a476-251d10fab2e1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:52:58 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:58.522 162546 INFO neutron.agent.ovn.metadata.agent [-] Port f842220e-e045-41b3-a476-251d10fab2e1 in datapath 077f8413-89f8-4043-83f9-97c1e959d04f unbound from our chassis#033[00m
Oct  1 12:52:58 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:58.524 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 077f8413-89f8-4043-83f9-97c1e959d04f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 12:52:58 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:58.526 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[50b05017-4313-4a5c-988d-b3f79aca5ba1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:58 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:58.527 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-077f8413-89f8-4043-83f9-97c1e959d04f namespace which is not needed anymore#033[00m
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:58 np0005464891 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Oct  1 12:52:58 np0005464891 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 14.945s CPU time.
Oct  1 12:52:58 np0005464891 systemd-machined[214891]: Machine qemu-13-instance-0000000d terminated.
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.635 2 INFO nova.virt.libvirt.driver [-] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Instance destroyed successfully.#033[00m
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.636 2 DEBUG nova.objects.instance [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lazy-loading 'resources' on Instance uuid 5076fb4d-3680-4a43-b137-762db8ee9de6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.649 2 DEBUG nova.virt.libvirt.vif [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T16:52:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-935559608',display_name='tempest-VolumesExtendAttachedTest-instance-935559608',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-935559608',id=13,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMBKSGuNZ8Zgd5elFxHZA0Lxnv6DxiDS0oT75+y2FS5fbNAyTR80lPH+T6Uxmfz/PNJJ1He3Xp3l5520kqNVdYDjlXhExX0PrjfyD6Z59A8kiEgxGP1TRUqgjtHAmnVanw==',key_name='tempest-keypair-1602620396',keypairs=<?>,launch_index=0,launched_at=2025-10-01T16:52:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='471bace20aee4e2a82d226b5f69cdfd8',ramdisk_id='',reservation_id='r-jqmeyndu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesExtendAttachedTest-92058092',owner_user_name='tempest-VolumesExtendAttachedTest-92058092-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T16:52:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a7aa882d4d1e40a9aeef4f8bbd50372a',uuid=5076fb4d-3680-4a43-b137-762db8ee9de6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f842220e-e045-41b3-a476-251d10fab2e1", "address": "fa:16:3e:26:23:21", "network": {"id": "077f8413-89f8-4043-83f9-97c1e959d04f", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-295665754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "471bace20aee4e2a82d226b5f69cdfd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf842220e-e0", "ovs_interfaceid": "f842220e-e045-41b3-a476-251d10fab2e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.650 2 DEBUG nova.network.os_vif_util [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Converting VIF {"id": "f842220e-e045-41b3-a476-251d10fab2e1", "address": "fa:16:3e:26:23:21", "network": {"id": "077f8413-89f8-4043-83f9-97c1e959d04f", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-295665754-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "471bace20aee4e2a82d226b5f69cdfd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf842220e-e0", "ovs_interfaceid": "f842220e-e045-41b3-a476-251d10fab2e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.651 2 DEBUG nova.network.os_vif_util [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:23:21,bridge_name='br-int',has_traffic_filtering=True,id=f842220e-e045-41b3-a476-251d10fab2e1,network=Network(077f8413-89f8-4043-83f9-97c1e959d04f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf842220e-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.651 2 DEBUG os_vif [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:23:21,bridge_name='br-int',has_traffic_filtering=True,id=f842220e-e045-41b3-a476-251d10fab2e1,network=Network(077f8413-89f8-4043-83f9-97c1e959d04f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf842220e-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.653 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf842220e-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.664 2 INFO os_vif [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:23:21,bridge_name='br-int',has_traffic_filtering=True,id=f842220e-e045-41b3-a476-251d10fab2e1,network=Network(077f8413-89f8-4043-83f9-97c1e959d04f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf842220e-e0')#033[00m
Oct  1 12:52:58 np0005464891 neutron-haproxy-ovnmeta-077f8413-89f8-4043-83f9-97c1e959d04f[287940]: [NOTICE]   (287944) : haproxy version is 2.8.14-c23fe91
Oct  1 12:52:58 np0005464891 neutron-haproxy-ovnmeta-077f8413-89f8-4043-83f9-97c1e959d04f[287940]: [NOTICE]   (287944) : path to executable is /usr/sbin/haproxy
Oct  1 12:52:58 np0005464891 neutron-haproxy-ovnmeta-077f8413-89f8-4043-83f9-97c1e959d04f[287940]: [WARNING]  (287944) : Exiting Master process...
Oct  1 12:52:58 np0005464891 neutron-haproxy-ovnmeta-077f8413-89f8-4043-83f9-97c1e959d04f[287940]: [WARNING]  (287944) : Exiting Master process...
Oct  1 12:52:58 np0005464891 neutron-haproxy-ovnmeta-077f8413-89f8-4043-83f9-97c1e959d04f[287940]: [ALERT]    (287944) : Current worker (287946) exited with code 143 (Terminated)
Oct  1 12:52:58 np0005464891 neutron-haproxy-ovnmeta-077f8413-89f8-4043-83f9-97c1e959d04f[287940]: [WARNING]  (287944) : All workers exited. Exiting... (0)
Oct  1 12:52:58 np0005464891 systemd[1]: libpod-6e205f7ff5b145fc190282f7a62b093cb59141cce8a824c5887754e7f8eebab3.scope: Deactivated successfully.
Oct  1 12:52:58 np0005464891 podman[288165]: 2025-10-01 16:52:58.7008021 +0000 UTC m=+0.057229848 container died 6e205f7ff5b145fc190282f7a62b093cb59141cce8a824c5887754e7f8eebab3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-077f8413-89f8-4043-83f9-97c1e959d04f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:52:58 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6e205f7ff5b145fc190282f7a62b093cb59141cce8a824c5887754e7f8eebab3-userdata-shm.mount: Deactivated successfully.
Oct  1 12:52:58 np0005464891 systemd[1]: var-lib-containers-storage-overlay-26e6aef1525df43c1e9c6155538f01c46e682439cc5874930662430ae3abb423-merged.mount: Deactivated successfully.
Oct  1 12:52:58 np0005464891 podman[288165]: 2025-10-01 16:52:58.74259181 +0000 UTC m=+0.099019558 container cleanup 6e205f7ff5b145fc190282f7a62b093cb59141cce8a824c5887754e7f8eebab3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-077f8413-89f8-4043-83f9-97c1e959d04f, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:52:58 np0005464891 systemd[1]: libpod-conmon-6e205f7ff5b145fc190282f7a62b093cb59141cce8a824c5887754e7f8eebab3.scope: Deactivated successfully.
Oct  1 12:52:58 np0005464891 podman[288216]: 2025-10-01 16:52:58.809821606 +0000 UTC m=+0.042851119 container remove 6e205f7ff5b145fc190282f7a62b093cb59141cce8a824c5887754e7f8eebab3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-077f8413-89f8-4043-83f9-97c1e959d04f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  1 12:52:58 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:58.816 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[7468aa47-6bb9-4549-93cb-50d87a58dfdc]: (4, ('Wed Oct  1 04:52:58 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-077f8413-89f8-4043-83f9-97c1e959d04f (6e205f7ff5b145fc190282f7a62b093cb59141cce8a824c5887754e7f8eebab3)\n6e205f7ff5b145fc190282f7a62b093cb59141cce8a824c5887754e7f8eebab3\nWed Oct  1 04:52:58 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-077f8413-89f8-4043-83f9-97c1e959d04f (6e205f7ff5b145fc190282f7a62b093cb59141cce8a824c5887754e7f8eebab3)\n6e205f7ff5b145fc190282f7a62b093cb59141cce8a824c5887754e7f8eebab3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:58 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:58.818 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[7726d4a1-071b-4dcb-8dae-196b355a4c4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:58 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:58.819 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap077f8413-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:58 np0005464891 kernel: tap077f8413-80: left promiscuous mode
Oct  1 12:52:58 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:52:58 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:58.842 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f80c24-5526-463f-8359-9cb18426764b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:58 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:58.875 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[96797885-74c7-4730-90c7-87740a162ee9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:58 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:58.876 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[7b37f350-7d98-40d4-8b0a-5578bf252cad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:58 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:58.891 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[432e3aef-629a-45a0-955b-b1975bf9ff4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448201, 'reachable_time': 22935, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288233, 'error': None, 'target': 'ovnmeta-077f8413-89f8-4043-83f9-97c1e959d04f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:58 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:58.896 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-077f8413-89f8-4043-83f9-97c1e959d04f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 12:52:58 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:52:58.896 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[ae269027-0bd7-4b84-889d-b270b650736e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:52:58 np0005464891 systemd[1]: run-netns-ovnmeta\x2d077f8413\x2d89f8\x2d4043\x2d83f9\x2d97c1e959d04f.mount: Deactivated successfully.
Oct  1 12:52:59 np0005464891 nova_compute[259907]: 2025-10-01 16:52:58.999 2 INFO nova.virt.libvirt.driver [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Deleting instance files /var/lib/nova/instances/5076fb4d-3680-4a43-b137-762db8ee9de6_del#033[00m
Oct  1 12:52:59 np0005464891 nova_compute[259907]: 2025-10-01 16:52:59.000 2 INFO nova.virt.libvirt.driver [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Deletion of /var/lib/nova/instances/5076fb4d-3680-4a43-b137-762db8ee9de6_del complete#033[00m
Oct  1 12:52:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1498: 321 pgs: 321 active+clean; 167 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 6.2 KiB/s wr, 75 op/s
Oct  1 12:52:59 np0005464891 nova_compute[259907]: 2025-10-01 16:52:59.076 2 INFO nova.compute.manager [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 12:52:59 np0005464891 nova_compute[259907]: 2025-10-01 16:52:59.077 2 DEBUG oslo.service.loopingcall [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 12:52:59 np0005464891 nova_compute[259907]: 2025-10-01 16:52:59.077 2 DEBUG nova.compute.manager [-] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 12:52:59 np0005464891 nova_compute[259907]: 2025-10-01 16:52:59.077 2 DEBUG nova.network.neutron [-] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 12:52:59 np0005464891 nova_compute[259907]: 2025-10-01 16:52:59.285 2 DEBUG nova.compute.manager [req-39fa7669-0c4d-486b-9030-cab882e0ce6c req-3dcf8245-446a-4010-9a9b-c2b569c4bd47 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Received event network-vif-unplugged-f842220e-e045-41b3-a476-251d10fab2e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:52:59 np0005464891 nova_compute[259907]: 2025-10-01 16:52:59.286 2 DEBUG oslo_concurrency.lockutils [req-39fa7669-0c4d-486b-9030-cab882e0ce6c req-3dcf8245-446a-4010-9a9b-c2b569c4bd47 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "5076fb4d-3680-4a43-b137-762db8ee9de6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:52:59 np0005464891 nova_compute[259907]: 2025-10-01 16:52:59.287 2 DEBUG oslo_concurrency.lockutils [req-39fa7669-0c4d-486b-9030-cab882e0ce6c req-3dcf8245-446a-4010-9a9b-c2b569c4bd47 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "5076fb4d-3680-4a43-b137-762db8ee9de6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:52:59 np0005464891 nova_compute[259907]: 2025-10-01 16:52:59.287 2 DEBUG oslo_concurrency.lockutils [req-39fa7669-0c4d-486b-9030-cab882e0ce6c req-3dcf8245-446a-4010-9a9b-c2b569c4bd47 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "5076fb4d-3680-4a43-b137-762db8ee9de6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:52:59 np0005464891 nova_compute[259907]: 2025-10-01 16:52:59.288 2 DEBUG nova.compute.manager [req-39fa7669-0c4d-486b-9030-cab882e0ce6c req-3dcf8245-446a-4010-9a9b-c2b569c4bd47 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] No waiting events found dispatching network-vif-unplugged-f842220e-e045-41b3-a476-251d10fab2e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:52:59 np0005464891 nova_compute[259907]: 2025-10-01 16:52:59.288 2 DEBUG nova.compute.manager [req-39fa7669-0c4d-486b-9030-cab882e0ce6c req-3dcf8245-446a-4010-9a9b-c2b569c4bd47 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Received event network-vif-unplugged-f842220e-e045-41b3-a476-251d10fab2e1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 12:52:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:52:59 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3834018917' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:52:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:52:59 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3834018917' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:52:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e344 do_prune osdmap full prune enabled
Oct  1 12:52:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e345 e345: 3 total, 3 up, 3 in
Oct  1 12:52:59 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e345: 3 total, 3 up, 3 in
Oct  1 12:53:00 np0005464891 nova_compute[259907]: 2025-10-01 16:53:00.182 2 DEBUG nova.network.neutron [-] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:53:00 np0005464891 nova_compute[259907]: 2025-10-01 16:53:00.204 2 INFO nova.compute.manager [-] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Took 1.13 seconds to deallocate network for instance.#033[00m
Oct  1 12:53:00 np0005464891 nova_compute[259907]: 2025-10-01 16:53:00.279 2 DEBUG oslo_concurrency.lockutils [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:53:00 np0005464891 nova_compute[259907]: 2025-10-01 16:53:00.280 2 DEBUG oslo_concurrency.lockutils [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:53:00 np0005464891 nova_compute[259907]: 2025-10-01 16:53:00.344 2 DEBUG oslo_concurrency.processutils [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:53:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:53:00 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1197786631' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:53:00 np0005464891 nova_compute[259907]: 2025-10-01 16:53:00.785 2 DEBUG oslo_concurrency.processutils [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:53:00 np0005464891 nova_compute[259907]: 2025-10-01 16:53:00.791 2 DEBUG nova.compute.provider_tree [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:53:00 np0005464891 nova_compute[259907]: 2025-10-01 16:53:00.815 2 DEBUG nova.scheduler.client.report [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:53:00 np0005464891 nova_compute[259907]: 2025-10-01 16:53:00.835 2 DEBUG oslo_concurrency.lockutils [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:53:00 np0005464891 nova_compute[259907]: 2025-10-01 16:53:00.875 2 INFO nova.scheduler.client.report [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Deleted allocations for instance 5076fb4d-3680-4a43-b137-762db8ee9de6#033[00m
Oct  1 12:53:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e345 do_prune osdmap full prune enabled
Oct  1 12:53:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e346 e346: 3 total, 3 up, 3 in
Oct  1 12:53:00 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e346: 3 total, 3 up, 3 in
Oct  1 12:53:00 np0005464891 nova_compute[259907]: 2025-10-01 16:53:00.985 2 DEBUG oslo_concurrency.lockutils [None req-f007706c-ce9b-444a-893d-fe73d5ef5c5b a7aa882d4d1e40a9aeef4f8bbd50372a 471bace20aee4e2a82d226b5f69cdfd8 - - default default] Lock "5076fb4d-3680-4a43-b137-762db8ee9de6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:53:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1501: 321 pgs: 321 active+clean; 102 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 171 KiB/s rd, 17 KiB/s wr, 240 op/s
Oct  1 12:53:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:53:01 np0005464891 nova_compute[259907]: 2025-10-01 16:53:01.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:01 np0005464891 nova_compute[259907]: 2025-10-01 16:53:01.357 2 DEBUG nova.compute.manager [req-6816bc0f-2e77-461a-b5ef-42c11b7e5502 req-3ac17ed1-a446-4c43-85b0-fa88fe6680a9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Received event network-vif-plugged-f842220e-e045-41b3-a476-251d10fab2e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:53:01 np0005464891 nova_compute[259907]: 2025-10-01 16:53:01.358 2 DEBUG oslo_concurrency.lockutils [req-6816bc0f-2e77-461a-b5ef-42c11b7e5502 req-3ac17ed1-a446-4c43-85b0-fa88fe6680a9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "5076fb4d-3680-4a43-b137-762db8ee9de6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:53:01 np0005464891 nova_compute[259907]: 2025-10-01 16:53:01.358 2 DEBUG oslo_concurrency.lockutils [req-6816bc0f-2e77-461a-b5ef-42c11b7e5502 req-3ac17ed1-a446-4c43-85b0-fa88fe6680a9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "5076fb4d-3680-4a43-b137-762db8ee9de6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:53:01 np0005464891 nova_compute[259907]: 2025-10-01 16:53:01.359 2 DEBUG oslo_concurrency.lockutils [req-6816bc0f-2e77-461a-b5ef-42c11b7e5502 req-3ac17ed1-a446-4c43-85b0-fa88fe6680a9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "5076fb4d-3680-4a43-b137-762db8ee9de6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:53:01 np0005464891 nova_compute[259907]: 2025-10-01 16:53:01.359 2 DEBUG nova.compute.manager [req-6816bc0f-2e77-461a-b5ef-42c11b7e5502 req-3ac17ed1-a446-4c43-85b0-fa88fe6680a9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] No waiting events found dispatching network-vif-plugged-f842220e-e045-41b3-a476-251d10fab2e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:53:01 np0005464891 nova_compute[259907]: 2025-10-01 16:53:01.359 2 WARNING nova.compute.manager [req-6816bc0f-2e77-461a-b5ef-42c11b7e5502 req-3ac17ed1-a446-4c43-85b0-fa88fe6680a9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Received unexpected event network-vif-plugged-f842220e-e045-41b3-a476-251d10fab2e1 for instance with vm_state deleted and task_state None.#033[00m
Oct  1 12:53:01 np0005464891 nova_compute[259907]: 2025-10-01 16:53:01.359 2 DEBUG nova.compute.manager [req-6816bc0f-2e77-461a-b5ef-42c11b7e5502 req-3ac17ed1-a446-4c43-85b0-fa88fe6680a9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Received event network-vif-deleted-f842220e-e045-41b3-a476-251d10fab2e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:53:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e346 do_prune osdmap full prune enabled
Oct  1 12:53:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e347 e347: 3 total, 3 up, 3 in
Oct  1 12:53:02 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e347: 3 total, 3 up, 3 in
Oct  1 12:53:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1503: 321 pgs: 321 active+clean; 88 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 213 KiB/s rd, 16 KiB/s wr, 291 op/s
Oct  1 12:53:03 np0005464891 nova_compute[259907]: 2025-10-01 16:53:03.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:53:04 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2831485330' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:53:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:53:04 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2831485330' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:53:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:53:04 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2225481346' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:53:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:53:04 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2225481346' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:53:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:53:04 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:53:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:53:04 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:53:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:53:05 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:53:05 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 999e3aa2-aee3-46f3-b75f-9204c0fdbed4 does not exist
Oct  1 12:53:05 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 22dcabbb-42a8-4c5a-b699-5ea31133c1ce does not exist
Oct  1 12:53:05 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 2f9c3804-a278-4896-848b-e83912f65145 does not exist
Oct  1 12:53:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:53:05 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:53:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:53:05 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:53:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:53:05 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:53:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1504: 321 pgs: 321 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 284 KiB/s rd, 19 KiB/s wr, 385 op/s
Oct  1 12:53:05 np0005464891 podman[288411]: 2025-10-01 16:53:05.179544053 +0000 UTC m=+0.084594147 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct  1 12:53:05 np0005464891 podman[288554]: 2025-10-01 16:53:05.569723956 +0000 UTC m=+0.024889074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:53:05 np0005464891 podman[288554]: 2025-10-01 16:53:05.682719128 +0000 UTC m=+0.137884156 container create 772554b8a837d877cca6f819fdcdf3da0fd9b62f40d1cc9e64aea721e8afceb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 12:53:05 np0005464891 systemd[1]: Started libpod-conmon-772554b8a837d877cca6f819fdcdf3da0fd9b62f40d1cc9e64aea721e8afceb2.scope.
Oct  1 12:53:05 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:53:05 np0005464891 podman[288554]: 2025-10-01 16:53:05.790000737 +0000 UTC m=+0.245165785 container init 772554b8a837d877cca6f819fdcdf3da0fd9b62f40d1cc9e64aea721e8afceb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_saha, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:53:05 np0005464891 podman[288554]: 2025-10-01 16:53:05.80192954 +0000 UTC m=+0.257094578 container start 772554b8a837d877cca6f819fdcdf3da0fd9b62f40d1cc9e64aea721e8afceb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_saha, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct  1 12:53:05 np0005464891 angry_saha[288570]: 167 167
Oct  1 12:53:05 np0005464891 podman[288554]: 2025-10-01 16:53:05.809508114 +0000 UTC m=+0.264673152 container attach 772554b8a837d877cca6f819fdcdf3da0fd9b62f40d1cc9e64aea721e8afceb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 12:53:05 np0005464891 systemd[1]: libpod-772554b8a837d877cca6f819fdcdf3da0fd9b62f40d1cc9e64aea721e8afceb2.scope: Deactivated successfully.
Oct  1 12:53:05 np0005464891 podman[288554]: 2025-10-01 16:53:05.810415929 +0000 UTC m=+0.265580997 container died 772554b8a837d877cca6f819fdcdf3da0fd9b62f40d1cc9e64aea721e8afceb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 12:53:05 np0005464891 systemd[1]: var-lib-containers-storage-overlay-3a4c121d5ae6cc476a7990fec506a31d678503efcb8b8d8c59087a128ae89701-merged.mount: Deactivated successfully.
Oct  1 12:53:05 np0005464891 podman[288554]: 2025-10-01 16:53:05.866089154 +0000 UTC m=+0.321254192 container remove 772554b8a837d877cca6f819fdcdf3da0fd9b62f40d1cc9e64aea721e8afceb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_saha, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 12:53:05 np0005464891 systemd[1]: libpod-conmon-772554b8a837d877cca6f819fdcdf3da0fd9b62f40d1cc9e64aea721e8afceb2.scope: Deactivated successfully.
Oct  1 12:53:06 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:53:06 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:53:06 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:53:06 np0005464891 podman[288592]: 2025-10-01 16:53:06.052749447 +0000 UTC m=+0.047032812 container create 78a80376ed701d957645ce1d3dcb7c0cac6b5ef8357721c1820854c5497616fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 12:53:06 np0005464891 systemd[1]: Started libpod-conmon-78a80376ed701d957645ce1d3dcb7c0cac6b5ef8357721c1820854c5497616fa.scope.
Oct  1 12:53:06 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:53:06 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452b161f11d9561a9d153d3ae0c7009474de2d13f74e8b72543297d81cb0a857/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:53:06 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452b161f11d9561a9d153d3ae0c7009474de2d13f74e8b72543297d81cb0a857/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:53:06 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452b161f11d9561a9d153d3ae0c7009474de2d13f74e8b72543297d81cb0a857/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:53:06 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452b161f11d9561a9d153d3ae0c7009474de2d13f74e8b72543297d81cb0a857/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:53:06 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452b161f11d9561a9d153d3ae0c7009474de2d13f74e8b72543297d81cb0a857/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:53:06 np0005464891 podman[288592]: 2025-10-01 16:53:06.032233373 +0000 UTC m=+0.026516758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:53:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:53:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e347 do_prune osdmap full prune enabled
Oct  1 12:53:06 np0005464891 podman[288592]: 2025-10-01 16:53:06.142198004 +0000 UTC m=+0.136481369 container init 78a80376ed701d957645ce1d3dcb7c0cac6b5ef8357721c1820854c5497616fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:53:06 np0005464891 podman[288592]: 2025-10-01 16:53:06.15019394 +0000 UTC m=+0.144477305 container start 78a80376ed701d957645ce1d3dcb7c0cac6b5ef8357721c1820854c5497616fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:53:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e348 e348: 3 total, 3 up, 3 in
Oct  1 12:53:06 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e348: 3 total, 3 up, 3 in
Oct  1 12:53:06 np0005464891 podman[288592]: 2025-10-01 16:53:06.156563052 +0000 UTC m=+0.150846447 container attach 78a80376ed701d957645ce1d3dcb7c0cac6b5ef8357721c1820854c5497616fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wescoff, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 12:53:06 np0005464891 nova_compute[259907]: 2025-10-01 16:53:06.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1506: 321 pgs: 321 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 158 KiB/s rd, 7.2 KiB/s wr, 209 op/s
Oct  1 12:53:07 np0005464891 boring_wescoff[288609]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:53:07 np0005464891 boring_wescoff[288609]: --> relative data size: 1.0
Oct  1 12:53:07 np0005464891 boring_wescoff[288609]: --> All data devices are unavailable
Oct  1 12:53:07 np0005464891 nova_compute[259907]: 2025-10-01 16:53:07.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:07 np0005464891 systemd[1]: libpod-78a80376ed701d957645ce1d3dcb7c0cac6b5ef8357721c1820854c5497616fa.scope: Deactivated successfully.
Oct  1 12:53:07 np0005464891 podman[288592]: 2025-10-01 16:53:07.299785603 +0000 UTC m=+1.294068968 container died 78a80376ed701d957645ce1d3dcb7c0cac6b5ef8357721c1820854c5497616fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:53:07 np0005464891 systemd[1]: libpod-78a80376ed701d957645ce1d3dcb7c0cac6b5ef8357721c1820854c5497616fa.scope: Consumed 1.087s CPU time.
Oct  1 12:53:07 np0005464891 systemd[1]: var-lib-containers-storage-overlay-452b161f11d9561a9d153d3ae0c7009474de2d13f74e8b72543297d81cb0a857-merged.mount: Deactivated successfully.
Oct  1 12:53:07 np0005464891 nova_compute[259907]: 2025-10-01 16:53:07.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:07 np0005464891 podman[288592]: 2025-10-01 16:53:07.494265728 +0000 UTC m=+1.488549083 container remove 78a80376ed701d957645ce1d3dcb7c0cac6b5ef8357721c1820854c5497616fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 12:53:07 np0005464891 systemd[1]: libpod-conmon-78a80376ed701d957645ce1d3dcb7c0cac6b5ef8357721c1820854c5497616fa.scope: Deactivated successfully.
Oct  1 12:53:08 np0005464891 podman[288788]: 2025-10-01 16:53:08.197815959 +0000 UTC m=+0.046558630 container create 6b81c74e7b411a3597ab2acad62743f4dcccd8cb58c4a23a13afba32845aab14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lovelace, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 12:53:08 np0005464891 systemd[1]: Started libpod-conmon-6b81c74e7b411a3597ab2acad62743f4dcccd8cb58c4a23a13afba32845aab14.scope.
Oct  1 12:53:08 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:53:08 np0005464891 podman[288788]: 2025-10-01 16:53:08.264965223 +0000 UTC m=+0.113707934 container init 6b81c74e7b411a3597ab2acad62743f4dcccd8cb58c4a23a13afba32845aab14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lovelace, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:53:08 np0005464891 podman[288788]: 2025-10-01 16:53:08.175902486 +0000 UTC m=+0.024645197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:53:08 np0005464891 podman[288788]: 2025-10-01 16:53:08.272211868 +0000 UTC m=+0.120954549 container start 6b81c74e7b411a3597ab2acad62743f4dcccd8cb58c4a23a13afba32845aab14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lovelace, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:53:08 np0005464891 podman[288788]: 2025-10-01 16:53:08.27558042 +0000 UTC m=+0.124323111 container attach 6b81c74e7b411a3597ab2acad62743f4dcccd8cb58c4a23a13afba32845aab14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 12:53:08 np0005464891 practical_lovelace[288804]: 167 167
Oct  1 12:53:08 np0005464891 systemd[1]: libpod-6b81c74e7b411a3597ab2acad62743f4dcccd8cb58c4a23a13afba32845aab14.scope: Deactivated successfully.
Oct  1 12:53:08 np0005464891 podman[288788]: 2025-10-01 16:53:08.280318638 +0000 UTC m=+0.129061309 container died 6b81c74e7b411a3597ab2acad62743f4dcccd8cb58c4a23a13afba32845aab14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:53:08 np0005464891 systemd[1]: var-lib-containers-storage-overlay-43283b14b10bc9a8c8be0fa5443d6ab6a1690a99c5765ef2f064dfcfeca530a8-merged.mount: Deactivated successfully.
Oct  1 12:53:08 np0005464891 podman[288788]: 2025-10-01 16:53:08.325781357 +0000 UTC m=+0.174524028 container remove 6b81c74e7b411a3597ab2acad62743f4dcccd8cb58c4a23a13afba32845aab14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:53:08 np0005464891 systemd[1]: libpod-conmon-6b81c74e7b411a3597ab2acad62743f4dcccd8cb58c4a23a13afba32845aab14.scope: Deactivated successfully.
Oct  1 12:53:08 np0005464891 podman[288827]: 2025-10-01 16:53:08.497688362 +0000 UTC m=+0.060101785 container create a4c492c820bc50d15538a251e5c778a5b94aa117a604c2b74d1fa8868386cd44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_euclid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:53:08 np0005464891 systemd[1]: Started libpod-conmon-a4c492c820bc50d15538a251e5c778a5b94aa117a604c2b74d1fa8868386cd44.scope.
Oct  1 12:53:08 np0005464891 podman[288827]: 2025-10-01 16:53:08.468933345 +0000 UTC m=+0.031346808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:53:08 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:53:08 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b95a43d12346b57fddf174781762bdbd349b2f6b3d50ced0e8a401aaaf9b9705/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:53:08 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b95a43d12346b57fddf174781762bdbd349b2f6b3d50ced0e8a401aaaf9b9705/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:53:08 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b95a43d12346b57fddf174781762bdbd349b2f6b3d50ced0e8a401aaaf9b9705/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:53:08 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b95a43d12346b57fddf174781762bdbd349b2f6b3d50ced0e8a401aaaf9b9705/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:53:08 np0005464891 podman[288827]: 2025-10-01 16:53:08.62198883 +0000 UTC m=+0.184402293 container init a4c492c820bc50d15538a251e5c778a5b94aa117a604c2b74d1fa8868386cd44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_euclid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  1 12:53:08 np0005464891 podman[288827]: 2025-10-01 16:53:08.634676033 +0000 UTC m=+0.197089406 container start a4c492c820bc50d15538a251e5c778a5b94aa117a604c2b74d1fa8868386cd44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:53:08 np0005464891 podman[288827]: 2025-10-01 16:53:08.641194859 +0000 UTC m=+0.203608322 container attach a4c492c820bc50d15538a251e5c778a5b94aa117a604c2b74d1fa8868386cd44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 12:53:08 np0005464891 nova_compute[259907]: 2025-10-01 16:53:08.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1507: 321 pgs: 321 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 124 KiB/s rd, 5.9 KiB/s wr, 164 op/s
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]: {
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:    "0": [
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:        {
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "devices": [
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "/dev/loop3"
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            ],
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "lv_name": "ceph_lv0",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "lv_size": "21470642176",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "name": "ceph_lv0",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "tags": {
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.cluster_name": "ceph",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.crush_device_class": "",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.encrypted": "0",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.osd_id": "0",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.type": "block",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.vdo": "0"
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            },
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "type": "block",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "vg_name": "ceph_vg0"
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:        }
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:    ],
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:    "1": [
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:        {
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "devices": [
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "/dev/loop4"
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            ],
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "lv_name": "ceph_lv1",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "lv_size": "21470642176",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "name": "ceph_lv1",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "tags": {
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.cluster_name": "ceph",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.crush_device_class": "",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.encrypted": "0",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.osd_id": "1",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.type": "block",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.vdo": "0"
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            },
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "type": "block",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "vg_name": "ceph_vg1"
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:        }
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:    ],
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:    "2": [
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:        {
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "devices": [
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "/dev/loop5"
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            ],
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "lv_name": "ceph_lv2",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "lv_size": "21470642176",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "name": "ceph_lv2",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "tags": {
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.cluster_name": "ceph",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.crush_device_class": "",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.encrypted": "0",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.osd_id": "2",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.type": "block",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:                "ceph.vdo": "0"
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            },
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "type": "block",
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:            "vg_name": "ceph_vg2"
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:        }
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]:    ]
Oct  1 12:53:09 np0005464891 elegant_euclid[288843]: }
Oct  1 12:53:09 np0005464891 systemd[1]: libpod-a4c492c820bc50d15538a251e5c778a5b94aa117a604c2b74d1fa8868386cd44.scope: Deactivated successfully.
Oct  1 12:53:09 np0005464891 podman[288827]: 2025-10-01 16:53:09.508938366 +0000 UTC m=+1.071351739 container died a4c492c820bc50d15538a251e5c778a5b94aa117a604c2b74d1fa8868386cd44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:53:09 np0005464891 systemd[1]: var-lib-containers-storage-overlay-b95a43d12346b57fddf174781762bdbd349b2f6b3d50ced0e8a401aaaf9b9705-merged.mount: Deactivated successfully.
Oct  1 12:53:10 np0005464891 podman[288827]: 2025-10-01 16:53:10.158236171 +0000 UTC m=+1.720649544 container remove a4c492c820bc50d15538a251e5c778a5b94aa117a604c2b74d1fa8868386cd44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_euclid, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:53:10 np0005464891 systemd[1]: libpod-conmon-a4c492c820bc50d15538a251e5c778a5b94aa117a604c2b74d1fa8868386cd44.scope: Deactivated successfully.
Oct  1 12:53:10 np0005464891 podman[288852]: 2025-10-01 16:53:10.238713985 +0000 UTC m=+0.700001355 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:53:10 np0005464891 podman[289024]: 2025-10-01 16:53:10.862603144 +0000 UTC m=+0.054974386 container create 708c1ae3b2a7c244e52c99b0fb959cd112be19e548f0dec1b71d0d344d133afc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 12:53:10 np0005464891 systemd[1]: Started libpod-conmon-708c1ae3b2a7c244e52c99b0fb959cd112be19e548f0dec1b71d0d344d133afc.scope.
Oct  1 12:53:10 np0005464891 podman[289024]: 2025-10-01 16:53:10.838819411 +0000 UTC m=+0.031190723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:53:10 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:53:10 np0005464891 podman[289024]: 2025-10-01 16:53:10.977079317 +0000 UTC m=+0.169450589 container init 708c1ae3b2a7c244e52c99b0fb959cd112be19e548f0dec1b71d0d344d133afc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_driscoll, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  1 12:53:10 np0005464891 podman[289024]: 2025-10-01 16:53:10.9856916 +0000 UTC m=+0.178062862 container start 708c1ae3b2a7c244e52c99b0fb959cd112be19e548f0dec1b71d0d344d133afc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_driscoll, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:53:10 np0005464891 fervent_driscoll[289041]: 167 167
Oct  1 12:53:10 np0005464891 systemd[1]: libpod-708c1ae3b2a7c244e52c99b0fb959cd112be19e548f0dec1b71d0d344d133afc.scope: Deactivated successfully.
Oct  1 12:53:10 np0005464891 conmon[289041]: conmon 708c1ae3b2a7c244e52c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-708c1ae3b2a7c244e52c99b0fb959cd112be19e548f0dec1b71d0d344d133afc.scope/container/memory.events
Oct  1 12:53:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:53:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/48204377' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:53:11 np0005464891 podman[289024]: 2025-10-01 16:53:11.007956811 +0000 UTC m=+0.200328073 container attach 708c1ae3b2a7c244e52c99b0fb959cd112be19e548f0dec1b71d0d344d133afc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_driscoll, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:53:11 np0005464891 podman[289024]: 2025-10-01 16:53:11.008623769 +0000 UTC m=+0.200995011 container died 708c1ae3b2a7c244e52c99b0fb959cd112be19e548f0dec1b71d0d344d133afc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_driscoll, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 12:53:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:53:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/48204377' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:53:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1508: 321 pgs: 321 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 4.9 KiB/s wr, 112 op/s
Oct  1 12:53:11 np0005464891 systemd[1]: var-lib-containers-storage-overlay-526274c28e64f06fccc5c26cad6d1a828c4c9e901ac7898a7c3ca4f471fc3ffe-merged.mount: Deactivated successfully.
Oct  1 12:53:11 np0005464891 podman[289024]: 2025-10-01 16:53:11.054763916 +0000 UTC m=+0.247135178 container remove 708c1ae3b2a7c244e52c99b0fb959cd112be19e548f0dec1b71d0d344d133afc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 12:53:11 np0005464891 systemd[1]: libpod-conmon-708c1ae3b2a7c244e52c99b0fb959cd112be19e548f0dec1b71d0d344d133afc.scope: Deactivated successfully.
Oct  1 12:53:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:53:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e348 do_prune osdmap full prune enabled
Oct  1 12:53:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e349 e349: 3 total, 3 up, 3 in
Oct  1 12:53:11 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e349: 3 total, 3 up, 3 in
Oct  1 12:53:11 np0005464891 podman[289064]: 2025-10-01 16:53:11.236781525 +0000 UTC m=+0.062131170 container create 49400a960ff554a73c2cda42c4b1a5cdf5ddcc80cf33d03e54aa277f298f4152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_taussig, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:53:11 np0005464891 nova_compute[259907]: 2025-10-01 16:53:11.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:11 np0005464891 systemd[1]: Started libpod-conmon-49400a960ff554a73c2cda42c4b1a5cdf5ddcc80cf33d03e54aa277f298f4152.scope.
Oct  1 12:53:11 np0005464891 podman[289064]: 2025-10-01 16:53:11.21295905 +0000 UTC m=+0.038308765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:53:11 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:53:11 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47b5d036c3c25b08881428c0b38c17aad6b1a46827c41ddacf056e7fc3230903/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:53:11 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47b5d036c3c25b08881428c0b38c17aad6b1a46827c41ddacf056e7fc3230903/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:53:11 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47b5d036c3c25b08881428c0b38c17aad6b1a46827c41ddacf056e7fc3230903/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:53:11 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47b5d036c3c25b08881428c0b38c17aad6b1a46827c41ddacf056e7fc3230903/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:53:11 np0005464891 podman[289064]: 2025-10-01 16:53:11.337095915 +0000 UTC m=+0.162445610 container init 49400a960ff554a73c2cda42c4b1a5cdf5ddcc80cf33d03e54aa277f298f4152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:53:11 np0005464891 podman[289064]: 2025-10-01 16:53:11.353324693 +0000 UTC m=+0.178674348 container start 49400a960ff554a73c2cda42c4b1a5cdf5ddcc80cf33d03e54aa277f298f4152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct  1 12:53:11 np0005464891 podman[289064]: 2025-10-01 16:53:11.358070362 +0000 UTC m=+0.183419987 container attach 49400a960ff554a73c2cda42c4b1a5cdf5ddcc80cf33d03e54aa277f298f4152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_taussig, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:53:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:53:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1716522168' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:53:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:53:12
Oct  1 12:53:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:53:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:53:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'vms', '.mgr', 'images', 'backups', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta']
Oct  1 12:53:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:53:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:53:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:53:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:53:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:53:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:53:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:53:12 np0005464891 zen_taussig[289080]: {
Oct  1 12:53:12 np0005464891 zen_taussig[289080]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:53:12 np0005464891 zen_taussig[289080]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:53:12 np0005464891 zen_taussig[289080]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:53:12 np0005464891 zen_taussig[289080]:        "osd_id": 2,
Oct  1 12:53:12 np0005464891 zen_taussig[289080]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:53:12 np0005464891 zen_taussig[289080]:        "type": "bluestore"
Oct  1 12:53:12 np0005464891 zen_taussig[289080]:    },
Oct  1 12:53:12 np0005464891 zen_taussig[289080]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:53:12 np0005464891 zen_taussig[289080]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:53:12 np0005464891 zen_taussig[289080]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:53:12 np0005464891 zen_taussig[289080]:        "osd_id": 0,
Oct  1 12:53:12 np0005464891 zen_taussig[289080]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:53:12 np0005464891 zen_taussig[289080]:        "type": "bluestore"
Oct  1 12:53:12 np0005464891 zen_taussig[289080]:    },
Oct  1 12:53:12 np0005464891 zen_taussig[289080]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:53:12 np0005464891 zen_taussig[289080]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:53:12 np0005464891 zen_taussig[289080]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:53:12 np0005464891 zen_taussig[289080]:        "osd_id": 1,
Oct  1 12:53:12 np0005464891 zen_taussig[289080]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:53:12 np0005464891 zen_taussig[289080]:        "type": "bluestore"
Oct  1 12:53:12 np0005464891 zen_taussig[289080]:    }
Oct  1 12:53:12 np0005464891 zen_taussig[289080]: }
Oct  1 12:53:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:53:12 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2991563550' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:53:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:53:12 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2991563550' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:53:12 np0005464891 systemd[1]: libpod-49400a960ff554a73c2cda42c4b1a5cdf5ddcc80cf33d03e54aa277f298f4152.scope: Deactivated successfully.
Oct  1 12:53:12 np0005464891 podman[289064]: 2025-10-01 16:53:12.333383785 +0000 UTC m=+1.158733450 container died 49400a960ff554a73c2cda42c4b1a5cdf5ddcc80cf33d03e54aa277f298f4152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:53:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:53:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:53:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:53:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:53:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:53:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:53:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:53:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:53:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:53:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:53:12 np0005464891 systemd[1]: var-lib-containers-storage-overlay-47b5d036c3c25b08881428c0b38c17aad6b1a46827c41ddacf056e7fc3230903-merged.mount: Deactivated successfully.
Oct  1 12:53:12 np0005464891 podman[289064]: 2025-10-01 16:53:12.431335152 +0000 UTC m=+1.256684787 container remove 49400a960ff554a73c2cda42c4b1a5cdf5ddcc80cf33d03e54aa277f298f4152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_taussig, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:53:12 np0005464891 systemd[1]: libpod-conmon-49400a960ff554a73c2cda42c4b1a5cdf5ddcc80cf33d03e54aa277f298f4152.scope: Deactivated successfully.
Oct  1 12:53:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:53:12.455 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:53:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:53:12.457 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:53:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:53:12.457 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:53:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:53:12 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:53:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:53:12 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:53:12 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 84f050b7-f4db-4c66-87fb-2cfacfdf5933 does not exist
Oct  1 12:53:12 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 0d70c807-1652-4fdd-a66b-f290f563f551 does not exist
Oct  1 12:53:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e349 do_prune osdmap full prune enabled
Oct  1 12:53:12 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:53:12 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:53:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e350 e350: 3 total, 3 up, 3 in
Oct  1 12:53:12 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e350: 3 total, 3 up, 3 in
Oct  1 12:53:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1511: 321 pgs: 321 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 3.8 KiB/s wr, 56 op/s
Oct  1 12:53:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:53:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/554316385' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:53:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:53:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/554316385' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:53:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e350 do_prune osdmap full prune enabled
Oct  1 12:53:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e351 e351: 3 total, 3 up, 3 in
Oct  1 12:53:13 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e351: 3 total, 3 up, 3 in
Oct  1 12:53:13 np0005464891 nova_compute[259907]: 2025-10-01 16:53:13.632 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759337578.6312652, 5076fb4d-3680-4a43-b137-762db8ee9de6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:53:13 np0005464891 nova_compute[259907]: 2025-10-01 16:53:13.633 2 INFO nova.compute.manager [-] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] VM Stopped (Lifecycle Event)#033[00m
Oct  1 12:53:13 np0005464891 nova_compute[259907]: 2025-10-01 16:53:13.652 2 DEBUG nova.compute.manager [None req-ce6e64e1-fe40-4638-9771-27cbae0d54a7 - - - - - -] [instance: 5076fb4d-3680-4a43-b137-762db8ee9de6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:53:13 np0005464891 nova_compute[259907]: 2025-10-01 16:53:13.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:53:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1093908502' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:53:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:53:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1093908502' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:53:14 np0005464891 podman[289175]: 2025-10-01 16:53:14.955433464 +0000 UTC m=+0.066250931 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3)
Oct  1 12:53:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1513: 321 pgs: 321 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 7.3 KiB/s wr, 99 op/s
Oct  1 12:53:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e351 do_prune osdmap full prune enabled
Oct  1 12:53:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e352 e352: 3 total, 3 up, 3 in
Oct  1 12:53:15 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e352: 3 total, 3 up, 3 in
Oct  1 12:53:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:53:16 np0005464891 nova_compute[259907]: 2025-10-01 16:53:16.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:53:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3006659149' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:53:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:53:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3006659149' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:53:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e352 do_prune osdmap full prune enabled
Oct  1 12:53:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e353 e353: 3 total, 3 up, 3 in
Oct  1 12:53:16 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e353: 3 total, 3 up, 3 in
Oct  1 12:53:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:53:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3132824241' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:53:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:53:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3132824241' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:53:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1516: 321 pgs: 321 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 8.0 KiB/s wr, 116 op/s
Oct  1 12:53:17 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:53:17.953 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:53:17 np0005464891 nova_compute[259907]: 2025-10-01 16:53:17.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:17 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:53:17.956 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 12:53:17 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:53:17.957 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:53:18 np0005464891 nova_compute[259907]: 2025-10-01 16:53:18.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1517: 321 pgs: 321 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 4.3 KiB/s wr, 145 op/s
Oct  1 12:53:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1518: 321 pgs: 321 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 5.3 KiB/s wr, 160 op/s
Oct  1 12:53:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:53:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e353 do_prune osdmap full prune enabled
Oct  1 12:53:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e354 e354: 3 total, 3 up, 3 in
Oct  1 12:53:21 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e354: 3 total, 3 up, 3 in
Oct  1 12:53:21 np0005464891 nova_compute[259907]: 2025-10-01 16:53:21.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:53:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:53:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:53:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:53:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 12:53:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:53:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003470144925766751 of space, bias 1.0, pg target 0.10410434777300254 quantized to 32 (current 32)
Oct  1 12:53:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:53:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 12:53:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:53:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  1 12:53:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:53:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:53:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:53:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:53:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:53:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:53:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:53:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:53:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:53:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:53:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:53:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:53:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1520: 321 pgs: 321 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.1 KiB/s wr, 132 op/s
Oct  1 12:53:23 np0005464891 nova_compute[259907]: 2025-10-01 16:53:23.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:53:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3527892866' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:53:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:53:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3527892866' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:53:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1521: 321 pgs: 321 active+clean; 121 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.7 MiB/s wr, 171 op/s
Oct  1 12:53:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:53:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1025139177' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:53:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:53:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1025139177' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:53:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:53:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1558734256' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:53:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:53:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1558734256' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:53:25 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 12:53:25 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 6715 writes, 30K keys, 6715 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 6715 writes, 6715 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2011 writes, 9463 keys, 2011 commit groups, 1.0 writes per commit group, ingest: 12.05 MB, 0.02 MB/s#012Interval WAL: 2011 writes, 2011 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     33.5      1.01              0.12        16    0.063       0      0       0.0       0.0#012  L6      1/0    8.69 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3     58.8     48.1      2.33              0.47        15    0.156     71K   8424       0.0       0.0#012 Sum      1/0    8.69 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     41.1     43.7      3.34              0.58        31    0.108     71K   8424       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6     43.4     44.9      0.98              0.21         8    0.123     23K   2631       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     58.8     48.1      2.33              0.47        15    0.156     71K   8424       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     33.5      1.01              0.12        15    0.067       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.6      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.033, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.13 GB read, 0.06 MB/s read, 3.3 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 1.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55bddc5951f0#2 capacity: 304.00 MB usage: 15.43 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000118 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1005,14.84 MB,4.88238%) FilterBlock(32,208.36 KB,0.0669329%) IndexBlock(32,394.27 KB,0.126653%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  1 12:53:25 np0005464891 podman[289195]: 2025-10-01 16:53:25.953134149 +0000 UTC m=+0.059616872 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  1 12:53:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:53:26 np0005464891 nova_compute[259907]: 2025-10-01 16:53:26.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1522: 321 pgs: 321 active+clean; 121 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 145 op/s
Oct  1 12:53:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:53:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1297302432' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:53:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:53:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1297302432' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:53:27 np0005464891 nova_compute[259907]: 2025-10-01 16:53:27.799 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:53:27 np0005464891 nova_compute[259907]: 2025-10-01 16:53:27.803 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:53:27 np0005464891 nova_compute[259907]: 2025-10-01 16:53:27.803 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 12:53:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e354 do_prune osdmap full prune enabled
Oct  1 12:53:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e355 e355: 3 total, 3 up, 3 in
Oct  1 12:53:28 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e355: 3 total, 3 up, 3 in
Oct  1 12:53:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:53:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/459999901' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:53:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:53:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/459999901' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:53:28 np0005464891 nova_compute[259907]: 2025-10-01 16:53:28.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:53:28 np0005464891 nova_compute[259907]: 2025-10-01 16:53:28.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1524: 321 pgs: 321 active+clean; 121 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 2.7 MiB/s wr, 95 op/s
Oct  1 12:53:30 np0005464891 nova_compute[259907]: 2025-10-01 16:53:30.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:53:30 np0005464891 nova_compute[259907]: 2025-10-01 16:53:30.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:53:30 np0005464891 nova_compute[259907]: 2025-10-01 16:53:30.882 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:53:30 np0005464891 nova_compute[259907]: 2025-10-01 16:53:30.883 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:53:30 np0005464891 nova_compute[259907]: 2025-10-01 16:53:30.883 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:53:30 np0005464891 nova_compute[259907]: 2025-10-01 16:53:30.883 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 12:53:30 np0005464891 nova_compute[259907]: 2025-10-01 16:53:30.883 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:53:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1525: 321 pgs: 321 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 2.2 MiB/s wr, 144 op/s
Oct  1 12:53:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e355 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:53:31 np0005464891 nova_compute[259907]: 2025-10-01 16:53:31.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:53:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4115538812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:53:31 np0005464891 nova_compute[259907]: 2025-10-01 16:53:31.314 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:53:31 np0005464891 nova_compute[259907]: 2025-10-01 16:53:31.479 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:53:31 np0005464891 nova_compute[259907]: 2025-10-01 16:53:31.480 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4505MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 12:53:31 np0005464891 nova_compute[259907]: 2025-10-01 16:53:31.480 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:53:31 np0005464891 nova_compute[259907]: 2025-10-01 16:53:31.480 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:53:31 np0005464891 nova_compute[259907]: 2025-10-01 16:53:31.632 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 12:53:31 np0005464891 nova_compute[259907]: 2025-10-01 16:53:31.633 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 12:53:31 np0005464891 nova_compute[259907]: 2025-10-01 16:53:31.653 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:53:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:53:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2129180889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:53:32 np0005464891 nova_compute[259907]: 2025-10-01 16:53:32.078 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:53:32 np0005464891 nova_compute[259907]: 2025-10-01 16:53:32.085 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:53:32 np0005464891 nova_compute[259907]: 2025-10-01 16:53:32.140 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:53:32 np0005464891 nova_compute[259907]: 2025-10-01 16:53:32.194 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 12:53:32 np0005464891 nova_compute[259907]: 2025-10-01 16:53:32.195 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:53:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:53:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3451800457' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:53:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:53:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3451800457' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:53:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e355 do_prune osdmap full prune enabled
Oct  1 12:53:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e356 e356: 3 total, 3 up, 3 in
Oct  1 12:53:32 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e356: 3 total, 3 up, 3 in
Oct  1 12:53:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:53:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2398352608' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:53:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:53:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2398352608' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:53:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1527: 321 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 318 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 898 KiB/s wr, 152 op/s
Oct  1 12:53:33 np0005464891 nova_compute[259907]: 2025-10-01 16:53:33.194 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:53:33 np0005464891 nova_compute[259907]: 2025-10-01 16:53:33.195 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 12:53:33 np0005464891 nova_compute[259907]: 2025-10-01 16:53:33.195 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 12:53:33 np0005464891 nova_compute[259907]: 2025-10-01 16:53:33.242 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 12:53:33 np0005464891 nova_compute[259907]: 2025-10-01 16:53:33.242 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:53:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:53:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/307693736' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:53:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:53:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/307693736' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:53:33 np0005464891 nova_compute[259907]: 2025-10-01 16:53:33.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:53:33 np0005464891 nova_compute[259907]: 2025-10-01 16:53:33.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:33 np0005464891 nova_compute[259907]: 2025-10-01 16:53:33.880 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:53:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:53:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3842547857' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:53:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:53:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3842547857' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:53:34 np0005464891 nova_compute[259907]: 2025-10-01 16:53:34.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:53:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1528: 321 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 318 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 148 KiB/s rd, 900 KiB/s wr, 204 op/s
Oct  1 12:53:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:53:35 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/829144260' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:53:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:53:35 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/829144260' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:53:36 np0005464891 podman[289258]: 2025-10-01 16:53:36.020185509 +0000 UTC m=+0.122946443 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 12:53:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:53:36 np0005464891 nova_compute[259907]: 2025-10-01 16:53:36.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:53:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1682774844' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:53:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:53:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1682774844' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:53:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1529: 321 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 318 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 5.3 KiB/s wr, 156 op/s
Oct  1 12:53:38 np0005464891 nova_compute[259907]: 2025-10-01 16:53:38.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:53:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3967654994' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:53:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:53:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3967654994' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:53:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1530: 321 pgs: 321 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 5.1 KiB/s wr, 155 op/s
Oct  1 12:53:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e356 do_prune osdmap full prune enabled
Oct  1 12:53:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e357 e357: 3 total, 3 up, 3 in
Oct  1 12:53:39 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e357: 3 total, 3 up, 3 in
Oct  1 12:53:40 np0005464891 podman[289284]: 2025-10-01 16:53:40.954979979 +0000 UTC m=+0.069474489 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  1 12:53:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1532: 321 pgs: 321 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 7.1 KiB/s wr, 158 op/s
Oct  1 12:53:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:53:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e357 do_prune osdmap full prune enabled
Oct  1 12:53:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e358 e358: 3 total, 3 up, 3 in
Oct  1 12:53:41 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e358: 3 total, 3 up, 3 in
Oct  1 12:53:41 np0005464891 nova_compute[259907]: 2025-10-01 16:53:41.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:53:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3651136893' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:53:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:53:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3651136893' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:53:42 np0005464891 ovn_controller[152409]: 2025-10-01T16:53:42Z|00134|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct  1 12:53:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:53:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:53:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:53:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:53:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:53:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:53:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:53:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3533140140' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:53:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:53:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3533140140' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:53:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1534: 321 pgs: 321 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 4.4 KiB/s wr, 93 op/s
Oct  1 12:53:43 np0005464891 nova_compute[259907]: 2025-10-01 16:53:43.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e358 do_prune osdmap full prune enabled
Oct  1 12:53:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e359 e359: 3 total, 3 up, 3 in
Oct  1 12:53:44 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e359: 3 total, 3 up, 3 in
Oct  1 12:53:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1536: 321 pgs: 321 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 164 KiB/s rd, 10 KiB/s wr, 222 op/s
Oct  1 12:53:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e359 do_prune osdmap full prune enabled
Oct  1 12:53:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e360 e360: 3 total, 3 up, 3 in
Oct  1 12:53:45 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e360: 3 total, 3 up, 3 in
Oct  1 12:53:45 np0005464891 podman[289304]: 2025-10-01 16:53:45.959513145 +0000 UTC m=+0.073088107 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 12:53:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:53:46 np0005464891 nova_compute[259907]: 2025-10-01 16:53:46.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1538: 321 pgs: 321 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 6.2 KiB/s wr, 151 op/s
Oct  1 12:53:48 np0005464891 nova_compute[259907]: 2025-10-01 16:53:48.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1539: 321 pgs: 321 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 6.0 KiB/s wr, 131 op/s
Oct  1 12:53:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e360 do_prune osdmap full prune enabled
Oct  1 12:53:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e361 e361: 3 total, 3 up, 3 in
Oct  1 12:53:49 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e361: 3 total, 3 up, 3 in
Oct  1 12:53:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1541: 321 pgs: 321 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 9.4 KiB/s wr, 190 op/s
Oct  1 12:53:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:53:51 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3367390920' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:53:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:53:51 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3367390920' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:53:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:53:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e361 do_prune osdmap full prune enabled
Oct  1 12:53:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e362 e362: 3 total, 3 up, 3 in
Oct  1 12:53:51 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e362: 3 total, 3 up, 3 in
Oct  1 12:53:51 np0005464891 nova_compute[259907]: 2025-10-01 16:53:51.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1543: 321 pgs: 321 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 6.3 KiB/s wr, 116 op/s
Oct  1 12:53:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e362 do_prune osdmap full prune enabled
Oct  1 12:53:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e363 e363: 3 total, 3 up, 3 in
Oct  1 12:53:53 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e363: 3 total, 3 up, 3 in
Oct  1 12:53:53 np0005464891 nova_compute[259907]: 2025-10-01 16:53:53.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1545: 321 pgs: 321 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 143 KiB/s rd, 9.0 KiB/s wr, 185 op/s
Oct  1 12:53:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:53:56 np0005464891 nova_compute[259907]: 2025-10-01 16:53:56.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:56 np0005464891 podman[289323]: 2025-10-01 16:53:56.953608352 +0000 UTC m=+0.056858617 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  1 12:53:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1546: 321 pgs: 321 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 7.0 KiB/s wr, 143 op/s
Oct  1 12:53:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e363 do_prune osdmap full prune enabled
Oct  1 12:53:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e364 e364: 3 total, 3 up, 3 in
Oct  1 12:53:57 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e364: 3 total, 3 up, 3 in
Oct  1 12:53:58 np0005464891 nova_compute[259907]: 2025-10-01 16:53:58.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:53:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1548: 321 pgs: 321 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 5.2 KiB/s wr, 119 op/s
Oct  1 12:54:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1549: 321 pgs: 321 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 5.2 KiB/s wr, 114 op/s
Oct  1 12:54:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:54:01 np0005464891 nova_compute[259907]: 2025-10-01 16:54:01.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e364 do_prune osdmap full prune enabled
Oct  1 12:54:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e365 e365: 3 total, 3 up, 3 in
Oct  1 12:54:01 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e365: 3 total, 3 up, 3 in
Oct  1 12:54:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:54:01 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1274331054' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:54:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:54:01 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1274331054' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:54:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1551: 321 pgs: 321 active+clean; 88 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 37 KiB/s wr, 147 op/s
Oct  1 12:54:03 np0005464891 nova_compute[259907]: 2025-10-01 16:54:03.149 2 DEBUG oslo_concurrency.lockutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Acquiring lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:54:03 np0005464891 nova_compute[259907]: 2025-10-01 16:54:03.149 2 DEBUG oslo_concurrency.lockutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:54:03 np0005464891 nova_compute[259907]: 2025-10-01 16:54:03.167 2 DEBUG nova.compute.manager [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 12:54:03 np0005464891 nova_compute[259907]: 2025-10-01 16:54:03.247 2 DEBUG oslo_concurrency.lockutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:54:03 np0005464891 nova_compute[259907]: 2025-10-01 16:54:03.248 2 DEBUG oslo_concurrency.lockutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:54:03 np0005464891 nova_compute[259907]: 2025-10-01 16:54:03.257 2 DEBUG nova.virt.hardware [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 12:54:03 np0005464891 nova_compute[259907]: 2025-10-01 16:54:03.258 2 INFO nova.compute.claims [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 12:54:03 np0005464891 nova_compute[259907]: 2025-10-01 16:54:03.396 2 DEBUG oslo_concurrency.processutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:54:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e365 do_prune osdmap full prune enabled
Oct  1 12:54:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e366 e366: 3 total, 3 up, 3 in
Oct  1 12:54:03 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e366: 3 total, 3 up, 3 in
Oct  1 12:54:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:54:03 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1241244852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:54:03 np0005464891 nova_compute[259907]: 2025-10-01 16:54:03.876 2 DEBUG oslo_concurrency.processutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:54:03 np0005464891 nova_compute[259907]: 2025-10-01 16:54:03.884 2 DEBUG nova.compute.provider_tree [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:54:03 np0005464891 nova_compute[259907]: 2025-10-01 16:54:03.903 2 DEBUG nova.scheduler.client.report [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:54:03 np0005464891 nova_compute[259907]: 2025-10-01 16:54:03.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:03 np0005464891 nova_compute[259907]: 2025-10-01 16:54:03.939 2 DEBUG oslo_concurrency.lockutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:54:03 np0005464891 nova_compute[259907]: 2025-10-01 16:54:03.940 2 DEBUG nova.compute.manager [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.069 2 DEBUG nova.compute.manager [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.070 2 DEBUG nova.network.neutron [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.103 2 INFO nova.virt.libvirt.driver [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.125 2 DEBUG nova.compute.manager [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 12:54:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:54:04 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/547667813' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:54:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:54:04 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/547667813' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.200 2 DEBUG nova.compute.manager [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.201 2 DEBUG nova.virt.libvirt.driver [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.202 2 INFO nova.virt.libvirt.driver [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Creating image(s)#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.224 2 DEBUG nova.storage.rbd_utils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] rbd image ce0fbe07-9503-45c6-a10c-1c09f27dd045_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.245 2 DEBUG nova.storage.rbd_utils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] rbd image ce0fbe07-9503-45c6-a10c-1c09f27dd045_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.266 2 DEBUG nova.storage.rbd_utils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] rbd image ce0fbe07-9503-45c6-a10c-1c09f27dd045_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.269 2 DEBUG oslo_concurrency.processutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.295 2 DEBUG nova.policy [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '51f0df6e796a49c8b1e4f18f83b933f5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0abf1cc99d79491f87a03f334eb255f1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.347 2 DEBUG oslo_concurrency.processutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.349 2 DEBUG oslo_concurrency.lockutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Acquiring lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.349 2 DEBUG oslo_concurrency.lockutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.350 2 DEBUG oslo_concurrency.lockutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.372 2 DEBUG nova.storage.rbd_utils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] rbd image ce0fbe07-9503-45c6-a10c-1c09f27dd045_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.375 2 DEBUG oslo_concurrency.processutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa ce0fbe07-9503-45c6-a10c-1c09f27dd045_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.714 2 DEBUG oslo_concurrency.processutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa ce0fbe07-9503-45c6-a10c-1c09f27dd045_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.796 2 DEBUG nova.storage.rbd_utils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] resizing rbd image ce0fbe07-9503-45c6-a10c-1c09f27dd045_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.898 2 DEBUG nova.objects.instance [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lazy-loading 'migration_context' on Instance uuid ce0fbe07-9503-45c6-a10c-1c09f27dd045 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.912 2 DEBUG nova.virt.libvirt.driver [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.913 2 DEBUG nova.virt.libvirt.driver [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Ensure instance console log exists: /var/lib/nova/instances/ce0fbe07-9503-45c6-a10c-1c09f27dd045/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.913 2 DEBUG oslo_concurrency.lockutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.913 2 DEBUG oslo_concurrency.lockutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:54:04 np0005464891 nova_compute[259907]: 2025-10-01 16:54:04.914 2 DEBUG oslo_concurrency.lockutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:54:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1553: 321 pgs: 321 active+clean; 121 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 175 KiB/s rd, 1.7 MiB/s wr, 234 op/s
Oct  1 12:54:05 np0005464891 nova_compute[259907]: 2025-10-01 16:54:05.916 2 DEBUG nova.network.neutron [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Successfully created port: 8ef42ea7-b750-44b5-9353-fbc089ba0eef _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 12:54:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:54:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e366 do_prune osdmap full prune enabled
Oct  1 12:54:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e367 e367: 3 total, 3 up, 3 in
Oct  1 12:54:06 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e367: 3 total, 3 up, 3 in
Oct  1 12:54:06 np0005464891 nova_compute[259907]: 2025-10-01 16:54:06.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:54:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1157448344' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:54:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:54:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1157448344' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:54:06 np0005464891 podman[289530]: 2025-10-01 16:54:06.497556107 +0000 UTC m=+0.125234545 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  1 12:54:06 np0005464891 nova_compute[259907]: 2025-10-01 16:54:06.795 2 DEBUG nova.network.neutron [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Successfully updated port: 8ef42ea7-b750-44b5-9353-fbc089ba0eef _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 12:54:06 np0005464891 nova_compute[259907]: 2025-10-01 16:54:06.808 2 DEBUG oslo_concurrency.lockutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Acquiring lock "refresh_cache-ce0fbe07-9503-45c6-a10c-1c09f27dd045" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:54:06 np0005464891 nova_compute[259907]: 2025-10-01 16:54:06.809 2 DEBUG oslo_concurrency.lockutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Acquired lock "refresh_cache-ce0fbe07-9503-45c6-a10c-1c09f27dd045" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:54:06 np0005464891 nova_compute[259907]: 2025-10-01 16:54:06.809 2 DEBUG nova.network.neutron [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 12:54:06 np0005464891 nova_compute[259907]: 2025-10-01 16:54:06.957 2 DEBUG nova.compute.manager [req-50984531-4e07-429e-9a59-a9c196f458a1 req-b64ab668-b877-4bad-85e6-5a9e84cc396c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Received event network-changed-8ef42ea7-b750-44b5-9353-fbc089ba0eef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:54:06 np0005464891 nova_compute[259907]: 2025-10-01 16:54:06.957 2 DEBUG nova.compute.manager [req-50984531-4e07-429e-9a59-a9c196f458a1 req-b64ab668-b877-4bad-85e6-5a9e84cc396c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Refreshing instance network info cache due to event network-changed-8ef42ea7-b750-44b5-9353-fbc089ba0eef. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:54:06 np0005464891 nova_compute[259907]: 2025-10-01 16:54:06.958 2 DEBUG oslo_concurrency.lockutils [req-50984531-4e07-429e-9a59-a9c196f458a1 req-b64ab668-b877-4bad-85e6-5a9e84cc396c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-ce0fbe07-9503-45c6-a10c-1c09f27dd045" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:54:07 np0005464891 nova_compute[259907]: 2025-10-01 16:54:07.045 2 DEBUG nova.network.neutron [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 12:54:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1555: 321 pgs: 321 active+clean; 121 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 144 KiB/s rd, 2.1 MiB/s wr, 196 op/s
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.201 2 DEBUG nova.network.neutron [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Updating instance_info_cache with network_info: [{"id": "8ef42ea7-b750-44b5-9353-fbc089ba0eef", "address": "fa:16:3e:ea:bc:ef", "network": {"id": "c9d562fc-0c1c-4b41-aa7c-4cb07be574c7", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-481427918-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0abf1cc99d79491f87a03f334eb255f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ef42ea7-b7", "ovs_interfaceid": "8ef42ea7-b750-44b5-9353-fbc089ba0eef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.222 2 DEBUG oslo_concurrency.lockutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Releasing lock "refresh_cache-ce0fbe07-9503-45c6-a10c-1c09f27dd045" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.223 2 DEBUG nova.compute.manager [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Instance network_info: |[{"id": "8ef42ea7-b750-44b5-9353-fbc089ba0eef", "address": "fa:16:3e:ea:bc:ef", "network": {"id": "c9d562fc-0c1c-4b41-aa7c-4cb07be574c7", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-481427918-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0abf1cc99d79491f87a03f334eb255f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ef42ea7-b7", "ovs_interfaceid": "8ef42ea7-b750-44b5-9353-fbc089ba0eef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.224 2 DEBUG oslo_concurrency.lockutils [req-50984531-4e07-429e-9a59-a9c196f458a1 req-b64ab668-b877-4bad-85e6-5a9e84cc396c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-ce0fbe07-9503-45c6-a10c-1c09f27dd045" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.225 2 DEBUG nova.network.neutron [req-50984531-4e07-429e-9a59-a9c196f458a1 req-b64ab668-b877-4bad-85e6-5a9e84cc396c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Refreshing network info cache for port 8ef42ea7-b750-44b5-9353-fbc089ba0eef _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.230 2 DEBUG nova.virt.libvirt.driver [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Start _get_guest_xml network_info=[{"id": "8ef42ea7-b750-44b5-9353-fbc089ba0eef", "address": "fa:16:3e:ea:bc:ef", "network": {"id": "c9d562fc-0c1c-4b41-aa7c-4cb07be574c7", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-481427918-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0abf1cc99d79491f87a03f334eb255f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ef42ea7-b7", "ovs_interfaceid": "8ef42ea7-b750-44b5-9353-fbc089ba0eef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'image_id': 'f01c1e7c-fea3-4433-a44a-d71153552c78'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.236 2 WARNING nova.virt.libvirt.driver [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.246 2 DEBUG nova.virt.libvirt.host [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.247 2 DEBUG nova.virt.libvirt.host [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.251 2 DEBUG nova.virt.libvirt.host [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.252 2 DEBUG nova.virt.libvirt.host [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.253 2 DEBUG nova.virt.libvirt.driver [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.253 2 DEBUG nova.virt.hardware [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.254 2 DEBUG nova.virt.hardware [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.255 2 DEBUG nova.virt.hardware [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.255 2 DEBUG nova.virt.hardware [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.255 2 DEBUG nova.virt.hardware [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.256 2 DEBUG nova.virt.hardware [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.256 2 DEBUG nova.virt.hardware [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.257 2 DEBUG nova.virt.hardware [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.257 2 DEBUG nova.virt.hardware [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.258 2 DEBUG nova.virt.hardware [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.258 2 DEBUG nova.virt.hardware [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.263 2 DEBUG oslo_concurrency.processutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:54:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:54:08 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/211011416' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.727 2 DEBUG oslo_concurrency.processutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.748 2 DEBUG nova.storage.rbd_utils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] rbd image ce0fbe07-9503-45c6-a10c-1c09f27dd045_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.752 2 DEBUG oslo_concurrency.processutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:54:08 np0005464891 nova_compute[259907]: 2025-10-01 16:54:08.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1556: 321 pgs: 321 active+clean; 134 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 2.9 MiB/s wr, 195 op/s
Oct  1 12:54:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:54:09 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1447501668' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.201 2 DEBUG oslo_concurrency.processutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.203 2 DEBUG nova.virt.libvirt.vif [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:54:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-804282757',display_name='tempest-TestEncryptedCinderVolumes-server-804282757',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-804282757',id=14,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1+uYIng/Nm/+oVaE3GM9Sm1tsjQriWZRlO6Bwtj76OMNUHXXErOUruu8mQcuHyP0af9JGljokaMhudZEWQrshT5dgncNDxJtUA3fyYEY0H2suKuHwykEs/LfW1SBu3vQ==',key_name='tempest-keypair-720287870',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0abf1cc99d79491f87a03f334eb255f1',ramdisk_id='',reservation_id='r-gdhsmpbc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1608655835',owner_user_name='tempest-TestEncryptedCinderVolumes-1608655835-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:54:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51f0df6e796a49c8b1e4f18f83b933f5',uuid=ce0fbe07-9503-45c6-a10c-1c09f27dd045,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8ef42ea7-b750-44b5-9353-fbc089ba0eef", "address": "fa:16:3e:ea:bc:ef", "network": {"id": "c9d562fc-0c1c-4b41-aa7c-4cb07be574c7", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-481427918-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0abf1cc99d79491f87a03f334eb255f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ef42ea7-b7", "ovs_interfaceid": "8ef42ea7-b750-44b5-9353-fbc089ba0eef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.203 2 DEBUG nova.network.os_vif_util [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Converting VIF {"id": "8ef42ea7-b750-44b5-9353-fbc089ba0eef", "address": "fa:16:3e:ea:bc:ef", "network": {"id": "c9d562fc-0c1c-4b41-aa7c-4cb07be574c7", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-481427918-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0abf1cc99d79491f87a03f334eb255f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ef42ea7-b7", "ovs_interfaceid": "8ef42ea7-b750-44b5-9353-fbc089ba0eef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.204 2 DEBUG nova.network.os_vif_util [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:bc:ef,bridge_name='br-int',has_traffic_filtering=True,id=8ef42ea7-b750-44b5-9353-fbc089ba0eef,network=Network(c9d562fc-0c1c-4b41-aa7c-4cb07be574c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ef42ea7-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.205 2 DEBUG nova.objects.instance [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lazy-loading 'pci_devices' on Instance uuid ce0fbe07-9503-45c6-a10c-1c09f27dd045 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.223 2 DEBUG nova.virt.libvirt.driver [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] End _get_guest_xml xml=<domain type="kvm">
Oct  1 12:54:09 np0005464891 nova_compute[259907]:  <uuid>ce0fbe07-9503-45c6-a10c-1c09f27dd045</uuid>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:  <name>instance-0000000e</name>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-804282757</nova:name>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 16:54:08</nova:creationTime>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 12:54:09 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:        <nova:user uuid="51f0df6e796a49c8b1e4f18f83b933f5">tempest-TestEncryptedCinderVolumes-1608655835-project-member</nova:user>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:        <nova:project uuid="0abf1cc99d79491f87a03f334eb255f1">tempest-TestEncryptedCinderVolumes-1608655835</nova:project>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <nova:root type="image" uuid="f01c1e7c-fea3-4433-a44a-d71153552c78"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:        <nova:port uuid="8ef42ea7-b750-44b5-9353-fbc089ba0eef">
Oct  1 12:54:09 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <system>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <entry name="serial">ce0fbe07-9503-45c6-a10c-1c09f27dd045</entry>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <entry name="uuid">ce0fbe07-9503-45c6-a10c-1c09f27dd045</entry>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    </system>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:  <os>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:  </clock>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/ce0fbe07-9503-45c6-a10c-1c09f27dd045_disk">
Oct  1 12:54:09 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:54:09 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/ce0fbe07-9503-45c6-a10c-1c09f27dd045_disk.config">
Oct  1 12:54:09 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:54:09 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:ea:bc:ef"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <target dev="tap8ef42ea7-b7"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/ce0fbe07-9503-45c6-a10c-1c09f27dd045/console.log" append="off"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    </serial>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <video>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 12:54:09 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 12:54:09 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:54:09 np0005464891 nova_compute[259907]: </domain>
Oct  1 12:54:09 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.225 2 DEBUG nova.compute.manager [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Preparing to wait for external event network-vif-plugged-8ef42ea7-b750-44b5-9353-fbc089ba0eef prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.225 2 DEBUG oslo_concurrency.lockutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Acquiring lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.225 2 DEBUG oslo_concurrency.lockutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.226 2 DEBUG oslo_concurrency.lockutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.226 2 DEBUG nova.virt.libvirt.vif [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:54:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-804282757',display_name='tempest-TestEncryptedCinderVolumes-server-804282757',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-804282757',id=14,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1+uYIng/Nm/+oVaE3GM9Sm1tsjQriWZRlO6Bwtj76OMNUHXXErOUruu8mQcuHyP0af9JGljokaMhudZEWQrshT5dgncNDxJtUA3fyYEY0H2suKuHwykEs/LfW1SBu3vQ==',key_name='tempest-keypair-720287870',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0abf1cc99d79491f87a03f334eb255f1',ramdisk_id='',reservation_id='r-gdhsmpbc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1608655835',owner_user_name='tempest-TestEncryptedCinderVolumes-1608655835-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:54:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51f0df6e796a49c8b1e4f18f83b933f5',uuid=ce0fbe07-9503-45c6-a10c-1c09f27dd045,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8ef42ea7-b750-44b5-9353-fbc089ba0eef", "address": "fa:16:3e:ea:bc:ef", "network": {"id": "c9d562fc-0c1c-4b41-aa7c-4cb07be574c7", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-481427918-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0abf1cc99d79491f87a03f334eb255f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ef42ea7-b7", "ovs_interfaceid": "8ef42ea7-b750-44b5-9353-fbc089ba0eef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.227 2 DEBUG nova.network.os_vif_util [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Converting VIF {"id": "8ef42ea7-b750-44b5-9353-fbc089ba0eef", "address": "fa:16:3e:ea:bc:ef", "network": {"id": "c9d562fc-0c1c-4b41-aa7c-4cb07be574c7", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-481427918-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0abf1cc99d79491f87a03f334eb255f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ef42ea7-b7", "ovs_interfaceid": "8ef42ea7-b750-44b5-9353-fbc089ba0eef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.227 2 DEBUG nova.network.os_vif_util [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:bc:ef,bridge_name='br-int',has_traffic_filtering=True,id=8ef42ea7-b750-44b5-9353-fbc089ba0eef,network=Network(c9d562fc-0c1c-4b41-aa7c-4cb07be574c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ef42ea7-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.228 2 DEBUG os_vif [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:bc:ef,bridge_name='br-int',has_traffic_filtering=True,id=8ef42ea7-b750-44b5-9353-fbc089ba0eef,network=Network(c9d562fc-0c1c-4b41-aa7c-4cb07be574c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ef42ea7-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.229 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.229 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.236 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ef42ea7-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.237 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8ef42ea7-b7, col_values=(('external_ids', {'iface-id': '8ef42ea7-b750-44b5-9353-fbc089ba0eef', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ea:bc:ef', 'vm-uuid': 'ce0fbe07-9503-45c6-a10c-1c09f27dd045'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:09 np0005464891 NetworkManager[44940]: <info>  [1759337649.2399] manager: (tap8ef42ea7-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.247 2 INFO os_vif [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:bc:ef,bridge_name='br-int',has_traffic_filtering=True,id=8ef42ea7-b750-44b5-9353-fbc089ba0eef,network=Network(c9d562fc-0c1c-4b41-aa7c-4cb07be574c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ef42ea7-b7')#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.298 2 DEBUG nova.virt.libvirt.driver [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.298 2 DEBUG nova.virt.libvirt.driver [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.299 2 DEBUG nova.virt.libvirt.driver [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] No VIF found with MAC fa:16:3e:ea:bc:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.299 2 INFO nova.virt.libvirt.driver [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Using config drive#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.325 2 DEBUG nova.storage.rbd_utils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] rbd image ce0fbe07-9503-45c6-a10c-1c09f27dd045_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.875 2 DEBUG nova.network.neutron [req-50984531-4e07-429e-9a59-a9c196f458a1 req-b64ab668-b877-4bad-85e6-5a9e84cc396c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Updated VIF entry in instance network info cache for port 8ef42ea7-b750-44b5-9353-fbc089ba0eef. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.875 2 DEBUG nova.network.neutron [req-50984531-4e07-429e-9a59-a9c196f458a1 req-b64ab668-b877-4bad-85e6-5a9e84cc396c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Updating instance_info_cache with network_info: [{"id": "8ef42ea7-b750-44b5-9353-fbc089ba0eef", "address": "fa:16:3e:ea:bc:ef", "network": {"id": "c9d562fc-0c1c-4b41-aa7c-4cb07be574c7", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-481427918-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0abf1cc99d79491f87a03f334eb255f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ef42ea7-b7", "ovs_interfaceid": "8ef42ea7-b750-44b5-9353-fbc089ba0eef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:54:09 np0005464891 nova_compute[259907]: 2025-10-01 16:54:09.895 2 DEBUG oslo_concurrency.lockutils [req-50984531-4e07-429e-9a59-a9c196f458a1 req-b64ab668-b877-4bad-85e6-5a9e84cc396c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-ce0fbe07-9503-45c6-a10c-1c09f27dd045" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:54:10 np0005464891 nova_compute[259907]: 2025-10-01 16:54:10.003 2 INFO nova.virt.libvirt.driver [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Creating config drive at /var/lib/nova/instances/ce0fbe07-9503-45c6-a10c-1c09f27dd045/disk.config#033[00m
Oct  1 12:54:10 np0005464891 nova_compute[259907]: 2025-10-01 16:54:10.011 2 DEBUG oslo_concurrency.processutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ce0fbe07-9503-45c6-a10c-1c09f27dd045/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt9cb3jq2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:54:10 np0005464891 nova_compute[259907]: 2025-10-01 16:54:10.141 2 DEBUG oslo_concurrency.processutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ce0fbe07-9503-45c6-a10c-1c09f27dd045/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt9cb3jq2" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:54:10 np0005464891 nova_compute[259907]: 2025-10-01 16:54:10.169 2 DEBUG nova.storage.rbd_utils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] rbd image ce0fbe07-9503-45c6-a10c-1c09f27dd045_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:54:10 np0005464891 nova_compute[259907]: 2025-10-01 16:54:10.172 2 DEBUG oslo_concurrency.processutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ce0fbe07-9503-45c6-a10c-1c09f27dd045/disk.config ce0fbe07-9503-45c6-a10c-1c09f27dd045_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:54:10 np0005464891 nova_compute[259907]: 2025-10-01 16:54:10.309 2 DEBUG oslo_concurrency.processutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ce0fbe07-9503-45c6-a10c-1c09f27dd045/disk.config ce0fbe07-9503-45c6-a10c-1c09f27dd045_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:54:10 np0005464891 nova_compute[259907]: 2025-10-01 16:54:10.310 2 INFO nova.virt.libvirt.driver [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Deleting local config drive /var/lib/nova/instances/ce0fbe07-9503-45c6-a10c-1c09f27dd045/disk.config because it was imported into RBD.#033[00m
Oct  1 12:54:10 np0005464891 kernel: tap8ef42ea7-b7: entered promiscuous mode
Oct  1 12:54:10 np0005464891 NetworkManager[44940]: <info>  [1759337650.3664] manager: (tap8ef42ea7-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/82)
Oct  1 12:54:10 np0005464891 ovn_controller[152409]: 2025-10-01T16:54:10Z|00135|binding|INFO|Claiming lport 8ef42ea7-b750-44b5-9353-fbc089ba0eef for this chassis.
Oct  1 12:54:10 np0005464891 ovn_controller[152409]: 2025-10-01T16:54:10Z|00136|binding|INFO|8ef42ea7-b750-44b5-9353-fbc089ba0eef: Claiming fa:16:3e:ea:bc:ef 10.100.0.5
Oct  1 12:54:10 np0005464891 nova_compute[259907]: 2025-10-01 16:54:10.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:10 np0005464891 nova_compute[259907]: 2025-10-01 16:54:10.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:10 np0005464891 nova_compute[259907]: 2025-10-01 16:54:10.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.382 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:bc:ef 10.100.0.5'], port_security=['fa:16:3e:ea:bc:ef 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ce0fbe07-9503-45c6-a10c-1c09f27dd045', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0abf1cc99d79491f87a03f334eb255f1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c30cafeb-2af0-4af1-bd27-6551ccb4bcc6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=41ea2ae0-f911-4d79-a8de-235bf805e7ec, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=8ef42ea7-b750-44b5-9353-fbc089ba0eef) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.384 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 8ef42ea7-b750-44b5-9353-fbc089ba0eef in datapath c9d562fc-0c1c-4b41-aa7c-4cb07be574c7 bound to our chassis#033[00m
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.386 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c9d562fc-0c1c-4b41-aa7c-4cb07be574c7#033[00m
Oct  1 12:54:10 np0005464891 systemd-udevd[289692]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:54:10 np0005464891 systemd-machined[214891]: New machine qemu-14-instance-0000000e.
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.400 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[3cfc1c11-2f4e-422d-9beb-a54e3a2fd58e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.401 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc9d562fc-01 in ovnmeta-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.404 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc9d562fc-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.404 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[4d292c2b-3d90-40de-9d33-44f485eb3280]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.405 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[85e2ad5f-6707-4ef6-9cc5-d1d480bf60bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:10 np0005464891 NetworkManager[44940]: <info>  [1759337650.4171] device (tap8ef42ea7-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:54:10 np0005464891 NetworkManager[44940]: <info>  [1759337650.4221] device (tap8ef42ea7-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.424 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[3c9c8a83-e8a3-4376-922d-194f6529ece1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:10 np0005464891 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.458 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[0d326720-2d59-43a4-98d9-e3acba10d80f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:10 np0005464891 nova_compute[259907]: 2025-10-01 16:54:10.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:10 np0005464891 ovn_controller[152409]: 2025-10-01T16:54:10Z|00137|binding|INFO|Setting lport 8ef42ea7-b750-44b5-9353-fbc089ba0eef ovn-installed in OVS
Oct  1 12:54:10 np0005464891 ovn_controller[152409]: 2025-10-01T16:54:10Z|00138|binding|INFO|Setting lport 8ef42ea7-b750-44b5-9353-fbc089ba0eef up in Southbound
Oct  1 12:54:10 np0005464891 nova_compute[259907]: 2025-10-01 16:54:10.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.497 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[cb956350-70f5-4408-a892-ab94020f9f4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:10 np0005464891 systemd-udevd[289695]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:54:10 np0005464891 NetworkManager[44940]: <info>  [1759337650.5074] manager: (tapc9d562fc-00): new Veth device (/org/freedesktop/NetworkManager/Devices/83)
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.508 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[485b9b67-a135-49fd-ba53-633912d78922]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.546 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f4cd3b-28a7-48e5-ac4e-f3c72c967e31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.551 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[e037b9f7-7c70-4ed8-8dc8-abf407427fed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:10 np0005464891 NetworkManager[44940]: <info>  [1759337650.5771] device (tapc9d562fc-00): carrier: link connected
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.583 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[6a341533-3b2f-4b8f-8c99-5e9b738ff890]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.602 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[3f5b990d-f934-4b97-8ba5-10c6b84fbc1f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc9d562fc-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3d:3a:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459914, 'reachable_time': 25302, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289724, 'error': None, 'target': 'ovnmeta-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.621 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[a7233471-011c-4d29-baf9-9378dd2a0fa3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3d:3a7b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 459914, 'tstamp': 459914}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289725, 'error': None, 'target': 'ovnmeta-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.641 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[d3178215-a0da-4c3b-a26e-dc4d6abadaa3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc9d562fc-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3d:3a:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459914, 'reachable_time': 25302, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289726, 'error': None, 'target': 'ovnmeta-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.672 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[ef291e70-12ef-4355-9c68-f59c0e0265f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.734 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[acce21c8-a810-4c2a-a7b2-95f3fe50a12f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.735 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9d562fc-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.736 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.736 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc9d562fc-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:54:10 np0005464891 nova_compute[259907]: 2025-10-01 16:54:10.737 2 DEBUG nova.compute.manager [req-1af61870-a163-4e61-a4cd-3879c15cf4a8 req-22959029-cbc1-4808-9c58-d702be9ece7d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Received event network-vif-plugged-8ef42ea7-b750-44b5-9353-fbc089ba0eef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:54:10 np0005464891 nova_compute[259907]: 2025-10-01 16:54:10.738 2 DEBUG oslo_concurrency.lockutils [req-1af61870-a163-4e61-a4cd-3879c15cf4a8 req-22959029-cbc1-4808-9c58-d702be9ece7d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:54:10 np0005464891 nova_compute[259907]: 2025-10-01 16:54:10.738 2 DEBUG oslo_concurrency.lockutils [req-1af61870-a163-4e61-a4cd-3879c15cf4a8 req-22959029-cbc1-4808-9c58-d702be9ece7d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:54:10 np0005464891 NetworkManager[44940]: <info>  [1759337650.7386] manager: (tapc9d562fc-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Oct  1 12:54:10 np0005464891 nova_compute[259907]: 2025-10-01 16:54:10.738 2 DEBUG oslo_concurrency.lockutils [req-1af61870-a163-4e61-a4cd-3879c15cf4a8 req-22959029-cbc1-4808-9c58-d702be9ece7d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:54:10 np0005464891 nova_compute[259907]: 2025-10-01 16:54:10.738 2 DEBUG nova.compute.manager [req-1af61870-a163-4e61-a4cd-3879c15cf4a8 req-22959029-cbc1-4808-9c58-d702be9ece7d af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Processing event network-vif-plugged-8ef42ea7-b750-44b5-9353-fbc089ba0eef _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 12:54:10 np0005464891 nova_compute[259907]: 2025-10-01 16:54:10.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:10 np0005464891 kernel: tapc9d562fc-00: entered promiscuous mode
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.741 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc9d562fc-00, col_values=(('external_ids', {'iface-id': 'fe368bf3-4945-4229-95fa-fd16f8dbee93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:54:10 np0005464891 nova_compute[259907]: 2025-10-01 16:54:10.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:10 np0005464891 ovn_controller[152409]: 2025-10-01T16:54:10Z|00139|binding|INFO|Releasing lport fe368bf3-4945-4229-95fa-fd16f8dbee93 from this chassis (sb_readonly=0)
Oct  1 12:54:10 np0005464891 nova_compute[259907]: 2025-10-01 16:54:10.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.760 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c9d562fc-0c1c-4b41-aa7c-4cb07be574c7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c9d562fc-0c1c-4b41-aa7c-4cb07be574c7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.761 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[98d21853-82eb-40e1-9a70-9eaa9e7b86db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.761 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/c9d562fc-0c1c-4b41-aa7c-4cb07be574c7.pid.haproxy
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID c9d562fc-0c1c-4b41-aa7c-4cb07be574c7
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 12:54:10 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:10.762 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7', 'env', 'PROCESS_TAG=haproxy-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c9d562fc-0c1c-4b41-aa7c-4cb07be574c7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 12:54:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1557: 321 pgs: 321 active+clean; 134 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 2.7 MiB/s wr, 127 op/s
Oct  1 12:54:11 np0005464891 podman[289800]: 2025-10-01 16:54:11.123035552 +0000 UTC m=+0.054927676 container create f52886f1c11c0476ad2ae2d3a21de8c8955a097c6060ffb2cc80bc1448c22863 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  1 12:54:11 np0005464891 systemd[1]: Started libpod-conmon-f52886f1c11c0476ad2ae2d3a21de8c8955a097c6060ffb2cc80bc1448c22863.scope.
Oct  1 12:54:11 np0005464891 podman[289800]: 2025-10-01 16:54:11.089096025 +0000 UTC m=+0.020988169 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 12:54:11 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:54:11 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96bbee777dc2df8955b3662960af3d1294adf681d3f83e062c8c58834cc3094a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 12:54:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:54:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e367 do_prune osdmap full prune enabled
Oct  1 12:54:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e368 e368: 3 total, 3 up, 3 in
Oct  1 12:54:11 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e368: 3 total, 3 up, 3 in
Oct  1 12:54:11 np0005464891 podman[289800]: 2025-10-01 16:54:11.211947734 +0000 UTC m=+0.143839868 container init f52886f1c11c0476ad2ae2d3a21de8c8955a097c6060ffb2cc80bc1448c22863 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  1 12:54:11 np0005464891 podman[289800]: 2025-10-01 16:54:11.219725445 +0000 UTC m=+0.151617559 container start f52886f1c11c0476ad2ae2d3a21de8c8955a097c6060ffb2cc80bc1448c22863 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  1 12:54:11 np0005464891 neutron-haproxy-ovnmeta-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7[289816]: [NOTICE]   (289838) : New worker (289841) forked
Oct  1 12:54:11 np0005464891 neutron-haproxy-ovnmeta-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7[289816]: [NOTICE]   (289838) : Loading success.
Oct  1 12:54:11 np0005464891 podman[289813]: 2025-10-01 16:54:11.255385127 +0000 UTC m=+0.085033998 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.341 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337651.340805, ce0fbe07-9503-45c6-a10c-1c09f27dd045 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.341 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] VM Started (Lifecycle Event)#033[00m
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.343 2 DEBUG nova.compute.manager [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.349 2 DEBUG nova.virt.libvirt.driver [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.352 2 INFO nova.virt.libvirt.driver [-] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Instance spawned successfully.
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.353 2 DEBUG nova.virt.libvirt.driver [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.386 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.397 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.401 2 DEBUG nova.virt.libvirt.driver [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.402 2 DEBUG nova.virt.libvirt.driver [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.403 2 DEBUG nova.virt.libvirt.driver [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.403 2 DEBUG nova.virt.libvirt.driver [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.404 2 DEBUG nova.virt.libvirt.driver [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.404 2 DEBUG nova.virt.libvirt.driver [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.466 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.467 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337651.340885, ce0fbe07-9503-45c6-a10c-1c09f27dd045 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.468 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] VM Paused (Lifecycle Event)
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.490 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.494 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337651.3475559, ce0fbe07-9503-45c6-a10c-1c09f27dd045 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.495 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] VM Resumed (Lifecycle Event)
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.506 2 INFO nova.compute.manager [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Took 7.30 seconds to spawn the instance on the hypervisor.
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.507 2 DEBUG nova.compute.manager [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.518 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.523 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.551 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.583 2 INFO nova.compute.manager [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Took 8.37 seconds to build instance.
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.600 2 DEBUG oslo_concurrency.lockutils [None req-c99a699c-6664-4b2e-9ecd-d4f0f2fd2866 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.451s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.945 2 DEBUG oslo_concurrency.lockutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Acquiring lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.946 2 DEBUG oslo_concurrency.lockutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:54:11 np0005464891 nova_compute[259907]: 2025-10-01 16:54:11.961 2 DEBUG nova.compute.manager [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  1 12:54:12 np0005464891 nova_compute[259907]: 2025-10-01 16:54:12.031 2 DEBUG oslo_concurrency.lockutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:54:12 np0005464891 nova_compute[259907]: 2025-10-01 16:54:12.031 2 DEBUG oslo_concurrency.lockutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:54:12 np0005464891 nova_compute[259907]: 2025-10-01 16:54:12.037 2 DEBUG nova.virt.hardware [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  1 12:54:12 np0005464891 nova_compute[259907]: 2025-10-01 16:54:12.037 2 INFO nova.compute.claims [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Claim successful on node compute-0.ctlplane.example.com
Oct  1 12:54:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:54:12
Oct  1 12:54:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:54:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:54:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'vms', 'default.rgw.log', 'default.rgw.meta', 'images']
Oct  1 12:54:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:54:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:54:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:54:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:54:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:54:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:54:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:54:12 np0005464891 nova_compute[259907]: 2025-10-01 16:54:12.163 2 DEBUG oslo_concurrency.processutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 12:54:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:54:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:54:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:54:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:54:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:54:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:54:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:54:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:54:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:54:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:54:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:12.456 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:54:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:12.457 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:54:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:12.457 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:54:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:54:12 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2495165203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:54:12 np0005464891 nova_compute[259907]: 2025-10-01 16:54:12.670 2 DEBUG oslo_concurrency.processutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 12:54:12 np0005464891 nova_compute[259907]: 2025-10-01 16:54:12.677 2 DEBUG nova.compute.provider_tree [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  1 12:54:12 np0005464891 nova_compute[259907]: 2025-10-01 16:54:12.708 2 DEBUG nova.scheduler.client.report [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  1 12:54:12 np0005464891 nova_compute[259907]: 2025-10-01 16:54:12.759 2 DEBUG oslo_concurrency.lockutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:54:12 np0005464891 nova_compute[259907]: 2025-10-01 16:54:12.760 2 DEBUG nova.compute.manager [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  1 12:54:12 np0005464891 nova_compute[259907]: 2025-10-01 16:54:12.804 2 DEBUG nova.compute.manager [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  1 12:54:12 np0005464891 nova_compute[259907]: 2025-10-01 16:54:12.805 2 DEBUG nova.network.neutron [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  1 12:54:12 np0005464891 nova_compute[259907]: 2025-10-01 16:54:12.828 2 INFO nova.virt.libvirt.driver [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  1 12:54:12 np0005464891 nova_compute[259907]: 2025-10-01 16:54:12.852 2 DEBUG nova.compute.manager [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  1 12:54:12 np0005464891 nova_compute[259907]: 2025-10-01 16:54:12.947 2 DEBUG nova.compute.manager [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  1 12:54:12 np0005464891 nova_compute[259907]: 2025-10-01 16:54:12.949 2 DEBUG nova.virt.libvirt.driver [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  1 12:54:12 np0005464891 nova_compute[259907]: 2025-10-01 16:54:12.950 2 INFO nova.virt.libvirt.driver [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Creating image(s)
Oct  1 12:54:12 np0005464891 nova_compute[259907]: 2025-10-01 16:54:12.967 2 DEBUG nova.storage.rbd_utils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] rbd image eef473c3-8fff-4cd4-a5f8-ef9b89b7439a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  1 12:54:12 np0005464891 nova_compute[259907]: 2025-10-01 16:54:12.985 2 DEBUG nova.storage.rbd_utils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] rbd image eef473c3-8fff-4cd4-a5f8-ef9b89b7439a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  1 12:54:13 np0005464891 nova_compute[259907]: 2025-10-01 16:54:13.008 2 DEBUG nova.storage.rbd_utils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] rbd image eef473c3-8fff-4cd4-a5f8-ef9b89b7439a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  1 12:54:13 np0005464891 nova_compute[259907]: 2025-10-01 16:54:13.012 2 DEBUG oslo_concurrency.processutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 12:54:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1559: 321 pgs: 321 active+clean; 134 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 93 op/s
Oct  1 12:54:13 np0005464891 nova_compute[259907]: 2025-10-01 16:54:13.096 2 DEBUG oslo_concurrency.processutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 12:54:13 np0005464891 nova_compute[259907]: 2025-10-01 16:54:13.098 2 DEBUG oslo_concurrency.lockutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Acquiring lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:54:13 np0005464891 nova_compute[259907]: 2025-10-01 16:54:13.099 2 DEBUG oslo_concurrency.lockutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:54:13 np0005464891 nova_compute[259907]: 2025-10-01 16:54:13.099 2 DEBUG oslo_concurrency.lockutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:54:13 np0005464891 nova_compute[259907]: 2025-10-01 16:54:13.137 2 DEBUG nova.storage.rbd_utils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] rbd image eef473c3-8fff-4cd4-a5f8-ef9b89b7439a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  1 12:54:13 np0005464891 nova_compute[259907]: 2025-10-01 16:54:13.143 2 DEBUG oslo_concurrency.processutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa eef473c3-8fff-4cd4-a5f8-ef9b89b7439a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e368 do_prune osdmap full prune enabled
Oct  1 12:54:13 np0005464891 nova_compute[259907]: 2025-10-01 16:54:13.219 2 DEBUG nova.compute.manager [req-4073e7ac-b18b-412b-ae2b-53168f967a16 req-ed9f2495-5397-41ba-aa69-26224b246850 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Received event network-vif-plugged-8ef42ea7-b750-44b5-9353-fbc089ba0eef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  1 12:54:13 np0005464891 nova_compute[259907]: 2025-10-01 16:54:13.219 2 DEBUG oslo_concurrency.lockutils [req-4073e7ac-b18b-412b-ae2b-53168f967a16 req-ed9f2495-5397-41ba-aa69-26224b246850 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:54:13 np0005464891 nova_compute[259907]: 2025-10-01 16:54:13.219 2 DEBUG oslo_concurrency.lockutils [req-4073e7ac-b18b-412b-ae2b-53168f967a16 req-ed9f2495-5397-41ba-aa69-26224b246850 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:54:13 np0005464891 nova_compute[259907]: 2025-10-01 16:54:13.220 2 DEBUG oslo_concurrency.lockutils [req-4073e7ac-b18b-412b-ae2b-53168f967a16 req-ed9f2495-5397-41ba-aa69-26224b246850 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:54:13 np0005464891 nova_compute[259907]: 2025-10-01 16:54:13.220 2 DEBUG nova.compute.manager [req-4073e7ac-b18b-412b-ae2b-53168f967a16 req-ed9f2495-5397-41ba-aa69-26224b246850 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] No waiting events found dispatching network-vif-plugged-8ef42ea7-b750-44b5-9353-fbc089ba0eef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e369 e369: 3 total, 3 up, 3 in
Oct  1 12:54:13 np0005464891 nova_compute[259907]: 2025-10-01 16:54:13.220 2 WARNING nova.compute.manager [req-4073e7ac-b18b-412b-ae2b-53168f967a16 req-ed9f2495-5397-41ba-aa69-26224b246850 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Received unexpected event network-vif-plugged-8ef42ea7-b750-44b5-9353-fbc089ba0eef for instance with vm_state active and task_state None.
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e369: 3 total, 3 up, 3 in
Oct  1 12:54:13 np0005464891 nova_compute[259907]: 2025-10-01 16:54:13.233 2 DEBUG nova.policy [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9dcf2401f8724e5b8337ca100dda75db', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f6195d07ebe4991a5be01fb7ba2afdc', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 12:54:13 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:54:13 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 1c8f4e23-fe02-4898-b13d-e86e7701288b does not exist
Oct  1 12:54:13 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev c1170214-ccf0-46ac-b099-46e2fb28329b does not exist
Oct  1 12:54:13 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev ce88ec50-59ec-4455-84ea-b662e2cc54aa does not exist
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:54:13.560416) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337653560507, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2846, "num_deletes": 551, "total_data_size": 3519845, "memory_usage": 3584128, "flush_reason": "Manual Compaction"}
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337653599110, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3455731, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28839, "largest_seqno": 31683, "table_properties": {"data_size": 3442889, "index_size": 8056, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3717, "raw_key_size": 31089, "raw_average_key_size": 21, "raw_value_size": 3414915, "raw_average_value_size": 2329, "num_data_blocks": 346, "num_entries": 1466, "num_filter_entries": 1466, "num_deletions": 551, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759337505, "oldest_key_time": 1759337505, "file_creation_time": 1759337653, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 38725 microseconds, and 6840 cpu microseconds.
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:54:13.599155) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3455731 bytes OK
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:54:13.599176) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:54:13.601775) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:54:13.601801) EVENT_LOG_v1 {"time_micros": 1759337653601795, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:54:13.601823) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3506296, prev total WAL file size 3506296, number of live WAL files 2.
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:54:13.602716) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3374KB)], [62(8895KB)]
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337653602784, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 12564554, "oldest_snapshot_seqno": -1}
Oct  1 12:54:13 np0005464891 nova_compute[259907]: 2025-10-01 16:54:13.611 2 DEBUG oslo_concurrency.processutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa eef473c3-8fff-4cd4-a5f8-ef9b89b7439a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:54:13 np0005464891 nova_compute[259907]: 2025-10-01 16:54:13.682 2 DEBUG nova.storage.rbd_utils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] resizing rbd image eef473c3-8fff-4cd4-a5f8-ef9b89b7439a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6037 keys, 10805732 bytes, temperature: kUnknown
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337653816584, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10805732, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10758721, "index_size": 30837, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 152130, "raw_average_key_size": 25, "raw_value_size": 10643501, "raw_average_value_size": 1763, "num_data_blocks": 1244, "num_entries": 6037, "num_filter_entries": 6037, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759337653, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:54:13.816924) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10805732 bytes
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:54:13.823432) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 58.7 rd, 50.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.7 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 7125, records dropped: 1088 output_compression: NoCompression
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:54:13.823494) EVENT_LOG_v1 {"time_micros": 1759337653823478, "job": 34, "event": "compaction_finished", "compaction_time_micros": 213938, "compaction_time_cpu_micros": 36262, "output_level": 6, "num_output_files": 1, "total_output_size": 10805732, "num_input_records": 7125, "num_output_records": 6037, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337653824151, "job": 34, "event": "table_file_deletion", "file_number": 64}
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337653825468, "job": 34, "event": "table_file_deletion", "file_number": 62}
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:54:13.602611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:54:13.825498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:54:13.825505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:54:13.825507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:54:13.825508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:54:13 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:54:13.825510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:54:14 np0005464891 nova_compute[259907]: 2025-10-01 16:54:14.020 2 DEBUG nova.objects.instance [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lazy-loading 'migration_context' on Instance uuid eef473c3-8fff-4cd4-a5f8-ef9b89b7439a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:54:14 np0005464891 nova_compute[259907]: 2025-10-01 16:54:14.060 2 DEBUG nova.virt.libvirt.driver [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  1 12:54:14 np0005464891 nova_compute[259907]: 2025-10-01 16:54:14.060 2 DEBUG nova.virt.libvirt.driver [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Ensure instance console log exists: /var/lib/nova/instances/eef473c3-8fff-4cd4-a5f8-ef9b89b7439a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 12:54:14 np0005464891 nova_compute[259907]: 2025-10-01 16:54:14.061 2 DEBUG oslo_concurrency.lockutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:54:14 np0005464891 nova_compute[259907]: 2025-10-01 16:54:14.061 2 DEBUG oslo_concurrency.lockutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:54:14 np0005464891 nova_compute[259907]: 2025-10-01 16:54:14.061 2 DEBUG oslo_concurrency.lockutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:54:14 np0005464891 podman[290309]: 2025-10-01 16:54:14.193261361 +0000 UTC m=+0.102471861 container create a16d4aea70af641ad761c3b9d247cd5e8fd320a138a08cf8e6dcbe9fc78846fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:54:14 np0005464891 podman[290309]: 2025-10-01 16:54:14.115075608 +0000 UTC m=+0.024286128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:54:14 np0005464891 nova_compute[259907]: 2025-10-01 16:54:14.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:14 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:54:14 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:54:14 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:54:14 np0005464891 systemd[1]: Started libpod-conmon-a16d4aea70af641ad761c3b9d247cd5e8fd320a138a08cf8e6dcbe9fc78846fc.scope.
Oct  1 12:54:14 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:54:14 np0005464891 podman[290309]: 2025-10-01 16:54:14.339648686 +0000 UTC m=+0.248859206 container init a16d4aea70af641ad761c3b9d247cd5e8fd320a138a08cf8e6dcbe9fc78846fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bell, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:54:14 np0005464891 podman[290309]: 2025-10-01 16:54:14.34756383 +0000 UTC m=+0.256774370 container start a16d4aea70af641ad761c3b9d247cd5e8fd320a138a08cf8e6dcbe9fc78846fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  1 12:54:14 np0005464891 keen_bell[290325]: 167 167
Oct  1 12:54:14 np0005464891 systemd[1]: libpod-a16d4aea70af641ad761c3b9d247cd5e8fd320a138a08cf8e6dcbe9fc78846fc.scope: Deactivated successfully.
Oct  1 12:54:14 np0005464891 podman[290309]: 2025-10-01 16:54:14.36236887 +0000 UTC m=+0.271579390 container attach a16d4aea70af641ad761c3b9d247cd5e8fd320a138a08cf8e6dcbe9fc78846fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bell, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 12:54:14 np0005464891 podman[290309]: 2025-10-01 16:54:14.363032078 +0000 UTC m=+0.272242578 container died a16d4aea70af641ad761c3b9d247cd5e8fd320a138a08cf8e6dcbe9fc78846fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:54:14 np0005464891 nova_compute[259907]: 2025-10-01 16:54:14.365 2 DEBUG nova.network.neutron [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Successfully created port: a11a83be-c1d2-47f1-92f5-556ead33435e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 12:54:14 np0005464891 systemd[1]: var-lib-containers-storage-overlay-04f86aec418b0c13b58558dd8765bdf30e3215eaf75fd70ada853c9c824bb5c7-merged.mount: Deactivated successfully.
Oct  1 12:54:14 np0005464891 podman[290309]: 2025-10-01 16:54:14.472881596 +0000 UTC m=+0.382092096 container remove a16d4aea70af641ad761c3b9d247cd5e8fd320a138a08cf8e6dcbe9fc78846fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  1 12:54:14 np0005464891 systemd[1]: libpod-conmon-a16d4aea70af641ad761c3b9d247cd5e8fd320a138a08cf8e6dcbe9fc78846fc.scope: Deactivated successfully.
Oct  1 12:54:14 np0005464891 NetworkManager[44940]: <info>  [1759337654.5719] manager: (patch-br-int-to-provnet-ee30e212-e482-494e-8716-bb3c2ef00bd5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Oct  1 12:54:14 np0005464891 NetworkManager[44940]: <info>  [1759337654.5729] manager: (patch-provnet-ee30e212-e482-494e-8716-bb3c2ef00bd5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Oct  1 12:54:14 np0005464891 nova_compute[259907]: 2025-10-01 16:54:14.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:14 np0005464891 podman[290351]: 2025-10-01 16:54:14.715760869 +0000 UTC m=+0.094322699 container create a27f8e562f8e7a25952391c6d5d529cc167987958cd2dcaecc74ba8deb214307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_heyrovsky, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:54:14 np0005464891 podman[290351]: 2025-10-01 16:54:14.643090366 +0000 UTC m=+0.021652226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:54:14 np0005464891 systemd[1]: Started libpod-conmon-a27f8e562f8e7a25952391c6d5d529cc167987958cd2dcaecc74ba8deb214307.scope.
Oct  1 12:54:14 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:54:14 np0005464891 nova_compute[259907]: 2025-10-01 16:54:14.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:14 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d582c31aed38bcff2ce37f553f301cbba0d545dcbb44c124aa8c10c178d5bfdf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:54:14 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d582c31aed38bcff2ce37f553f301cbba0d545dcbb44c124aa8c10c178d5bfdf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:54:14 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d582c31aed38bcff2ce37f553f301cbba0d545dcbb44c124aa8c10c178d5bfdf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:54:14 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d582c31aed38bcff2ce37f553f301cbba0d545dcbb44c124aa8c10c178d5bfdf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:54:14 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d582c31aed38bcff2ce37f553f301cbba0d545dcbb44c124aa8c10c178d5bfdf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:54:14 np0005464891 podman[290351]: 2025-10-01 16:54:14.835047532 +0000 UTC m=+0.213609412 container init a27f8e562f8e7a25952391c6d5d529cc167987958cd2dcaecc74ba8deb214307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_heyrovsky, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:54:14 np0005464891 ovn_controller[152409]: 2025-10-01T16:54:14Z|00140|binding|INFO|Releasing lport fe368bf3-4945-4229-95fa-fd16f8dbee93 from this chassis (sb_readonly=0)
Oct  1 12:54:14 np0005464891 podman[290351]: 2025-10-01 16:54:14.884704874 +0000 UTC m=+0.263266704 container start a27f8e562f8e7a25952391c6d5d529cc167987958cd2dcaecc74ba8deb214307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:54:14 np0005464891 nova_compute[259907]: 2025-10-01 16:54:14.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:14 np0005464891 podman[290351]: 2025-10-01 16:54:14.887944451 +0000 UTC m=+0.266506281 container attach a27f8e562f8e7a25952391c6d5d529cc167987958cd2dcaecc74ba8deb214307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Oct  1 12:54:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1561: 321 pgs: 321 active+clean; 150 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 194 op/s
Oct  1 12:54:15 np0005464891 nova_compute[259907]: 2025-10-01 16:54:15.211 2 DEBUG nova.network.neutron [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Successfully updated port: a11a83be-c1d2-47f1-92f5-556ead33435e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 12:54:15 np0005464891 nova_compute[259907]: 2025-10-01 16:54:15.231 2 DEBUG oslo_concurrency.lockutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Acquiring lock "refresh_cache-eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:54:15 np0005464891 nova_compute[259907]: 2025-10-01 16:54:15.231 2 DEBUG oslo_concurrency.lockutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Acquired lock "refresh_cache-eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:54:15 np0005464891 nova_compute[259907]: 2025-10-01 16:54:15.232 2 DEBUG nova.network.neutron [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 12:54:15 np0005464891 nova_compute[259907]: 2025-10-01 16:54:15.305 2 DEBUG nova.compute.manager [req-63abb4fa-b344-40bb-9563-2c686193ffba req-1beef31a-087a-4c2b-9791-9545d40fc96c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Received event network-changed-a11a83be-c1d2-47f1-92f5-556ead33435e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:54:15 np0005464891 nova_compute[259907]: 2025-10-01 16:54:15.305 2 DEBUG nova.compute.manager [req-63abb4fa-b344-40bb-9563-2c686193ffba req-1beef31a-087a-4c2b-9791-9545d40fc96c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Refreshing instance network info cache due to event network-changed-a11a83be-c1d2-47f1-92f5-556ead33435e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:54:15 np0005464891 nova_compute[259907]: 2025-10-01 16:54:15.305 2 DEBUG oslo_concurrency.lockutils [req-63abb4fa-b344-40bb-9563-2c686193ffba req-1beef31a-087a-4c2b-9791-9545d40fc96c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:54:15 np0005464891 nova_compute[259907]: 2025-10-01 16:54:15.315 2 DEBUG nova.compute.manager [req-11496155-29ee-4029-8741-d2f31e028d78 req-8e8d52b4-10cd-42b3-8da8-6806f507442b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Received event network-changed-8ef42ea7-b750-44b5-9353-fbc089ba0eef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:54:15 np0005464891 nova_compute[259907]: 2025-10-01 16:54:15.315 2 DEBUG nova.compute.manager [req-11496155-29ee-4029-8741-d2f31e028d78 req-8e8d52b4-10cd-42b3-8da8-6806f507442b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Refreshing instance network info cache due to event network-changed-8ef42ea7-b750-44b5-9353-fbc089ba0eef. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:54:15 np0005464891 nova_compute[259907]: 2025-10-01 16:54:15.316 2 DEBUG oslo_concurrency.lockutils [req-11496155-29ee-4029-8741-d2f31e028d78 req-8e8d52b4-10cd-42b3-8da8-6806f507442b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-ce0fbe07-9503-45c6-a10c-1c09f27dd045" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:54:15 np0005464891 nova_compute[259907]: 2025-10-01 16:54:15.316 2 DEBUG oslo_concurrency.lockutils [req-11496155-29ee-4029-8741-d2f31e028d78 req-8e8d52b4-10cd-42b3-8da8-6806f507442b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-ce0fbe07-9503-45c6-a10c-1c09f27dd045" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:54:15 np0005464891 nova_compute[259907]: 2025-10-01 16:54:15.316 2 DEBUG nova.network.neutron [req-11496155-29ee-4029-8741-d2f31e028d78 req-8e8d52b4-10cd-42b3-8da8-6806f507442b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Refreshing network info cache for port 8ef42ea7-b750-44b5-9353-fbc089ba0eef _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:54:15 np0005464891 nova_compute[259907]: 2025-10-01 16:54:15.425 2 DEBUG nova.network.neutron [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 12:54:15 np0005464891 hopeful_heyrovsky[290368]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:54:15 np0005464891 hopeful_heyrovsky[290368]: --> relative data size: 1.0
Oct  1 12:54:15 np0005464891 hopeful_heyrovsky[290368]: --> All data devices are unavailable
Oct  1 12:54:16 np0005464891 systemd[1]: libpod-a27f8e562f8e7a25952391c6d5d529cc167987958cd2dcaecc74ba8deb214307.scope: Deactivated successfully.
Oct  1 12:54:16 np0005464891 systemd[1]: libpod-a27f8e562f8e7a25952391c6d5d529cc167987958cd2dcaecc74ba8deb214307.scope: Consumed 1.085s CPU time.
Oct  1 12:54:16 np0005464891 podman[290351]: 2025-10-01 16:54:16.021941012 +0000 UTC m=+1.400502842 container died a27f8e562f8e7a25952391c6d5d529cc167987958cd2dcaecc74ba8deb214307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:54:16 np0005464891 systemd[1]: var-lib-containers-storage-overlay-d582c31aed38bcff2ce37f553f301cbba0d545dcbb44c124aa8c10c178d5bfdf-merged.mount: Deactivated successfully.
Oct  1 12:54:16 np0005464891 podman[290351]: 2025-10-01 16:54:16.140499356 +0000 UTC m=+1.519061186 container remove a27f8e562f8e7a25952391c6d5d529cc167987958cd2dcaecc74ba8deb214307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_heyrovsky, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:54:16 np0005464891 systemd[1]: libpod-conmon-a27f8e562f8e7a25952391c6d5d529cc167987958cd2dcaecc74ba8deb214307.scope: Deactivated successfully.
Oct  1 12:54:16 np0005464891 podman[290398]: 2025-10-01 16:54:16.152761998 +0000 UTC m=+0.085641336 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  1 12:54:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.388 2 DEBUG nova.network.neutron [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Updating instance_info_cache with network_info: [{"id": "a11a83be-c1d2-47f1-92f5-556ead33435e", "address": "fa:16:3e:b5:a9:d3", "network": {"id": "f871e885-fd92-424f-b0b3-6d810367183a", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1323819196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6195d07ebe4991a5be01fb7ba2afdc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa11a83be-c1", "ovs_interfaceid": "a11a83be-c1d2-47f1-92f5-556ead33435e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.418 2 DEBUG oslo_concurrency.lockutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Releasing lock "refresh_cache-eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.418 2 DEBUG nova.compute.manager [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Instance network_info: |[{"id": "a11a83be-c1d2-47f1-92f5-556ead33435e", "address": "fa:16:3e:b5:a9:d3", "network": {"id": "f871e885-fd92-424f-b0b3-6d810367183a", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1323819196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6195d07ebe4991a5be01fb7ba2afdc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa11a83be-c1", "ovs_interfaceid": "a11a83be-c1d2-47f1-92f5-556ead33435e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.419 2 DEBUG oslo_concurrency.lockutils [req-63abb4fa-b344-40bb-9563-2c686193ffba req-1beef31a-087a-4c2b-9791-9545d40fc96c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.419 2 DEBUG nova.network.neutron [req-63abb4fa-b344-40bb-9563-2c686193ffba req-1beef31a-087a-4c2b-9791-9545d40fc96c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Refreshing network info cache for port a11a83be-c1d2-47f1-92f5-556ead33435e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.421 2 DEBUG nova.virt.libvirt.driver [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Start _get_guest_xml network_info=[{"id": "a11a83be-c1d2-47f1-92f5-556ead33435e", "address": "fa:16:3e:b5:a9:d3", "network": {"id": "f871e885-fd92-424f-b0b3-6d810367183a", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1323819196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6195d07ebe4991a5be01fb7ba2afdc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa11a83be-c1", "ovs_interfaceid": "a11a83be-c1d2-47f1-92f5-556ead33435e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'image_id': 'f01c1e7c-fea3-4433-a44a-d71153552c78'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.427 2 WARNING nova.virt.libvirt.driver [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.432 2 DEBUG nova.virt.libvirt.host [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.433 2 DEBUG nova.virt.libvirt.host [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.436 2 DEBUG nova.virt.libvirt.host [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.436 2 DEBUG nova.virt.libvirt.host [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.437 2 DEBUG nova.virt.libvirt.driver [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.437 2 DEBUG nova.virt.hardware [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.438 2 DEBUG nova.virt.hardware [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.438 2 DEBUG nova.virt.hardware [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.438 2 DEBUG nova.virt.hardware [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.438 2 DEBUG nova.virt.hardware [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.439 2 DEBUG nova.virt.hardware [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.439 2 DEBUG nova.virt.hardware [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.439 2 DEBUG nova.virt.hardware [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.440 2 DEBUG nova.virt.hardware [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.440 2 DEBUG nova.virt.hardware [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.440 2 DEBUG nova.virt.hardware [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.442 2 DEBUG oslo_concurrency.processutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.672 2 DEBUG nova.network.neutron [req-11496155-29ee-4029-8741-d2f31e028d78 req-8e8d52b4-10cd-42b3-8da8-6806f507442b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Updated VIF entry in instance network info cache for port 8ef42ea7-b750-44b5-9353-fbc089ba0eef. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.674 2 DEBUG nova.network.neutron [req-11496155-29ee-4029-8741-d2f31e028d78 req-8e8d52b4-10cd-42b3-8da8-6806f507442b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Updating instance_info_cache with network_info: [{"id": "8ef42ea7-b750-44b5-9353-fbc089ba0eef", "address": "fa:16:3e:ea:bc:ef", "network": {"id": "c9d562fc-0c1c-4b41-aa7c-4cb07be574c7", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-481427918-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0abf1cc99d79491f87a03f334eb255f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ef42ea7-b7", "ovs_interfaceid": "8ef42ea7-b750-44b5-9353-fbc089ba0eef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.695 2 DEBUG oslo_concurrency.lockutils [req-11496155-29ee-4029-8741-d2f31e028d78 req-8e8d52b4-10cd-42b3-8da8-6806f507442b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-ce0fbe07-9503-45c6-a10c-1c09f27dd045" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:54:16 np0005464891 podman[290590]: 2025-10-01 16:54:16.860235904 +0000 UTC m=+0.050143056 container create d701a733c6502aaf88ae4d389cad7a497c8c8ad829c81b58afae134f44b2c694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_feistel, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:54:16 np0005464891 systemd[1]: Started libpod-conmon-d701a733c6502aaf88ae4d389cad7a497c8c8ad829c81b58afae134f44b2c694.scope.
Oct  1 12:54:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:54:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3876457549' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.928 2 DEBUG oslo_concurrency.processutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:54:16 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:54:16 np0005464891 podman[290590]: 2025-10-01 16:54:16.840278555 +0000 UTC m=+0.030185737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:54:16 np0005464891 podman[290590]: 2025-10-01 16:54:16.944877361 +0000 UTC m=+0.134784513 container init d701a733c6502aaf88ae4d389cad7a497c8c8ad829c81b58afae134f44b2c694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_feistel, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct  1 12:54:16 np0005464891 podman[290590]: 2025-10-01 16:54:16.951210013 +0000 UTC m=+0.141117165 container start d701a733c6502aaf88ae4d389cad7a497c8c8ad829c81b58afae134f44b2c694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_feistel, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:54:16 np0005464891 podman[290590]: 2025-10-01 16:54:16.954857121 +0000 UTC m=+0.144764273 container attach d701a733c6502aaf88ae4d389cad7a497c8c8ad829c81b58afae134f44b2c694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_feistel, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:54:16 np0005464891 affectionate_feistel[290606]: 167 167
Oct  1 12:54:16 np0005464891 systemd[1]: libpod-d701a733c6502aaf88ae4d389cad7a497c8c8ad829c81b58afae134f44b2c694.scope: Deactivated successfully.
Oct  1 12:54:16 np0005464891 podman[290590]: 2025-10-01 16:54:16.956573507 +0000 UTC m=+0.146480649 container died d701a733c6502aaf88ae4d389cad7a497c8c8ad829c81b58afae134f44b2c694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_feistel, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.960 2 DEBUG nova.storage.rbd_utils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] rbd image eef473c3-8fff-4cd4-a5f8-ef9b89b7439a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:54:16 np0005464891 nova_compute[259907]: 2025-10-01 16:54:16.967 2 DEBUG oslo_concurrency.processutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:54:16 np0005464891 systemd[1]: var-lib-containers-storage-overlay-6c93a98fb63f16c9769800e693d6014764849c9e98b30ea09aa6637af0c4dcce-merged.mount: Deactivated successfully.
Oct  1 12:54:16 np0005464891 podman[290590]: 2025-10-01 16:54:16.989876947 +0000 UTC m=+0.179784089 container remove d701a733c6502aaf88ae4d389cad7a497c8c8ad829c81b58afae134f44b2c694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_feistel, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 12:54:17 np0005464891 systemd[1]: libpod-conmon-d701a733c6502aaf88ae4d389cad7a497c8c8ad829c81b58afae134f44b2c694.scope: Deactivated successfully.
Oct  1 12:54:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1562: 321 pgs: 321 active+clean; 150 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.1 MiB/s wr, 158 op/s
Oct  1 12:54:17 np0005464891 podman[290671]: 2025-10-01 16:54:17.165686937 +0000 UTC m=+0.052067697 container create ce15bc490d3d560614eeb0e0070d1e0d9990ceccde541082c4ac529e1469a7d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:54:17 np0005464891 systemd[1]: Started libpod-conmon-ce15bc490d3d560614eeb0e0070d1e0d9990ceccde541082c4ac529e1469a7d3.scope.
Oct  1 12:54:17 np0005464891 podman[290671]: 2025-10-01 16:54:17.138493362 +0000 UTC m=+0.024874132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:54:17 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:54:17 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59debf02afd644dc6ceda311462020a119f3772eb341eee68a4e7bc422ed8712/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:54:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e369 do_prune osdmap full prune enabled
Oct  1 12:54:17 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59debf02afd644dc6ceda311462020a119f3772eb341eee68a4e7bc422ed8712/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:54:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e370 e370: 3 total, 3 up, 3 in
Oct  1 12:54:17 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59debf02afd644dc6ceda311462020a119f3772eb341eee68a4e7bc422ed8712/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:54:17 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e370: 3 total, 3 up, 3 in
Oct  1 12:54:17 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59debf02afd644dc6ceda311462020a119f3772eb341eee68a4e7bc422ed8712/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:54:17 np0005464891 podman[290671]: 2025-10-01 16:54:17.302734231 +0000 UTC m=+0.189115021 container init ce15bc490d3d560614eeb0e0070d1e0d9990ceccde541082c4ac529e1469a7d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 12:54:17 np0005464891 podman[290671]: 2025-10-01 16:54:17.311940919 +0000 UTC m=+0.198321649 container start ce15bc490d3d560614eeb0e0070d1e0d9990ceccde541082c4ac529e1469a7d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  1 12:54:17 np0005464891 podman[290671]: 2025-10-01 16:54:17.31565499 +0000 UTC m=+0.202035780 container attach ce15bc490d3d560614eeb0e0070d1e0d9990ceccde541082c4ac529e1469a7d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:54:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:54:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/265771365' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.394 2 DEBUG oslo_concurrency.processutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.396 2 DEBUG nova.virt.libvirt.vif [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:54:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-1844506122',display_name='tempest-SnapshotDataIntegrityTests-server-1844506122',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-1844506122',id=15,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEnrHCkN4tXnW26maD9DFNY004Z2A+ODEW0hXAFiLnkZTejfo4yGwI1auNgqnB9srNoiYwRYFXiPTQ/EiqFhro8485VJkjlEg8R1WH/ORqVOcXHDgWBC9f5dDJho5Yosg==',key_name='tempest-keypair-1851487111',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f6195d07ebe4991a5be01fb7ba2afdc',ramdisk_id='',reservation_id='r-n3jgfpdn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-1433560761',owner_user_name='tempest-SnapshotDataIntegrityTests-1433560761-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:54:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9dcf2401f8724e5b8337ca100dda75db',uuid=eef473c3-8fff-4cd4-a5f8-ef9b89b7439a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a11a83be-c1d2-47f1-92f5-556ead33435e", "address": "fa:16:3e:b5:a9:d3", "network": {"id": "f871e885-fd92-424f-b0b3-6d810367183a", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1323819196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6195d07ebe4991a5be01fb7ba2afdc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa11a83be-c1", "ovs_interfaceid": "a11a83be-c1d2-47f1-92f5-556ead33435e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.397 2 DEBUG nova.network.os_vif_util [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Converting VIF {"id": "a11a83be-c1d2-47f1-92f5-556ead33435e", "address": "fa:16:3e:b5:a9:d3", "network": {"id": "f871e885-fd92-424f-b0b3-6d810367183a", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1323819196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6195d07ebe4991a5be01fb7ba2afdc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa11a83be-c1", "ovs_interfaceid": "a11a83be-c1d2-47f1-92f5-556ead33435e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.398 2 DEBUG nova.network.os_vif_util [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:a9:d3,bridge_name='br-int',has_traffic_filtering=True,id=a11a83be-c1d2-47f1-92f5-556ead33435e,network=Network(f871e885-fd92-424f-b0b3-6d810367183a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa11a83be-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.399 2 DEBUG nova.objects.instance [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lazy-loading 'pci_devices' on Instance uuid eef473c3-8fff-4cd4-a5f8-ef9b89b7439a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.415 2 DEBUG nova.virt.libvirt.driver [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] End _get_guest_xml xml=<domain type="kvm">
Oct  1 12:54:17 np0005464891 nova_compute[259907]:  <uuid>eef473c3-8fff-4cd4-a5f8-ef9b89b7439a</uuid>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:  <name>instance-0000000f</name>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <nova:name>tempest-SnapshotDataIntegrityTests-server-1844506122</nova:name>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 16:54:16</nova:creationTime>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 12:54:17 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:        <nova:user uuid="9dcf2401f8724e5b8337ca100dda75db">tempest-SnapshotDataIntegrityTests-1433560761-project-member</nova:user>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:        <nova:project uuid="6f6195d07ebe4991a5be01fb7ba2afdc">tempest-SnapshotDataIntegrityTests-1433560761</nova:project>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <nova:root type="image" uuid="f01c1e7c-fea3-4433-a44a-d71153552c78"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:        <nova:port uuid="a11a83be-c1d2-47f1-92f5-556ead33435e">
Oct  1 12:54:17 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <system>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <entry name="serial">eef473c3-8fff-4cd4-a5f8-ef9b89b7439a</entry>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <entry name="uuid">eef473c3-8fff-4cd4-a5f8-ef9b89b7439a</entry>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    </system>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:  <os>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:  </clock>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/eef473c3-8fff-4cd4-a5f8-ef9b89b7439a_disk">
Oct  1 12:54:17 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:54:17 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/eef473c3-8fff-4cd4-a5f8-ef9b89b7439a_disk.config">
Oct  1 12:54:17 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:54:17 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:b5:a9:d3"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <target dev="tapa11a83be-c1"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/eef473c3-8fff-4cd4-a5f8-ef9b89b7439a/console.log" append="off"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    </serial>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <video>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 12:54:17 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 12:54:17 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:54:17 np0005464891 nova_compute[259907]: </domain>
Oct  1 12:54:17 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.417 2 DEBUG nova.compute.manager [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Preparing to wait for external event network-vif-plugged-a11a83be-c1d2-47f1-92f5-556ead33435e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.417 2 DEBUG oslo_concurrency.lockutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Acquiring lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.417 2 DEBUG oslo_concurrency.lockutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.418 2 DEBUG oslo_concurrency.lockutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.419 2 DEBUG nova.virt.libvirt.vif [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:54:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-1844506122',display_name='tempest-SnapshotDataIntegrityTests-server-1844506122',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-1844506122',id=15,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEnrHCkN4tXnW26maD9DFNY004Z2A+ODEW0hXAFiLnkZTejfo4yGwI1auNgqnB9srNoiYwRYFXiPTQ/EiqFhro8485VJkjlEg8R1WH/ORqVOcXHDgWBC9f5dDJho5Yosg==',key_name='tempest-keypair-1851487111',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f6195d07ebe4991a5be01fb7ba2afdc',ramdisk_id='',reservation_id='r-n3jgfpdn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-1433560761',owner_user_name='tempest-SnapshotDataIntegrityTests-1433560761-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:54:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9dcf2401f8724e5b8337ca100dda75db',uuid=eef473c3-8fff-4cd4-a5f8-ef9b89b7439a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a11a83be-c1d2-47f1-92f5-556ead33435e", "address": "fa:16:3e:b5:a9:d3", "network": {"id": "f871e885-fd92-424f-b0b3-6d810367183a", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1323819196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6195d07ebe4991a5be01fb7ba2afdc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa11a83be-c1", "ovs_interfaceid": "a11a83be-c1d2-47f1-92f5-556ead33435e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.419 2 DEBUG nova.network.os_vif_util [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Converting VIF {"id": "a11a83be-c1d2-47f1-92f5-556ead33435e", "address": "fa:16:3e:b5:a9:d3", "network": {"id": "f871e885-fd92-424f-b0b3-6d810367183a", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1323819196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6195d07ebe4991a5be01fb7ba2afdc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa11a83be-c1", "ovs_interfaceid": "a11a83be-c1d2-47f1-92f5-556ead33435e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.420 2 DEBUG nova.network.os_vif_util [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:a9:d3,bridge_name='br-int',has_traffic_filtering=True,id=a11a83be-c1d2-47f1-92f5-556ead33435e,network=Network(f871e885-fd92-424f-b0b3-6d810367183a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa11a83be-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.420 2 DEBUG os_vif [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:a9:d3,bridge_name='br-int',has_traffic_filtering=True,id=a11a83be-c1d2-47f1-92f5-556ead33435e,network=Network(f871e885-fd92-424f-b0b3-6d810367183a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa11a83be-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.421 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.422 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.426 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa11a83be-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.426 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa11a83be-c1, col_values=(('external_ids', {'iface-id': 'a11a83be-c1d2-47f1-92f5-556ead33435e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b5:a9:d3', 'vm-uuid': 'eef473c3-8fff-4cd4-a5f8-ef9b89b7439a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:17 np0005464891 NetworkManager[44940]: <info>  [1759337657.4295] manager: (tapa11a83be-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.438 2 INFO os_vif [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:a9:d3,bridge_name='br-int',has_traffic_filtering=True,id=a11a83be-c1d2-47f1-92f5-556ead33435e,network=Network(f871e885-fd92-424f-b0b3-6d810367183a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa11a83be-c1')#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.498 2 DEBUG nova.virt.libvirt.driver [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.499 2 DEBUG nova.virt.libvirt.driver [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.499 2 DEBUG nova.virt.libvirt.driver [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] No VIF found with MAC fa:16:3e:b5:a9:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.500 2 INFO nova.virt.libvirt.driver [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Using config drive#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.520 2 DEBUG nova.storage.rbd_utils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] rbd image eef473c3-8fff-4cd4-a5f8-ef9b89b7439a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.712 2 DEBUG nova.network.neutron [req-63abb4fa-b344-40bb-9563-2c686193ffba req-1beef31a-087a-4c2b-9791-9545d40fc96c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Updated VIF entry in instance network info cache for port a11a83be-c1d2-47f1-92f5-556ead33435e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.713 2 DEBUG nova.network.neutron [req-63abb4fa-b344-40bb-9563-2c686193ffba req-1beef31a-087a-4c2b-9791-9545d40fc96c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Updating instance_info_cache with network_info: [{"id": "a11a83be-c1d2-47f1-92f5-556ead33435e", "address": "fa:16:3e:b5:a9:d3", "network": {"id": "f871e885-fd92-424f-b0b3-6d810367183a", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1323819196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6195d07ebe4991a5be01fb7ba2afdc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa11a83be-c1", "ovs_interfaceid": "a11a83be-c1d2-47f1-92f5-556ead33435e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.735 2 DEBUG oslo_concurrency.lockutils [req-63abb4fa-b344-40bb-9563-2c686193ffba req-1beef31a-087a-4c2b-9791-9545d40fc96c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.871 2 INFO nova.virt.libvirt.driver [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Creating config drive at /var/lib/nova/instances/eef473c3-8fff-4cd4-a5f8-ef9b89b7439a/disk.config#033[00m
Oct  1 12:54:17 np0005464891 nova_compute[259907]: 2025-10-01 16:54:17.876 2 DEBUG oslo_concurrency.processutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eef473c3-8fff-4cd4-a5f8-ef9b89b7439a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcwih8cew execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:54:18 np0005464891 nova_compute[259907]: 2025-10-01 16:54:18.021 2 DEBUG oslo_concurrency.processutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eef473c3-8fff-4cd4-a5f8-ef9b89b7439a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcwih8cew" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:54:18 np0005464891 happy_austin[290688]: {
Oct  1 12:54:18 np0005464891 happy_austin[290688]:    "0": [
Oct  1 12:54:18 np0005464891 happy_austin[290688]:        {
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "devices": [
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "/dev/loop3"
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            ],
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "lv_name": "ceph_lv0",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "lv_size": "21470642176",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "name": "ceph_lv0",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "tags": {
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.cluster_name": "ceph",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.crush_device_class": "",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.encrypted": "0",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.osd_id": "0",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.type": "block",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.vdo": "0"
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            },
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "type": "block",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "vg_name": "ceph_vg0"
Oct  1 12:54:18 np0005464891 happy_austin[290688]:        }
Oct  1 12:54:18 np0005464891 happy_austin[290688]:    ],
Oct  1 12:54:18 np0005464891 happy_austin[290688]:    "1": [
Oct  1 12:54:18 np0005464891 happy_austin[290688]:        {
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "devices": [
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "/dev/loop4"
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            ],
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "lv_name": "ceph_lv1",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "lv_size": "21470642176",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "name": "ceph_lv1",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "tags": {
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.cluster_name": "ceph",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.crush_device_class": "",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.encrypted": "0",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.osd_id": "1",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.type": "block",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.vdo": "0"
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            },
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "type": "block",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "vg_name": "ceph_vg1"
Oct  1 12:54:18 np0005464891 happy_austin[290688]:        }
Oct  1 12:54:18 np0005464891 happy_austin[290688]:    ],
Oct  1 12:54:18 np0005464891 happy_austin[290688]:    "2": [
Oct  1 12:54:18 np0005464891 happy_austin[290688]:        {
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "devices": [
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "/dev/loop5"
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            ],
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "lv_name": "ceph_lv2",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "lv_size": "21470642176",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "name": "ceph_lv2",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "tags": {
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.cluster_name": "ceph",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.crush_device_class": "",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.encrypted": "0",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.osd_id": "2",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.type": "block",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:                "ceph.vdo": "0"
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            },
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "type": "block",
Oct  1 12:54:18 np0005464891 happy_austin[290688]:            "vg_name": "ceph_vg2"
Oct  1 12:54:18 np0005464891 happy_austin[290688]:        }
Oct  1 12:54:18 np0005464891 happy_austin[290688]:    ]
Oct  1 12:54:18 np0005464891 happy_austin[290688]: }
Oct  1 12:54:18 np0005464891 nova_compute[259907]: 2025-10-01 16:54:18.060 2 DEBUG nova.storage.rbd_utils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] rbd image eef473c3-8fff-4cd4-a5f8-ef9b89b7439a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:54:18 np0005464891 nova_compute[259907]: 2025-10-01 16:54:18.064 2 DEBUG oslo_concurrency.processutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eef473c3-8fff-4cd4-a5f8-ef9b89b7439a/disk.config eef473c3-8fff-4cd4-a5f8-ef9b89b7439a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:54:18 np0005464891 systemd[1]: libpod-ce15bc490d3d560614eeb0e0070d1e0d9990ceccde541082c4ac529e1469a7d3.scope: Deactivated successfully.
Oct  1 12:54:18 np0005464891 podman[290742]: 2025-10-01 16:54:18.153850829 +0000 UTC m=+0.048067000 container died ce15bc490d3d560614eeb0e0070d1e0d9990ceccde541082c4ac529e1469a7d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:54:18 np0005464891 systemd[1]: var-lib-containers-storage-overlay-59debf02afd644dc6ceda311462020a119f3772eb341eee68a4e7bc422ed8712-merged.mount: Deactivated successfully.
Oct  1 12:54:18 np0005464891 podman[290742]: 2025-10-01 16:54:18.469764545 +0000 UTC m=+0.363980666 container remove ce15bc490d3d560614eeb0e0070d1e0d9990ceccde541082c4ac529e1469a7d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 12:54:18 np0005464891 systemd[1]: libpod-conmon-ce15bc490d3d560614eeb0e0070d1e0d9990ceccde541082c4ac529e1469a7d3.scope: Deactivated successfully.
Oct  1 12:54:18 np0005464891 nova_compute[259907]: 2025-10-01 16:54:18.584 2 DEBUG oslo_concurrency.processutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eef473c3-8fff-4cd4-a5f8-ef9b89b7439a/disk.config eef473c3-8fff-4cd4-a5f8-ef9b89b7439a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:54:18 np0005464891 nova_compute[259907]: 2025-10-01 16:54:18.587 2 INFO nova.virt.libvirt.driver [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Deleting local config drive /var/lib/nova/instances/eef473c3-8fff-4cd4-a5f8-ef9b89b7439a/disk.config because it was imported into RBD.#033[00m
Oct  1 12:54:18 np0005464891 kernel: tapa11a83be-c1: entered promiscuous mode
Oct  1 12:54:18 np0005464891 NetworkManager[44940]: <info>  [1759337658.6535] manager: (tapa11a83be-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/88)
Oct  1 12:54:18 np0005464891 ovn_controller[152409]: 2025-10-01T16:54:18Z|00141|binding|INFO|Claiming lport a11a83be-c1d2-47f1-92f5-556ead33435e for this chassis.
Oct  1 12:54:18 np0005464891 ovn_controller[152409]: 2025-10-01T16:54:18Z|00142|binding|INFO|a11a83be-c1d2-47f1-92f5-556ead33435e: Claiming fa:16:3e:b5:a9:d3 10.100.0.7
Oct  1 12:54:18 np0005464891 nova_compute[259907]: 2025-10-01 16:54:18.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:18.668 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:a9:d3 10.100.0.7'], port_security=['fa:16:3e:b5:a9:d3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'eef473c3-8fff-4cd4-a5f8-ef9b89b7439a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f871e885-fd92-424f-b0b3-6d810367183a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f6195d07ebe4991a5be01fb7ba2afdc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b6795c28-c4d2-4c23-9300-5a320196f859 fa9ad8e8-60f0-4036-9b1b-a940940c2e2e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d856bf9-7949-405b-8a21-06a5e8d1a429, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=a11a83be-c1d2-47f1-92f5-556ead33435e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:54:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:18.669 162546 INFO neutron.agent.ovn.metadata.agent [-] Port a11a83be-c1d2-47f1-92f5-556ead33435e in datapath f871e885-fd92-424f-b0b3-6d810367183a bound to our chassis#033[00m
Oct  1 12:54:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:18.672 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f871e885-fd92-424f-b0b3-6d810367183a#033[00m
Oct  1 12:54:18 np0005464891 systemd-udevd[290836]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:54:18 np0005464891 nova_compute[259907]: 2025-10-01 16:54:18.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:18 np0005464891 ovn_controller[152409]: 2025-10-01T16:54:18Z|00143|binding|INFO|Setting lport a11a83be-c1d2-47f1-92f5-556ead33435e ovn-installed in OVS
Oct  1 12:54:18 np0005464891 ovn_controller[152409]: 2025-10-01T16:54:18Z|00144|binding|INFO|Setting lport a11a83be-c1d2-47f1-92f5-556ead33435e up in Southbound
Oct  1 12:54:18 np0005464891 nova_compute[259907]: 2025-10-01 16:54:18.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:18.687 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[114981ac-52e4-44c6-9eda-2fdf61a8b9b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:18.688 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf871e885-f1 in ovnmeta-f871e885-fd92-424f-b0b3-6d810367183a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 12:54:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:18.691 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf871e885-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 12:54:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:18.691 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[eda73716-7abf-43f2-b90e-83159714a3ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:18.692 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[72c08955-cd14-4651-abb9-27b0720dabae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:18 np0005464891 NetworkManager[44940]: <info>  [1759337658.6989] device (tapa11a83be-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:54:18 np0005464891 NetworkManager[44940]: <info>  [1759337658.7003] device (tapa11a83be-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 12:54:18 np0005464891 systemd-machined[214891]: New machine qemu-15-instance-0000000f.
Oct  1 12:54:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:18.706 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[6284602e-0c1a-4254-9118-486319c5d7e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:18 np0005464891 systemd[1]: Started Virtual Machine qemu-15-instance-0000000f.
Oct  1 12:54:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:18.734 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[d716eb08-cd8d-4964-a39c-e81c7c2ab872]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:18.759 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[692a5252-d6fe-43ff-889c-0919b6b0b538]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:18 np0005464891 NetworkManager[44940]: <info>  [1759337658.7677] manager: (tapf871e885-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/89)
Oct  1 12:54:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:18.766 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[8f8139d4-a3b8-4dcb-90ad-6973fea779c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:18.804 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[23b42b28-4815-4341-ab60-6885036a6812]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:18.808 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4e61b1-c606-4279-90a1-41b23a1bbb4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:18 np0005464891 NetworkManager[44940]: <info>  [1759337658.8350] device (tapf871e885-f0): carrier: link connected
Oct  1 12:54:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:18.841 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[4581c4eb-c421-4af8-baf9-a23ab197ae6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:18.870 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[e11a77f1-1d69-43f3-8834-e28d9f61103f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf871e885-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7c:a4:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460740, 'reachable_time': 30884, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290920, 'error': None, 'target': 'ovnmeta-f871e885-fd92-424f-b0b3-6d810367183a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:18.890 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[6ca08bf9-fd76-4b80-a48d-4b94e974f813]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7c:a402'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460740, 'tstamp': 460740}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290921, 'error': None, 'target': 'ovnmeta-f871e885-fd92-424f-b0b3-6d810367183a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:18.915 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[de10fd72-093e-44c1-aeb0-86def1389321]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf871e885-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7c:a4:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460740, 'reachable_time': 30884, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290922, 'error': None, 'target': 'ovnmeta-f871e885-fd92-424f-b0b3-6d810367183a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:18.954 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[af23eb73-d250-4dfa-a7e4-91eee0c31d3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:19.054 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c341ab-4b8c-4d6f-81a2-e02a63bda082]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:19.056 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf871e885-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:54:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1564: 321 pgs: 321 active+clean; 181 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 173 op/s
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:19.056 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:19.059 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf871e885-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:54:19 np0005464891 nova_compute[259907]: 2025-10-01 16:54:19.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:19 np0005464891 NetworkManager[44940]: <info>  [1759337659.0624] manager: (tapf871e885-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Oct  1 12:54:19 np0005464891 kernel: tapf871e885-f0: entered promiscuous mode
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:19.081 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf871e885-f0, col_values=(('external_ids', {'iface-id': '2980a674-8e6a-4461-8bb6-70fb63ec12c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:54:19 np0005464891 nova_compute[259907]: 2025-10-01 16:54:19.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:19 np0005464891 ovn_controller[152409]: 2025-10-01T16:54:19Z|00145|binding|INFO|Releasing lport 2980a674-8e6a-4461-8bb6-70fb63ec12c0 from this chassis (sb_readonly=0)
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:19.089 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f871e885-fd92-424f-b0b3-6d810367183a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f871e885-fd92-424f-b0b3-6d810367183a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:19.090 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[2abe25d1-3165-42ed-ae4a-4608b3a9303b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:19.092 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-f871e885-fd92-424f-b0b3-6d810367183a
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/f871e885-fd92-424f-b0b3-6d810367183a.pid.haproxy
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID f871e885-fd92-424f-b0b3-6d810367183a
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 12:54:19 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:19.096 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f871e885-fd92-424f-b0b3-6d810367183a', 'env', 'PROCESS_TAG=haproxy-f871e885-fd92-424f-b0b3-6d810367183a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f871e885-fd92-424f-b0b3-6d810367183a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 12:54:19 np0005464891 nova_compute[259907]: 2025-10-01 16:54:19.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:19 np0005464891 podman[290968]: 2025-10-01 16:54:19.24495596 +0000 UTC m=+0.062373256 container create be38ad5fd5fa454951e277f8cddecb49d4cae8e2f717b42c689ed68c06ab3334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:54:19 np0005464891 systemd[1]: Started libpod-conmon-be38ad5fd5fa454951e277f8cddecb49d4cae8e2f717b42c689ed68c06ab3334.scope.
Oct  1 12:54:19 np0005464891 podman[290968]: 2025-10-01 16:54:19.211196798 +0000 UTC m=+0.028614084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:54:19 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:54:19 np0005464891 podman[290968]: 2025-10-01 16:54:19.411410879 +0000 UTC m=+0.228828175 container init be38ad5fd5fa454951e277f8cddecb49d4cae8e2f717b42c689ed68c06ab3334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_babbage, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:54:19 np0005464891 podman[290968]: 2025-10-01 16:54:19.420624337 +0000 UTC m=+0.238041613 container start be38ad5fd5fa454951e277f8cddecb49d4cae8e2f717b42c689ed68c06ab3334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:54:19 np0005464891 nova_compute[259907]: 2025-10-01 16:54:19.428 2 DEBUG nova.compute.manager [req-8b6c8366-f42f-4191-a03d-b83ef0bf01fd req-8d167a32-0259-4bee-b58b-58b9e1d7bb68 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Received event network-vif-plugged-a11a83be-c1d2-47f1-92f5-556ead33435e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:54:19 np0005464891 friendly_babbage[290984]: 167 167
Oct  1 12:54:19 np0005464891 nova_compute[259907]: 2025-10-01 16:54:19.430 2 DEBUG oslo_concurrency.lockutils [req-8b6c8366-f42f-4191-a03d-b83ef0bf01fd req-8d167a32-0259-4bee-b58b-58b9e1d7bb68 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:54:19 np0005464891 systemd[1]: libpod-be38ad5fd5fa454951e277f8cddecb49d4cae8e2f717b42c689ed68c06ab3334.scope: Deactivated successfully.
Oct  1 12:54:19 np0005464891 nova_compute[259907]: 2025-10-01 16:54:19.431 2 DEBUG oslo_concurrency.lockutils [req-8b6c8366-f42f-4191-a03d-b83ef0bf01fd req-8d167a32-0259-4bee-b58b-58b9e1d7bb68 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:54:19 np0005464891 nova_compute[259907]: 2025-10-01 16:54:19.432 2 DEBUG oslo_concurrency.lockutils [req-8b6c8366-f42f-4191-a03d-b83ef0bf01fd req-8d167a32-0259-4bee-b58b-58b9e1d7bb68 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:54:19 np0005464891 nova_compute[259907]: 2025-10-01 16:54:19.432 2 DEBUG nova.compute.manager [req-8b6c8366-f42f-4191-a03d-b83ef0bf01fd req-8d167a32-0259-4bee-b58b-58b9e1d7bb68 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Processing event network-vif-plugged-a11a83be-c1d2-47f1-92f5-556ead33435e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 12:54:19 np0005464891 podman[290968]: 2025-10-01 16:54:19.451766669 +0000 UTC m=+0.269183935 container attach be38ad5fd5fa454951e277f8cddecb49d4cae8e2f717b42c689ed68c06ab3334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:54:19 np0005464891 podman[290968]: 2025-10-01 16:54:19.452158419 +0000 UTC m=+0.269575685 container died be38ad5fd5fa454951e277f8cddecb49d4cae8e2f717b42c689ed68c06ab3334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_babbage, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:54:19 np0005464891 systemd[1]: var-lib-containers-storage-overlay-55b3744708a32d0470722785df03d69a667a7dceeb82fc995b968b7f38afe7e6-merged.mount: Deactivated successfully.
Oct  1 12:54:19 np0005464891 podman[290968]: 2025-10-01 16:54:19.56655075 +0000 UTC m=+0.383968036 container remove be38ad5fd5fa454951e277f8cddecb49d4cae8e2f717b42c689ed68c06ab3334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:54:19 np0005464891 systemd[1]: libpod-conmon-be38ad5fd5fa454951e277f8cddecb49d4cae8e2f717b42c689ed68c06ab3334.scope: Deactivated successfully.
Oct  1 12:54:19 np0005464891 podman[291027]: 2025-10-01 16:54:19.654660552 +0000 UTC m=+0.179723408 container create b6fb98775c33ec1b897fe4e58adac9a4b36dadaaae327a8def4ba98427cbe9cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f871e885-fd92-424f-b0b3-6d810367183a, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:54:19 np0005464891 podman[291027]: 2025-10-01 16:54:19.569843809 +0000 UTC m=+0.094906725 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 12:54:19 np0005464891 systemd[1]: Started libpod-conmon-b6fb98775c33ec1b897fe4e58adac9a4b36dadaaae327a8def4ba98427cbe9cb.scope.
Oct  1 12:54:19 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:54:19 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2efa1d5b9934c68b2afabd6fd57d68c5027290a65fa5f30a6f7f06122d5c451/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 12:54:19 np0005464891 podman[291027]: 2025-10-01 16:54:19.769750452 +0000 UTC m=+0.294813298 container init b6fb98775c33ec1b897fe4e58adac9a4b36dadaaae327a8def4ba98427cbe9cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f871e885-fd92-424f-b0b3-6d810367183a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  1 12:54:19 np0005464891 podman[291027]: 2025-10-01 16:54:19.783079661 +0000 UTC m=+0.308142497 container start b6fb98775c33ec1b897fe4e58adac9a4b36dadaaae327a8def4ba98427cbe9cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f871e885-fd92-424f-b0b3-6d810367183a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:54:19 np0005464891 neutron-haproxy-ovnmeta-f871e885-fd92-424f-b0b3-6d810367183a[291081]: [NOTICE]   (291098) : New worker (291102) forked
Oct  1 12:54:19 np0005464891 neutron-haproxy-ovnmeta-f871e885-fd92-424f-b0b3-6d810367183a[291081]: [NOTICE]   (291098) : Loading success.
Oct  1 12:54:19 np0005464891 podman[291085]: 2025-10-01 16:54:19.839017953 +0000 UTC m=+0.095143353 container create cb600f1aac9ab5668aa343276548f241d0834107f4f6df8f27b02a03851b3695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  1 12:54:19 np0005464891 podman[291085]: 2025-10-01 16:54:19.786249227 +0000 UTC m=+0.042374717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:54:19 np0005464891 systemd[1]: Started libpod-conmon-cb600f1aac9ab5668aa343276548f241d0834107f4f6df8f27b02a03851b3695.scope.
Oct  1 12:54:19 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:54:19 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3f056ebb4f7db64560b23cc593432f3b2eb0de1df33f303b8150ac70a5fc7d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:54:19 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3f056ebb4f7db64560b23cc593432f3b2eb0de1df33f303b8150ac70a5fc7d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:54:19 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3f056ebb4f7db64560b23cc593432f3b2eb0de1df33f303b8150ac70a5fc7d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:54:19 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3f056ebb4f7db64560b23cc593432f3b2eb0de1df33f303b8150ac70a5fc7d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:54:20 np0005464891 podman[291085]: 2025-10-01 16:54:20.009681454 +0000 UTC m=+0.265806864 container init cb600f1aac9ab5668aa343276548f241d0834107f4f6df8f27b02a03851b3695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 12:54:20 np0005464891 podman[291085]: 2025-10-01 16:54:20.01804458 +0000 UTC m=+0.274170000 container start cb600f1aac9ab5668aa343276548f241d0834107f4f6df8f27b02a03851b3695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_northcutt, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 12:54:20 np0005464891 podman[291085]: 2025-10-01 16:54:20.03097309 +0000 UTC m=+0.287098510 container attach cb600f1aac9ab5668aa343276548f241d0834107f4f6df8f27b02a03851b3695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.077 2 DEBUG nova.compute.manager [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.079 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337660.0769076, eef473c3-8fff-4cd4-a5f8-ef9b89b7439a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.080 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] VM Started (Lifecycle Event)#033[00m
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.085 2 DEBUG nova.virt.libvirt.driver [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.089 2 INFO nova.virt.libvirt.driver [-] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Instance spawned successfully.#033[00m
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.089 2 DEBUG nova.virt.libvirt.driver [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.118 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.125 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.128 2 DEBUG nova.virt.libvirt.driver [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.129 2 DEBUG nova.virt.libvirt.driver [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.130 2 DEBUG nova.virt.libvirt.driver [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.130 2 DEBUG nova.virt.libvirt.driver [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.131 2 DEBUG nova.virt.libvirt.driver [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.132 2 DEBUG nova.virt.libvirt.driver [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.249 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.250 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337660.0779128, eef473c3-8fff-4cd4-a5f8-ef9b89b7439a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.251 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] VM Paused (Lifecycle Event)#033[00m
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.294 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.298 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337660.0866528, eef473c3-8fff-4cd4-a5f8-ef9b89b7439a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.298 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] VM Resumed (Lifecycle Event)#033[00m
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.313 2 INFO nova.compute.manager [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Took 7.37 seconds to spawn the instance on the hypervisor.#033[00m
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.314 2 DEBUG nova.compute.manager [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.335 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.339 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:54:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e370 do_prune osdmap full prune enabled
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.437 2 INFO nova.compute.manager [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Took 8.43 seconds to build instance.#033[00m
Oct  1 12:54:20 np0005464891 nova_compute[259907]: 2025-10-01 16:54:20.465 2 DEBUG oslo_concurrency.lockutils [None req-60a3f9bb-a6eb-4bf8-8d4c-a9799a0c0299 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:54:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e371 e371: 3 total, 3 up, 3 in
Oct  1 12:54:20 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e371: 3 total, 3 up, 3 in
Oct  1 12:54:20 np0005464891 naughty_northcutt[291113]: {
Oct  1 12:54:20 np0005464891 naughty_northcutt[291113]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:54:20 np0005464891 naughty_northcutt[291113]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:54:20 np0005464891 naughty_northcutt[291113]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:54:20 np0005464891 naughty_northcutt[291113]:        "osd_id": 2,
Oct  1 12:54:20 np0005464891 naughty_northcutt[291113]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:54:20 np0005464891 naughty_northcutt[291113]:        "type": "bluestore"
Oct  1 12:54:20 np0005464891 naughty_northcutt[291113]:    },
Oct  1 12:54:20 np0005464891 naughty_northcutt[291113]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:54:20 np0005464891 naughty_northcutt[291113]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:54:20 np0005464891 naughty_northcutt[291113]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:54:20 np0005464891 naughty_northcutt[291113]:        "osd_id": 0,
Oct  1 12:54:20 np0005464891 naughty_northcutt[291113]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:54:20 np0005464891 naughty_northcutt[291113]:        "type": "bluestore"
Oct  1 12:54:20 np0005464891 naughty_northcutt[291113]:    },
Oct  1 12:54:20 np0005464891 naughty_northcutt[291113]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:54:20 np0005464891 naughty_northcutt[291113]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:54:20 np0005464891 naughty_northcutt[291113]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:54:20 np0005464891 naughty_northcutt[291113]:        "osd_id": 1,
Oct  1 12:54:20 np0005464891 naughty_northcutt[291113]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:54:20 np0005464891 naughty_northcutt[291113]:        "type": "bluestore"
Oct  1 12:54:20 np0005464891 naughty_northcutt[291113]:    }
Oct  1 12:54:20 np0005464891 naughty_northcutt[291113]: }
Oct  1 12:54:21 np0005464891 systemd[1]: libpod-cb600f1aac9ab5668aa343276548f241d0834107f4f6df8f27b02a03851b3695.scope: Deactivated successfully.
Oct  1 12:54:21 np0005464891 podman[291085]: 2025-10-01 16:54:21.019624934 +0000 UTC m=+1.275750354 container died cb600f1aac9ab5668aa343276548f241d0834107f4f6df8f27b02a03851b3695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_northcutt, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:54:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1566: 321 pgs: 321 active+clean; 181 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 167 op/s
Oct  1 12:54:21 np0005464891 systemd[1]: var-lib-containers-storage-overlay-d3f056ebb4f7db64560b23cc593432f3b2eb0de1df33f303b8150ac70a5fc7d5-merged.mount: Deactivated successfully.
Oct  1 12:54:21 np0005464891 podman[291085]: 2025-10-01 16:54:21.150323086 +0000 UTC m=+1.406448486 container remove cb600f1aac9ab5668aa343276548f241d0834107f4f6df8f27b02a03851b3695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  1 12:54:21 np0005464891 systemd[1]: libpod-conmon-cb600f1aac9ab5668aa343276548f241d0834107f4f6df8f27b02a03851b3695.scope: Deactivated successfully.
Oct  1 12:54:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:54:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:54:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:54:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:54:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:54:21 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 0258a905-2456-4b84-b579-654e89b601df does not exist
Oct  1 12:54:21 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev cd980e51-c684-480f-97de-eec8b83d3c2e does not exist
Oct  1 12:54:21 np0005464891 nova_compute[259907]: 2025-10-01 16:54:21.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:21 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:54:21 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:54:21 np0005464891 nova_compute[259907]: 2025-10-01 16:54:21.527 2 DEBUG nova.compute.manager [req-5b197ed2-1ee4-4219-a76b-f769c129de1e req-cc2e1f06-1478-419c-b073-62d68c81e02c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Received event network-vif-plugged-a11a83be-c1d2-47f1-92f5-556ead33435e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:54:21 np0005464891 nova_compute[259907]: 2025-10-01 16:54:21.527 2 DEBUG oslo_concurrency.lockutils [req-5b197ed2-1ee4-4219-a76b-f769c129de1e req-cc2e1f06-1478-419c-b073-62d68c81e02c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:54:21 np0005464891 nova_compute[259907]: 2025-10-01 16:54:21.528 2 DEBUG oslo_concurrency.lockutils [req-5b197ed2-1ee4-4219-a76b-f769c129de1e req-cc2e1f06-1478-419c-b073-62d68c81e02c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:54:21 np0005464891 nova_compute[259907]: 2025-10-01 16:54:21.528 2 DEBUG oslo_concurrency.lockutils [req-5b197ed2-1ee4-4219-a76b-f769c129de1e req-cc2e1f06-1478-419c-b073-62d68c81e02c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:54:21 np0005464891 nova_compute[259907]: 2025-10-01 16:54:21.528 2 DEBUG nova.compute.manager [req-5b197ed2-1ee4-4219-a76b-f769c129de1e req-cc2e1f06-1478-419c-b073-62d68c81e02c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] No waiting events found dispatching network-vif-plugged-a11a83be-c1d2-47f1-92f5-556ead33435e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:54:21 np0005464891 nova_compute[259907]: 2025-10-01 16:54:21.528 2 WARNING nova.compute.manager [req-5b197ed2-1ee4-4219-a76b-f769c129de1e req-cc2e1f06-1478-419c-b073-62d68c81e02c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Received unexpected event network-vif-plugged-a11a83be-c1d2-47f1-92f5-556ead33435e for instance with vm_state active and task_state None.#033[00m
Oct  1 12:54:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:54:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:54:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:54:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:54:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006947920736471967 of space, bias 1.0, pg target 0.208437622094159 quantized to 32 (current 32)
Oct  1 12:54:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:54:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003517837956632155 of space, bias 1.0, pg target 0.10553513869896464 quantized to 32 (current 32)
Oct  1 12:54:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:54:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 12:54:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:54:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct  1 12:54:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:54:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:54:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:54:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:54:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:54:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:54:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:54:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:54:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:54:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:54:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:54:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:54:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:54:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3525473635' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:54:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:54:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3525473635' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:54:22 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct  1 12:54:22 np0005464891 nova_compute[259907]: 2025-10-01 16:54:22.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:22 np0005464891 nova_compute[259907]: 2025-10-01 16:54:22.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:22 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:22.454 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:54:22 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:22.457 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 12:54:22 np0005464891 nova_compute[259907]: 2025-10-01 16:54:22.500 2 DEBUG nova.compute.manager [req-b991cfca-8a9c-45cb-90d1-7fef0eeaf3ec req-b4fe551b-4266-4ef5-ab4f-fd59c92ac251 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Received event network-changed-a11a83be-c1d2-47f1-92f5-556ead33435e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:54:22 np0005464891 nova_compute[259907]: 2025-10-01 16:54:22.501 2 DEBUG nova.compute.manager [req-b991cfca-8a9c-45cb-90d1-7fef0eeaf3ec req-b4fe551b-4266-4ef5-ab4f-fd59c92ac251 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Refreshing instance network info cache due to event network-changed-a11a83be-c1d2-47f1-92f5-556ead33435e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:54:22 np0005464891 nova_compute[259907]: 2025-10-01 16:54:22.501 2 DEBUG oslo_concurrency.lockutils [req-b991cfca-8a9c-45cb-90d1-7fef0eeaf3ec req-b4fe551b-4266-4ef5-ab4f-fd59c92ac251 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:54:22 np0005464891 nova_compute[259907]: 2025-10-01 16:54:22.501 2 DEBUG oslo_concurrency.lockutils [req-b991cfca-8a9c-45cb-90d1-7fef0eeaf3ec req-b4fe551b-4266-4ef5-ab4f-fd59c92ac251 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:54:22 np0005464891 nova_compute[259907]: 2025-10-01 16:54:22.502 2 DEBUG nova.network.neutron [req-b991cfca-8a9c-45cb-90d1-7fef0eeaf3ec req-b4fe551b-4266-4ef5-ab4f-fd59c92ac251 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Refreshing network info cache for port a11a83be-c1d2-47f1-92f5-556ead33435e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:54:22 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct  1 12:54:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1567: 321 pgs: 321 active+clean; 186 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 147 op/s
Oct  1 12:54:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e371 do_prune osdmap full prune enabled
Oct  1 12:54:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e372 e372: 3 total, 3 up, 3 in
Oct  1 12:54:23 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e372: 3 total, 3 up, 3 in
Oct  1 12:54:24 np0005464891 nova_compute[259907]: 2025-10-01 16:54:24.177 2 DEBUG nova.network.neutron [req-b991cfca-8a9c-45cb-90d1-7fef0eeaf3ec req-b4fe551b-4266-4ef5-ab4f-fd59c92ac251 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Updated VIF entry in instance network info cache for port a11a83be-c1d2-47f1-92f5-556ead33435e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:54:24 np0005464891 nova_compute[259907]: 2025-10-01 16:54:24.179 2 DEBUG nova.network.neutron [req-b991cfca-8a9c-45cb-90d1-7fef0eeaf3ec req-b4fe551b-4266-4ef5-ab4f-fd59c92ac251 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Updating instance_info_cache with network_info: [{"id": "a11a83be-c1d2-47f1-92f5-556ead33435e", "address": "fa:16:3e:b5:a9:d3", "network": {"id": "f871e885-fd92-424f-b0b3-6d810367183a", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1323819196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6195d07ebe4991a5be01fb7ba2afdc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa11a83be-c1", "ovs_interfaceid": "a11a83be-c1d2-47f1-92f5-556ead33435e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:54:24 np0005464891 nova_compute[259907]: 2025-10-01 16:54:24.204 2 DEBUG oslo_concurrency.lockutils [req-b991cfca-8a9c-45cb-90d1-7fef0eeaf3ec req-b4fe551b-4266-4ef5-ab4f-fd59c92ac251 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:54:24 np0005464891 ovn_controller[152409]: 2025-10-01T16:54:24Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ea:bc:ef 10.100.0.5
Oct  1 12:54:24 np0005464891 ovn_controller[152409]: 2025-10-01T16:54:24Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ea:bc:ef 10.100.0.5
Oct  1 12:54:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:54:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/797451884' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:54:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:54:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/797451884' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:54:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1569: 321 pgs: 321 active+clean; 213 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.9 MiB/s wr, 292 op/s
Oct  1 12:54:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:54:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e372 do_prune osdmap full prune enabled
Oct  1 12:54:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e373 e373: 3 total, 3 up, 3 in
Oct  1 12:54:26 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e373: 3 total, 3 up, 3 in
Oct  1 12:54:26 np0005464891 nova_compute[259907]: 2025-10-01 16:54:26.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1571: 321 pgs: 321 active+clean; 213 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 270 op/s
Oct  1 12:54:27 np0005464891 nova_compute[259907]: 2025-10-01 16:54:27.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:27 np0005464891 podman[291209]: 2025-10-01 16:54:27.965956479 +0000 UTC m=+0.070091595 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:54:28 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:28.459 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:54:28 np0005464891 nova_compute[259907]: 2025-10-01 16:54:28.799 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:54:28 np0005464891 nova_compute[259907]: 2025-10-01 16:54:28.803 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:54:28 np0005464891 nova_compute[259907]: 2025-10-01 16:54:28.803 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 12:54:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1572: 321 pgs: 321 active+clean; 214 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 258 op/s
Oct  1 12:54:29 np0005464891 nova_compute[259907]: 2025-10-01 16:54:29.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:54:30 np0005464891 nova_compute[259907]: 2025-10-01 16:54:30.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:54:30 np0005464891 nova_compute[259907]: 2025-10-01 16:54:30.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:54:30 np0005464891 nova_compute[259907]: 2025-10-01 16:54:30.834 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:54:30 np0005464891 nova_compute[259907]: 2025-10-01 16:54:30.835 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:54:30 np0005464891 nova_compute[259907]: 2025-10-01 16:54:30.836 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:54:30 np0005464891 nova_compute[259907]: 2025-10-01 16:54:30.836 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 12:54:30 np0005464891 nova_compute[259907]: 2025-10-01 16:54:30.838 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:54:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1573: 321 pgs: 321 active+clean; 214 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.5 MiB/s wr, 174 op/s
Oct  1 12:54:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:54:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4266194465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:54:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:54:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e373 do_prune osdmap full prune enabled
Oct  1 12:54:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e374 e374: 3 total, 3 up, 3 in
Oct  1 12:54:31 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e374: 3 total, 3 up, 3 in
Oct  1 12:54:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:54:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2508123688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:54:31 np0005464891 nova_compute[259907]: 2025-10-01 16:54:31.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:31 np0005464891 nova_compute[259907]: 2025-10-01 16:54:31.352 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:54:31 np0005464891 nova_compute[259907]: 2025-10-01 16:54:31.436 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:54:31 np0005464891 nova_compute[259907]: 2025-10-01 16:54:31.436 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:54:31 np0005464891 nova_compute[259907]: 2025-10-01 16:54:31.440 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:54:31 np0005464891 nova_compute[259907]: 2025-10-01 16:54:31.441 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:54:31 np0005464891 nova_compute[259907]: 2025-10-01 16:54:31.668 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:54:31 np0005464891 nova_compute[259907]: 2025-10-01 16:54:31.669 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4091MB free_disk=59.921810150146484GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 12:54:31 np0005464891 nova_compute[259907]: 2025-10-01 16:54:31.670 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:54:31 np0005464891 nova_compute[259907]: 2025-10-01 16:54:31.670 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:54:31 np0005464891 nova_compute[259907]: 2025-10-01 16:54:31.743 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance ce0fbe07-9503-45c6-a10c-1c09f27dd045 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 12:54:31 np0005464891 nova_compute[259907]: 2025-10-01 16:54:31.744 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance eef473c3-8fff-4cd4-a5f8-ef9b89b7439a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 12:54:31 np0005464891 nova_compute[259907]: 2025-10-01 16:54:31.744 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 12:54:31 np0005464891 nova_compute[259907]: 2025-10-01 16:54:31.745 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 12:54:31 np0005464891 nova_compute[259907]: 2025-10-01 16:54:31.764 2 DEBUG oslo_concurrency.lockutils [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Acquiring lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:54:31 np0005464891 nova_compute[259907]: 2025-10-01 16:54:31.765 2 DEBUG oslo_concurrency.lockutils [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:54:31 np0005464891 nova_compute[259907]: 2025-10-01 16:54:31.786 2 DEBUG nova.objects.instance [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lazy-loading 'flavor' on Instance uuid ce0fbe07-9503-45c6-a10c-1c09f27dd045 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:54:31 np0005464891 nova_compute[259907]: 2025-10-01 16:54:31.792 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:54:31 np0005464891 nova_compute[259907]: 2025-10-01 16:54:31.836 2 DEBUG oslo_concurrency.lockutils [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.043 2 DEBUG oslo_concurrency.lockutils [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Acquiring lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.044 2 DEBUG oslo_concurrency.lockutils [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.045 2 INFO nova.compute.manager [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Attaching volume 52bef2d5-d5e1-49a4-bf6e-186d12d32ddd to /dev/vdb#033[00m
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.169 2 DEBUG os_brick.utils [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.170 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.188 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.188 741 DEBUG oslo.privsep.daemon [-] privsep: reply[62bef608-ea2b-4183-9c19-dbbd0db6fc4c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.190 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.203 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.203 741 DEBUG oslo.privsep.daemon [-] privsep: reply[725cb61e-22dd-47cd-985a-2d162dc8d14c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:54:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1633271414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.205 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:54:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e374 do_prune osdmap full prune enabled
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.220 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.221 741 DEBUG oslo.privsep.daemon [-] privsep: reply[984643cd-cb82-4e81-af04-83f937711bc7]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.222 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.223 741 DEBUG oslo.privsep.daemon [-] privsep: reply[7982830a-98e1-4f2f-81ca-5fcbaa405379]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.224 2 DEBUG oslo_concurrency.processutils [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:54:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e375 e375: 3 total, 3 up, 3 in
Oct  1 12:54:32 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e375: 3 total, 3 up, 3 in
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.251 2 DEBUG oslo_concurrency.processutils [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.255 2 DEBUG os_brick.initiator.connectors.lightos [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.255 2 DEBUG os_brick.initiator.connectors.lightos [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.256 2 DEBUG os_brick.initiator.connectors.lightos [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.257 2 DEBUG os_brick.utils [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] <== get_connector_properties: return (87ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.258 2 DEBUG nova.virt.block_device [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Updating existing volume attachment record: 2b1f58ed-8833-458c-90f5-5e23e742cd5c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.266 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.280 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.308 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.309 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:54:32 np0005464891 nova_compute[259907]: 2025-10-01 16:54:32.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:32 np0005464891 ovn_controller[152409]: 2025-10-01T16:54:32Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b5:a9:d3 10.100.0.7
Oct  1 12:54:32 np0005464891 ovn_controller[152409]: 2025-10-01T16:54:32Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b5:a9:d3 10.100.0.7
Oct  1 12:54:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:54:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/58015592' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:54:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1576: 321 pgs: 321 active+clean; 223 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 483 KiB/s rd, 2.2 MiB/s wr, 116 op/s
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.104 2 DEBUG os_brick.encryptors [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Using volume encryption metadata '{'encryption_key_id': '86d609d9-31f4-4876-ba44-d7c50cbc6f97', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-52bef2d5-d5e1-49a4-bf6e-186d12d32ddd', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '52bef2d5-d5e1-49a4-bf6e-186d12d32ddd', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'ce0fbe07-9503-45c6-a10c-1c09f27dd045', 'attached_at': '', 'detached_at': '', 'volume_id': '52bef2d5-d5e1-49a4-bf6e-186d12d32ddd', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.112 2 DEBUG barbicanclient.client [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.134 2 DEBUG barbicanclient.v1.secrets [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/86d609d9-31f4-4876-ba44-d7c50cbc6f97 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.135 2 INFO barbicanclient.base [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Calculated Secrets uuid ref: secrets/86d609d9-31f4-4876-ba44-d7c50cbc6f97#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.175 2 DEBUG barbicanclient.client [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.176 2 INFO barbicanclient.base [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Calculated Secrets uuid ref: secrets/86d609d9-31f4-4876-ba44-d7c50cbc6f97#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.199 2 DEBUG barbicanclient.client [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.200 2 INFO barbicanclient.base [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Calculated Secrets uuid ref: secrets/86d609d9-31f4-4876-ba44-d7c50cbc6f97#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.227 2 DEBUG barbicanclient.client [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.228 2 INFO barbicanclient.base [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Calculated Secrets uuid ref: secrets/86d609d9-31f4-4876-ba44-d7c50cbc6f97#033[00m
Oct  1 12:54:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e375 do_prune osdmap full prune enabled
Oct  1 12:54:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e376 e376: 3 total, 3 up, 3 in
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.247 2 DEBUG barbicanclient.client [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.248 2 INFO barbicanclient.base [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Calculated Secrets uuid ref: secrets/86d609d9-31f4-4876-ba44-d7c50cbc6f97#033[00m
Oct  1 12:54:33 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e376: 3 total, 3 up, 3 in
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.289 2 DEBUG barbicanclient.client [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.290 2 INFO barbicanclient.base [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Calculated Secrets uuid ref: secrets/86d609d9-31f4-4876-ba44-d7c50cbc6f97#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.313 2 DEBUG barbicanclient.client [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.314 2 INFO barbicanclient.base [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Calculated Secrets uuid ref: secrets/86d609d9-31f4-4876-ba44-d7c50cbc6f97#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.342 2 DEBUG barbicanclient.client [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.343 2 INFO barbicanclient.base [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Calculated Secrets uuid ref: secrets/86d609d9-31f4-4876-ba44-d7c50cbc6f97#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.365 2 DEBUG barbicanclient.client [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.366 2 INFO barbicanclient.base [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Calculated Secrets uuid ref: secrets/86d609d9-31f4-4876-ba44-d7c50cbc6f97#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.393 2 DEBUG barbicanclient.client [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.394 2 INFO barbicanclient.base [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Calculated Secrets uuid ref: secrets/86d609d9-31f4-4876-ba44-d7c50cbc6f97#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.422 2 DEBUG barbicanclient.client [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.423 2 INFO barbicanclient.base [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Calculated Secrets uuid ref: secrets/86d609d9-31f4-4876-ba44-d7c50cbc6f97#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.444 2 DEBUG barbicanclient.client [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.445 2 INFO barbicanclient.base [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Calculated Secrets uuid ref: secrets/86d609d9-31f4-4876-ba44-d7c50cbc6f97#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.468 2 DEBUG barbicanclient.client [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.468 2 INFO barbicanclient.base [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Calculated Secrets uuid ref: secrets/86d609d9-31f4-4876-ba44-d7c50cbc6f97#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.504 2 DEBUG barbicanclient.client [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.504 2 INFO barbicanclient.base [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Calculated Secrets uuid ref: secrets/86d609d9-31f4-4876-ba44-d7c50cbc6f97#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.574 2 DEBUG barbicanclient.client [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.574 2 INFO barbicanclient.base [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Calculated Secrets uuid ref: secrets/86d609d9-31f4-4876-ba44-d7c50cbc6f97#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.620 2 DEBUG barbicanclient.client [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.620 2 DEBUG nova.virt.libvirt.host [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct  1 12:54:33 np0005464891 nova_compute[259907]:  <usage type="volume">
Oct  1 12:54:33 np0005464891 nova_compute[259907]:    <volume>52bef2d5-d5e1-49a4-bf6e-186d12d32ddd</volume>
Oct  1 12:54:33 np0005464891 nova_compute[259907]:  </usage>
Oct  1 12:54:33 np0005464891 nova_compute[259907]: </secret>
Oct  1 12:54:33 np0005464891 nova_compute[259907]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.635 2 DEBUG nova.objects.instance [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lazy-loading 'flavor' on Instance uuid ce0fbe07-9503-45c6-a10c-1c09f27dd045 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.658 2 DEBUG nova.virt.libvirt.driver [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Attempting to attach volume 52bef2d5-d5e1-49a4-bf6e-186d12d32ddd with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  1 12:54:33 np0005464891 nova_compute[259907]: 2025-10-01 16:54:33.660 2 DEBUG nova.virt.libvirt.guest [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] attach device xml: <disk type="network" device="disk">
Oct  1 12:54:33 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:54:33 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-52bef2d5-d5e1-49a4-bf6e-186d12d32ddd">
Oct  1 12:54:33 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:54:33 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:54:33 np0005464891 nova_compute[259907]:  <auth username="openstack">
Oct  1 12:54:33 np0005464891 nova_compute[259907]:    <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:54:33 np0005464891 nova_compute[259907]:  </auth>
Oct  1 12:54:33 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:54:33 np0005464891 nova_compute[259907]:  <serial>52bef2d5-d5e1-49a4-bf6e-186d12d32ddd</serial>
Oct  1 12:54:33 np0005464891 nova_compute[259907]:  <encryption format="luks">
Oct  1 12:54:33 np0005464891 nova_compute[259907]:    <secret type="passphrase" uuid="4b7109d0-ee16-4ff4-94f4-7abe3d1edd73"/>
Oct  1 12:54:33 np0005464891 nova_compute[259907]:  </encryption>
Oct  1 12:54:33 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:54:33 np0005464891 nova_compute[259907]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  1 12:54:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e376 do_prune osdmap full prune enabled
Oct  1 12:54:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e377 e377: 3 total, 3 up, 3 in
Oct  1 12:54:34 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e377: 3 total, 3 up, 3 in
Oct  1 12:54:34 np0005464891 nova_compute[259907]: 2025-10-01 16:54:34.309 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:54:34 np0005464891 nova_compute[259907]: 2025-10-01 16:54:34.310 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 12:54:34 np0005464891 nova_compute[259907]: 2025-10-01 16:54:34.310 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 12:54:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1579: 321 pgs: 321 active+clean; 247 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 6.4 MiB/s wr, 253 op/s
Oct  1 12:54:35 np0005464891 nova_compute[259907]: 2025-10-01 16:54:35.154 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "refresh_cache-ce0fbe07-9503-45c6-a10c-1c09f27dd045" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:54:35 np0005464891 nova_compute[259907]: 2025-10-01 16:54:35.154 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquired lock "refresh_cache-ce0fbe07-9503-45c6-a10c-1c09f27dd045" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:54:35 np0005464891 nova_compute[259907]: 2025-10-01 16:54:35.155 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  1 12:54:35 np0005464891 nova_compute[259907]: 2025-10-01 16:54:35.155 2 DEBUG nova.objects.instance [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ce0fbe07-9503-45c6-a10c-1c09f27dd045 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:54:36 np0005464891 nova_compute[259907]: 2025-10-01 16:54:36.159 2 DEBUG nova.virt.libvirt.driver [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:54:36 np0005464891 nova_compute[259907]: 2025-10-01 16:54:36.159 2 DEBUG nova.virt.libvirt.driver [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:54:36 np0005464891 nova_compute[259907]: 2025-10-01 16:54:36.160 2 DEBUG nova.virt.libvirt.driver [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:54:36 np0005464891 nova_compute[259907]: 2025-10-01 16:54:36.160 2 DEBUG nova.virt.libvirt.driver [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] No VIF found with MAC fa:16:3e:ea:bc:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:54:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:54:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e377 do_prune osdmap full prune enabled
Oct  1 12:54:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e378 e378: 3 total, 3 up, 3 in
Oct  1 12:54:36 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e378: 3 total, 3 up, 3 in
Oct  1 12:54:36 np0005464891 nova_compute[259907]: 2025-10-01 16:54:36.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:36 np0005464891 nova_compute[259907]: 2025-10-01 16:54:36.465 2 DEBUG oslo_concurrency.lockutils [None req-b26081e7-477e-4243-a720-2fb3c961439a 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:54:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:54:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/387727406' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:54:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:54:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/387727406' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:54:36 np0005464891 nova_compute[259907]: 2025-10-01 16:54:36.792 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Updating instance_info_cache with network_info: [{"id": "8ef42ea7-b750-44b5-9353-fbc089ba0eef", "address": "fa:16:3e:ea:bc:ef", "network": {"id": "c9d562fc-0c1c-4b41-aa7c-4cb07be574c7", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-481427918-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0abf1cc99d79491f87a03f334eb255f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ef42ea7-b7", "ovs_interfaceid": "8ef42ea7-b750-44b5-9353-fbc089ba0eef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:54:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:54:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4082562981' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:54:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:54:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4082562981' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:54:36 np0005464891 nova_compute[259907]: 2025-10-01 16:54:36.810 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Releasing lock "refresh_cache-ce0fbe07-9503-45c6-a10c-1c09f27dd045" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:54:36 np0005464891 nova_compute[259907]: 2025-10-01 16:54:36.811 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  1 12:54:36 np0005464891 nova_compute[259907]: 2025-10-01 16:54:36.812 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:54:36 np0005464891 nova_compute[259907]: 2025-10-01 16:54:36.813 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:54:36 np0005464891 nova_compute[259907]: 2025-10-01 16:54:36.813 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:54:37 np0005464891 podman[291300]: 2025-10-01 16:54:37.004561701 +0000 UTC m=+0.113852948 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  1 12:54:37 np0005464891 nova_compute[259907]: 2025-10-01 16:54:37.036 2 DEBUG oslo_concurrency.lockutils [None req-f2bf3fca-5949-491d-8f6d-bd53925dfe78 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Acquiring lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:54:37 np0005464891 nova_compute[259907]: 2025-10-01 16:54:37.036 2 DEBUG oslo_concurrency.lockutils [None req-f2bf3fca-5949-491d-8f6d-bd53925dfe78 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:54:37 np0005464891 nova_compute[259907]: 2025-10-01 16:54:37.048 2 INFO nova.compute.manager [None req-f2bf3fca-5949-491d-8f6d-bd53925dfe78 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Detaching volume 52bef2d5-d5e1-49a4-bf6e-186d12d32ddd#033[00m
Oct  1 12:54:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1581: 321 pgs: 321 active+clean; 247 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 851 KiB/s rd, 5.3 MiB/s wr, 210 op/s
Oct  1 12:54:37 np0005464891 nova_compute[259907]: 2025-10-01 16:54:37.161 2 INFO nova.virt.block_device [None req-f2bf3fca-5949-491d-8f6d-bd53925dfe78 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Attempting to driver detach volume 52bef2d5-d5e1-49a4-bf6e-186d12d32ddd from mountpoint /dev/vdb#033[00m
Oct  1 12:54:37 np0005464891 nova_compute[259907]: 2025-10-01 16:54:37.254 2 DEBUG os_brick.encryptors [None req-f2bf3fca-5949-491d-8f6d-bd53925dfe78 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Using volume encryption metadata '{'encryption_key_id': '86d609d9-31f4-4876-ba44-d7c50cbc6f97', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-52bef2d5-d5e1-49a4-bf6e-186d12d32ddd', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '52bef2d5-d5e1-49a4-bf6e-186d12d32ddd', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'ce0fbe07-9503-45c6-a10c-1c09f27dd045', 'attached_at': '', 'detached_at': '', 'volume_id': '52bef2d5-d5e1-49a4-bf6e-186d12d32ddd', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct  1 12:54:37 np0005464891 nova_compute[259907]: 2025-10-01 16:54:37.261 2 DEBUG nova.virt.libvirt.driver [None req-f2bf3fca-5949-491d-8f6d-bd53925dfe78 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Attempting to detach device vdb from instance ce0fbe07-9503-45c6-a10c-1c09f27dd045 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  1 12:54:37 np0005464891 nova_compute[259907]: 2025-10-01 16:54:37.262 2 DEBUG nova.virt.libvirt.guest [None req-f2bf3fca-5949-491d-8f6d-bd53925dfe78 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:54:37 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:54:37 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-52bef2d5-d5e1-49a4-bf6e-186d12d32ddd">
Oct  1 12:54:37 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:54:37 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:54:37 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:54:37 np0005464891 nova_compute[259907]:  <serial>52bef2d5-d5e1-49a4-bf6e-186d12d32ddd</serial>
Oct  1 12:54:37 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:54:37 np0005464891 nova_compute[259907]:  <encryption format="luks">
Oct  1 12:54:37 np0005464891 nova_compute[259907]:    <secret type="passphrase" uuid="4b7109d0-ee16-4ff4-94f4-7abe3d1edd73"/>
Oct  1 12:54:37 np0005464891 nova_compute[259907]:  </encryption>
Oct  1 12:54:37 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:54:37 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:54:37 np0005464891 nova_compute[259907]: 2025-10-01 16:54:37.270 2 INFO nova.virt.libvirt.driver [None req-f2bf3fca-5949-491d-8f6d-bd53925dfe78 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Successfully detached device vdb from instance ce0fbe07-9503-45c6-a10c-1c09f27dd045 from the persistent domain config.#033[00m
Oct  1 12:54:37 np0005464891 nova_compute[259907]: 2025-10-01 16:54:37.270 2 DEBUG nova.virt.libvirt.driver [None req-f2bf3fca-5949-491d-8f6d-bd53925dfe78 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance ce0fbe07-9503-45c6-a10c-1c09f27dd045 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  1 12:54:37 np0005464891 nova_compute[259907]: 2025-10-01 16:54:37.271 2 DEBUG nova.virt.libvirt.guest [None req-f2bf3fca-5949-491d-8f6d-bd53925dfe78 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:54:37 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:54:37 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-52bef2d5-d5e1-49a4-bf6e-186d12d32ddd">
Oct  1 12:54:37 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:54:37 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:54:37 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:54:37 np0005464891 nova_compute[259907]:  <serial>52bef2d5-d5e1-49a4-bf6e-186d12d32ddd</serial>
Oct  1 12:54:37 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:54:37 np0005464891 nova_compute[259907]:  <encryption format="luks">
Oct  1 12:54:37 np0005464891 nova_compute[259907]:    <secret type="passphrase" uuid="4b7109d0-ee16-4ff4-94f4-7abe3d1edd73"/>
Oct  1 12:54:37 np0005464891 nova_compute[259907]:  </encryption>
Oct  1 12:54:37 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:54:37 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:54:37 np0005464891 nova_compute[259907]: 2025-10-01 16:54:37.388 2 DEBUG nova.virt.libvirt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Received event <DeviceRemovedEvent: 1759337677.3875284, ce0fbe07-9503-45c6-a10c-1c09f27dd045 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  1 12:54:37 np0005464891 nova_compute[259907]: 2025-10-01 16:54:37.390 2 DEBUG nova.virt.libvirt.driver [None req-f2bf3fca-5949-491d-8f6d-bd53925dfe78 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance ce0fbe07-9503-45c6-a10c-1c09f27dd045 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  1 12:54:37 np0005464891 nova_compute[259907]: 2025-10-01 16:54:37.393 2 INFO nova.virt.libvirt.driver [None req-f2bf3fca-5949-491d-8f6d-bd53925dfe78 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Successfully detached device vdb from instance ce0fbe07-9503-45c6-a10c-1c09f27dd045 from the live domain config.#033[00m
Oct  1 12:54:37 np0005464891 nova_compute[259907]: 2025-10-01 16:54:37.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:37 np0005464891 nova_compute[259907]: 2025-10-01 16:54:37.608 2 DEBUG nova.objects.instance [None req-f2bf3fca-5949-491d-8f6d-bd53925dfe78 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lazy-loading 'flavor' on Instance uuid ce0fbe07-9503-45c6-a10c-1c09f27dd045 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:54:37 np0005464891 nova_compute[259907]: 2025-10-01 16:54:37.664 2 DEBUG oslo_concurrency.lockutils [None req-f2bf3fca-5949-491d-8f6d-bd53925dfe78 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:54:38 np0005464891 nova_compute[259907]: 2025-10-01 16:54:38.778 2 DEBUG oslo_concurrency.lockutils [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Acquiring lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:54:38 np0005464891 nova_compute[259907]: 2025-10-01 16:54:38.779 2 DEBUG oslo_concurrency.lockutils [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:54:38 np0005464891 nova_compute[259907]: 2025-10-01 16:54:38.779 2 DEBUG oslo_concurrency.lockutils [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Acquiring lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:54:38 np0005464891 nova_compute[259907]: 2025-10-01 16:54:38.779 2 DEBUG oslo_concurrency.lockutils [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:54:38 np0005464891 nova_compute[259907]: 2025-10-01 16:54:38.780 2 DEBUG oslo_concurrency.lockutils [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:54:38 np0005464891 nova_compute[259907]: 2025-10-01 16:54:38.780 2 INFO nova.compute.manager [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Terminating instance#033[00m
Oct  1 12:54:38 np0005464891 nova_compute[259907]: 2025-10-01 16:54:38.781 2 DEBUG nova.compute.manager [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 12:54:38 np0005464891 kernel: tap8ef42ea7-b7 (unregistering): left promiscuous mode
Oct  1 12:54:38 np0005464891 NetworkManager[44940]: <info>  [1759337678.8419] device (tap8ef42ea7-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 12:54:38 np0005464891 nova_compute[259907]: 2025-10-01 16:54:38.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:38 np0005464891 ovn_controller[152409]: 2025-10-01T16:54:38Z|00146|binding|INFO|Releasing lport 8ef42ea7-b750-44b5-9353-fbc089ba0eef from this chassis (sb_readonly=0)
Oct  1 12:54:38 np0005464891 ovn_controller[152409]: 2025-10-01T16:54:38Z|00147|binding|INFO|Setting lport 8ef42ea7-b750-44b5-9353-fbc089ba0eef down in Southbound
Oct  1 12:54:38 np0005464891 ovn_controller[152409]: 2025-10-01T16:54:38Z|00148|binding|INFO|Removing iface tap8ef42ea7-b7 ovn-installed in OVS
Oct  1 12:54:38 np0005464891 nova_compute[259907]: 2025-10-01 16:54:38.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:38 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:38.873 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:bc:ef 10.100.0.5'], port_security=['fa:16:3e:ea:bc:ef 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ce0fbe07-9503-45c6-a10c-1c09f27dd045', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0abf1cc99d79491f87a03f334eb255f1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c30cafeb-2af0-4af1-bd27-6551ccb4bcc6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=41ea2ae0-f911-4d79-a8de-235bf805e7ec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=8ef42ea7-b750-44b5-9353-fbc089ba0eef) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:54:38 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:38.875 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 8ef42ea7-b750-44b5-9353-fbc089ba0eef in datapath c9d562fc-0c1c-4b41-aa7c-4cb07be574c7 unbound from our chassis#033[00m
Oct  1 12:54:38 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:38.877 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c9d562fc-0c1c-4b41-aa7c-4cb07be574c7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 12:54:38 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:38.878 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[3673e480-dd53-4748-83f2-16094b5b1390]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:38 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:38.879 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7 namespace which is not needed anymore#033[00m
Oct  1 12:54:38 np0005464891 nova_compute[259907]: 2025-10-01 16:54:38.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:38 np0005464891 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Oct  1 12:54:38 np0005464891 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 16.093s CPU time.
Oct  1 12:54:38 np0005464891 systemd-machined[214891]: Machine qemu-14-instance-0000000e terminated.
Oct  1 12:54:39 np0005464891 neutron-haproxy-ovnmeta-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7[289816]: [NOTICE]   (289838) : haproxy version is 2.8.14-c23fe91
Oct  1 12:54:39 np0005464891 neutron-haproxy-ovnmeta-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7[289816]: [NOTICE]   (289838) : path to executable is /usr/sbin/haproxy
Oct  1 12:54:39 np0005464891 neutron-haproxy-ovnmeta-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7[289816]: [WARNING]  (289838) : Exiting Master process...
Oct  1 12:54:39 np0005464891 neutron-haproxy-ovnmeta-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7[289816]: [WARNING]  (289838) : Exiting Master process...
Oct  1 12:54:39 np0005464891 neutron-haproxy-ovnmeta-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7[289816]: [ALERT]    (289838) : Current worker (289841) exited with code 143 (Terminated)
Oct  1 12:54:39 np0005464891 neutron-haproxy-ovnmeta-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7[289816]: [WARNING]  (289838) : All workers exited. Exiting... (0)
Oct  1 12:54:39 np0005464891 systemd[1]: libpod-f52886f1c11c0476ad2ae2d3a21de8c8955a097c6060ffb2cc80bc1448c22863.scope: Deactivated successfully.
Oct  1 12:54:39 np0005464891 nova_compute[259907]: 2025-10-01 16:54:39.018 2 INFO nova.virt.libvirt.driver [-] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Instance destroyed successfully.#033[00m
Oct  1 12:54:39 np0005464891 nova_compute[259907]: 2025-10-01 16:54:39.019 2 DEBUG nova.objects.instance [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lazy-loading 'resources' on Instance uuid ce0fbe07-9503-45c6-a10c-1c09f27dd045 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:54:39 np0005464891 podman[291353]: 2025-10-01 16:54:39.021094079 +0000 UTC m=+0.050809704 container died f52886f1c11c0476ad2ae2d3a21de8c8955a097c6060ffb2cc80bc1448c22863 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  1 12:54:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1582: 321 pgs: 321 active+clean; 247 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 334 KiB/s rd, 1.9 MiB/s wr, 151 op/s
Oct  1 12:54:39 np0005464891 nova_compute[259907]: 2025-10-01 16:54:39.073 2 DEBUG nova.virt.libvirt.vif [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T16:54:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-804282757',display_name='tempest-TestEncryptedCinderVolumes-server-804282757',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-804282757',id=14,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1+uYIng/Nm/+oVaE3GM9Sm1tsjQriWZRlO6Bwtj76OMNUHXXErOUruu8mQcuHyP0af9JGljokaMhudZEWQrshT5dgncNDxJtUA3fyYEY0H2suKuHwykEs/LfW1SBu3vQ==',key_name='tempest-keypair-720287870',keypairs=<?>,launch_index=0,launched_at=2025-10-01T16:54:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0abf1cc99d79491f87a03f334eb255f1',ramdisk_id='',reservation_id='r-gdhsmpbc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-1608655835',owner_user_name='tempest-TestEncryptedCinderVolumes-1608655835-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T16:54:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51f0df6e796a49c8b1e4f18f83b933f5',uuid=ce0fbe07-9503-45c6-a10c-1c09f27dd045,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8ef42ea7-b750-44b5-9353-fbc089ba0eef", "address": "fa:16:3e:ea:bc:ef", "network": {"id": "c9d562fc-0c1c-4b41-aa7c-4cb07be574c7", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-481427918-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0abf1cc99d79491f87a03f334eb255f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ef42ea7-b7", "ovs_interfaceid": "8ef42ea7-b750-44b5-9353-fbc089ba0eef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 12:54:39 np0005464891 nova_compute[259907]: 2025-10-01 16:54:39.074 2 DEBUG nova.network.os_vif_util [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Converting VIF {"id": "8ef42ea7-b750-44b5-9353-fbc089ba0eef", "address": "fa:16:3e:ea:bc:ef", "network": {"id": "c9d562fc-0c1c-4b41-aa7c-4cb07be574c7", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-481427918-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0abf1cc99d79491f87a03f334eb255f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ef42ea7-b7", "ovs_interfaceid": "8ef42ea7-b750-44b5-9353-fbc089ba0eef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:54:39 np0005464891 nova_compute[259907]: 2025-10-01 16:54:39.075 2 DEBUG nova.network.os_vif_util [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ea:bc:ef,bridge_name='br-int',has_traffic_filtering=True,id=8ef42ea7-b750-44b5-9353-fbc089ba0eef,network=Network(c9d562fc-0c1c-4b41-aa7c-4cb07be574c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ef42ea7-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:54:39 np0005464891 nova_compute[259907]: 2025-10-01 16:54:39.076 2 DEBUG os_vif [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ea:bc:ef,bridge_name='br-int',has_traffic_filtering=True,id=8ef42ea7-b750-44b5-9353-fbc089ba0eef,network=Network(c9d562fc-0c1c-4b41-aa7c-4cb07be574c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ef42ea7-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 12:54:39 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f52886f1c11c0476ad2ae2d3a21de8c8955a097c6060ffb2cc80bc1448c22863-userdata-shm.mount: Deactivated successfully.
Oct  1 12:54:39 np0005464891 systemd[1]: var-lib-containers-storage-overlay-96bbee777dc2df8955b3662960af3d1294adf681d3f83e062c8c58834cc3094a-merged.mount: Deactivated successfully.
Oct  1 12:54:39 np0005464891 nova_compute[259907]: 2025-10-01 16:54:39.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:39 np0005464891 nova_compute[259907]: 2025-10-01 16:54:39.080 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ef42ea7-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:54:39 np0005464891 nova_compute[259907]: 2025-10-01 16:54:39.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:39 np0005464891 nova_compute[259907]: 2025-10-01 16:54:39.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:39 np0005464891 nova_compute[259907]: 2025-10-01 16:54:39.088 2 INFO os_vif [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ea:bc:ef,bridge_name='br-int',has_traffic_filtering=True,id=8ef42ea7-b750-44b5-9353-fbc089ba0eef,network=Network(c9d562fc-0c1c-4b41-aa7c-4cb07be574c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ef42ea7-b7')#033[00m
Oct  1 12:54:39 np0005464891 podman[291353]: 2025-10-01 16:54:39.094175233 +0000 UTC m=+0.123890868 container cleanup f52886f1c11c0476ad2ae2d3a21de8c8955a097c6060ffb2cc80bc1448c22863 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  1 12:54:39 np0005464891 systemd[1]: libpod-conmon-f52886f1c11c0476ad2ae2d3a21de8c8955a097c6060ffb2cc80bc1448c22863.scope: Deactivated successfully.
Oct  1 12:54:39 np0005464891 podman[291400]: 2025-10-01 16:54:39.167484954 +0000 UTC m=+0.052396967 container remove f52886f1c11c0476ad2ae2d3a21de8c8955a097c6060ffb2cc80bc1448c22863 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:54:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:39.175 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[8d12ac1a-0d5e-4423-aa5e-834f0048033b]: (4, ('Wed Oct  1 04:54:38 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7 (f52886f1c11c0476ad2ae2d3a21de8c8955a097c6060ffb2cc80bc1448c22863)\nf52886f1c11c0476ad2ae2d3a21de8c8955a097c6060ffb2cc80bc1448c22863\nWed Oct  1 04:54:39 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7 (f52886f1c11c0476ad2ae2d3a21de8c8955a097c6060ffb2cc80bc1448c22863)\nf52886f1c11c0476ad2ae2d3a21de8c8955a097c6060ffb2cc80bc1448c22863\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:39.177 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[e1edcaef-1025-4ab7-91a9-fb3b36cdeb6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:39.178 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9d562fc-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:54:39 np0005464891 nova_compute[259907]: 2025-10-01 16:54:39.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:39 np0005464891 kernel: tapc9d562fc-00: left promiscuous mode
Oct  1 12:54:39 np0005464891 nova_compute[259907]: 2025-10-01 16:54:39.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:39.198 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[436dac92-4909-4af6-9c31-2eb03a305f74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:39 np0005464891 nova_compute[259907]: 2025-10-01 16:54:39.211 2 DEBUG nova.compute.manager [req-93316e39-3e3d-423e-a849-5a3d4924a8d3 req-b0d11e2c-40a7-401c-bdee-675e2d1e1ae6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Received event network-vif-unplugged-8ef42ea7-b750-44b5-9353-fbc089ba0eef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:54:39 np0005464891 nova_compute[259907]: 2025-10-01 16:54:39.212 2 DEBUG oslo_concurrency.lockutils [req-93316e39-3e3d-423e-a849-5a3d4924a8d3 req-b0d11e2c-40a7-401c-bdee-675e2d1e1ae6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:54:39 np0005464891 nova_compute[259907]: 2025-10-01 16:54:39.212 2 DEBUG oslo_concurrency.lockutils [req-93316e39-3e3d-423e-a849-5a3d4924a8d3 req-b0d11e2c-40a7-401c-bdee-675e2d1e1ae6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:54:39 np0005464891 nova_compute[259907]: 2025-10-01 16:54:39.212 2 DEBUG oslo_concurrency.lockutils [req-93316e39-3e3d-423e-a849-5a3d4924a8d3 req-b0d11e2c-40a7-401c-bdee-675e2d1e1ae6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:54:39 np0005464891 nova_compute[259907]: 2025-10-01 16:54:39.213 2 DEBUG nova.compute.manager [req-93316e39-3e3d-423e-a849-5a3d4924a8d3 req-b0d11e2c-40a7-401c-bdee-675e2d1e1ae6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] No waiting events found dispatching network-vif-unplugged-8ef42ea7-b750-44b5-9353-fbc089ba0eef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:54:39 np0005464891 nova_compute[259907]: 2025-10-01 16:54:39.213 2 DEBUG nova.compute.manager [req-93316e39-3e3d-423e-a849-5a3d4924a8d3 req-b0d11e2c-40a7-401c-bdee-675e2d1e1ae6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Received event network-vif-unplugged-8ef42ea7-b750-44b5-9353-fbc089ba0eef for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 12:54:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:39.229 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[def1ec5e-9127-4683-ab6c-75150b981d0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:39.230 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[02fd8a53-a7c3-4e5c-805f-b45f37bf908c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:39.245 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[3f15ef71-214e-49ba-a3e9-d06c61ca1689]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459906, 'reachable_time': 20160, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291426, 'error': None, 'target': 'ovnmeta-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:39 np0005464891 systemd[1]: run-netns-ovnmeta\x2dc9d562fc\x2d0c1c\x2d4b41\x2daa7c\x2d4cb07be574c7.mount: Deactivated successfully.
Oct  1 12:54:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:39.253 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c9d562fc-0c1c-4b41-aa7c-4cb07be574c7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 12:54:39 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:54:39.253 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[0a69cd69-cdcf-44d8-ae20-ac74d51e1361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:39 np0005464891 nova_compute[259907]: 2025-10-01 16:54:39.752 2 INFO nova.virt.libvirt.driver [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Deleting instance files /var/lib/nova/instances/ce0fbe07-9503-45c6-a10c-1c09f27dd045_del#033[00m
Oct  1 12:54:39 np0005464891 nova_compute[259907]: 2025-10-01 16:54:39.753 2 INFO nova.virt.libvirt.driver [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Deletion of /var/lib/nova/instances/ce0fbe07-9503-45c6-a10c-1c09f27dd045_del complete#033[00m
Oct  1 12:54:40 np0005464891 nova_compute[259907]: 2025-10-01 16:54:40.040 2 INFO nova.compute.manager [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Took 1.26 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 12:54:40 np0005464891 nova_compute[259907]: 2025-10-01 16:54:40.041 2 DEBUG oslo.service.loopingcall [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 12:54:40 np0005464891 nova_compute[259907]: 2025-10-01 16:54:40.042 2 DEBUG nova.compute.manager [-] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 12:54:40 np0005464891 nova_compute[259907]: 2025-10-01 16:54:40.042 2 DEBUG nova.network.neutron [-] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 12:54:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1583: 321 pgs: 321 active+clean; 187 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 287 KiB/s rd, 1.4 MiB/s wr, 161 op/s
Oct  1 12:54:41 np0005464891 nova_compute[259907]: 2025-10-01 16:54:41.095 2 DEBUG nova.network.neutron [-] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:54:41 np0005464891 nova_compute[259907]: 2025-10-01 16:54:41.134 2 INFO nova.compute.manager [-] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Took 1.09 seconds to deallocate network for instance.#033[00m
Oct  1 12:54:41 np0005464891 nova_compute[259907]: 2025-10-01 16:54:41.180 2 DEBUG oslo_concurrency.lockutils [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:54:41 np0005464891 nova_compute[259907]: 2025-10-01 16:54:41.181 2 DEBUG oslo_concurrency.lockutils [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:54:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:54:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e378 do_prune osdmap full prune enabled
Oct  1 12:54:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e379 e379: 3 total, 3 up, 3 in
Oct  1 12:54:41 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e379: 3 total, 3 up, 3 in
Oct  1 12:54:41 np0005464891 nova_compute[259907]: 2025-10-01 16:54:41.243 2 DEBUG oslo_concurrency.processutils [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:54:41 np0005464891 nova_compute[259907]: 2025-10-01 16:54:41.309 2 DEBUG nova.compute.manager [req-063bb3d9-2d93-4011-bcc3-44e7c39f8f6b req-aa6997fe-af1d-4c11-ace7-936f4af20373 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Received event network-vif-deleted-8ef42ea7-b750-44b5-9353-fbc089ba0eef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:54:41 np0005464891 nova_compute[259907]: 2025-10-01 16:54:41.346 2 DEBUG nova.compute.manager [req-ed1a256e-f876-48c0-b105-0844f2e1dd4c req-aee334fa-bbf8-4879-9c99-027635332f9c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Received event network-vif-plugged-8ef42ea7-b750-44b5-9353-fbc089ba0eef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:54:41 np0005464891 nova_compute[259907]: 2025-10-01 16:54:41.346 2 DEBUG oslo_concurrency.lockutils [req-ed1a256e-f876-48c0-b105-0844f2e1dd4c req-aee334fa-bbf8-4879-9c99-027635332f9c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:54:41 np0005464891 nova_compute[259907]: 2025-10-01 16:54:41.346 2 DEBUG oslo_concurrency.lockutils [req-ed1a256e-f876-48c0-b105-0844f2e1dd4c req-aee334fa-bbf8-4879-9c99-027635332f9c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:54:41 np0005464891 nova_compute[259907]: 2025-10-01 16:54:41.347 2 DEBUG oslo_concurrency.lockutils [req-ed1a256e-f876-48c0-b105-0844f2e1dd4c req-aee334fa-bbf8-4879-9c99-027635332f9c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:54:41 np0005464891 nova_compute[259907]: 2025-10-01 16:54:41.347 2 DEBUG nova.compute.manager [req-ed1a256e-f876-48c0-b105-0844f2e1dd4c req-aee334fa-bbf8-4879-9c99-027635332f9c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] No waiting events found dispatching network-vif-plugged-8ef42ea7-b750-44b5-9353-fbc089ba0eef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:54:41 np0005464891 nova_compute[259907]: 2025-10-01 16:54:41.347 2 WARNING nova.compute.manager [req-ed1a256e-f876-48c0-b105-0844f2e1dd4c req-aee334fa-bbf8-4879-9c99-027635332f9c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Received unexpected event network-vif-plugged-8ef42ea7-b750-44b5-9353-fbc089ba0eef for instance with vm_state deleted and task_state None.#033[00m
Oct  1 12:54:41 np0005464891 nova_compute[259907]: 2025-10-01 16:54:41.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:54:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/910608865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:54:41 np0005464891 nova_compute[259907]: 2025-10-01 16:54:41.703 2 DEBUG oslo_concurrency.processutils [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:54:41 np0005464891 nova_compute[259907]: 2025-10-01 16:54:41.713 2 DEBUG nova.compute.provider_tree [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:54:41 np0005464891 nova_compute[259907]: 2025-10-01 16:54:41.757 2 DEBUG nova.scheduler.client.report [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:54:41 np0005464891 nova_compute[259907]: 2025-10-01 16:54:41.846 2 DEBUG oslo_concurrency.lockutils [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:54:41 np0005464891 nova_compute[259907]: 2025-10-01 16:54:41.870 2 DEBUG oslo_concurrency.lockutils [None req-1c1b1e82-c1c0-4c8f-8a4b-e277a41887bd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Acquiring lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:54:41 np0005464891 nova_compute[259907]: 2025-10-01 16:54:41.870 2 DEBUG oslo_concurrency.lockutils [None req-1c1b1e82-c1c0-4c8f-8a4b-e277a41887bd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:54:41 np0005464891 nova_compute[259907]: 2025-10-01 16:54:41.920 2 INFO nova.scheduler.client.report [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Deleted allocations for instance ce0fbe07-9503-45c6-a10c-1c09f27dd045#033[00m
Oct  1 12:54:41 np0005464891 nova_compute[259907]: 2025-10-01 16:54:41.928 2 DEBUG nova.objects.instance [None req-1c1b1e82-c1c0-4c8f-8a4b-e277a41887bd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lazy-loading 'flavor' on Instance uuid eef473c3-8fff-4cd4-a5f8-ef9b89b7439a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:54:41 np0005464891 podman[291450]: 2025-10-01 16:54:41.943306379 +0000 UTC m=+0.056908838 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true)
Oct  1 12:54:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:54:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:54:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:54:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:54:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:54:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:54:42 np0005464891 nova_compute[259907]: 2025-10-01 16:54:42.167 2 DEBUG oslo_concurrency.lockutils [None req-1c1b1e82-c1c0-4c8f-8a4b-e277a41887bd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:54:42 np0005464891 nova_compute[259907]: 2025-10-01 16:54:42.346 2 DEBUG oslo_concurrency.lockutils [None req-d5545043-f3dc-486c-8b4a-7b21186f4651 51f0df6e796a49c8b1e4f18f83b933f5 0abf1cc99d79491f87a03f334eb255f1 - - default default] Lock "ce0fbe07-9503-45c6-a10c-1c09f27dd045" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:54:42 np0005464891 nova_compute[259907]: 2025-10-01 16:54:42.558 2 DEBUG oslo_concurrency.lockutils [None req-1c1b1e82-c1c0-4c8f-8a4b-e277a41887bd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Acquiring lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:54:42 np0005464891 nova_compute[259907]: 2025-10-01 16:54:42.559 2 DEBUG oslo_concurrency.lockutils [None req-1c1b1e82-c1c0-4c8f-8a4b-e277a41887bd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:54:42 np0005464891 nova_compute[259907]: 2025-10-01 16:54:42.559 2 INFO nova.compute.manager [None req-1c1b1e82-c1c0-4c8f-8a4b-e277a41887bd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Attaching volume 6cfd5404-4c44-4198-a2d1-240407d0a6a3 to /dev/vdb#033[00m
Oct  1 12:54:42 np0005464891 nova_compute[259907]: 2025-10-01 16:54:42.725 2 DEBUG os_brick.utils [None req-1c1b1e82-c1c0-4c8f-8a4b-e277a41887bd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 12:54:42 np0005464891 nova_compute[259907]: 2025-10-01 16:54:42.727 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:54:42 np0005464891 nova_compute[259907]: 2025-10-01 16:54:42.742 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:54:42 np0005464891 nova_compute[259907]: 2025-10-01 16:54:42.742 741 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2dfe3e-30b4-4e35-bda0-1529bf6d6cd2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:42 np0005464891 nova_compute[259907]: 2025-10-01 16:54:42.744 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:54:42 np0005464891 nova_compute[259907]: 2025-10-01 16:54:42.756 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:54:42 np0005464891 nova_compute[259907]: 2025-10-01 16:54:42.756 741 DEBUG oslo.privsep.daemon [-] privsep: reply[b3d2bfc3-b736-4a58-8b75-7ed404a14a14]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:42 np0005464891 nova_compute[259907]: 2025-10-01 16:54:42.758 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:54:42 np0005464891 nova_compute[259907]: 2025-10-01 16:54:42.774 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:54:42 np0005464891 nova_compute[259907]: 2025-10-01 16:54:42.774 741 DEBUG oslo.privsep.daemon [-] privsep: reply[a5910352-dc5f-4fec-8fed-685206324585]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:42 np0005464891 nova_compute[259907]: 2025-10-01 16:54:42.776 741 DEBUG oslo.privsep.daemon [-] privsep: reply[ece514cf-a74e-42ee-9f80-6c81e0289197]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:54:42 np0005464891 nova_compute[259907]: 2025-10-01 16:54:42.777 2 DEBUG oslo_concurrency.processutils [None req-1c1b1e82-c1c0-4c8f-8a4b-e277a41887bd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:54:42 np0005464891 nova_compute[259907]: 2025-10-01 16:54:42.810 2 DEBUG oslo_concurrency.processutils [None req-1c1b1e82-c1c0-4c8f-8a4b-e277a41887bd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:54:42 np0005464891 nova_compute[259907]: 2025-10-01 16:54:42.812 2 DEBUG os_brick.initiator.connectors.lightos [None req-1c1b1e82-c1c0-4c8f-8a4b-e277a41887bd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 12:54:42 np0005464891 nova_compute[259907]: 2025-10-01 16:54:42.813 2 DEBUG os_brick.initiator.connectors.lightos [None req-1c1b1e82-c1c0-4c8f-8a4b-e277a41887bd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 12:54:42 np0005464891 nova_compute[259907]: 2025-10-01 16:54:42.813 2 DEBUG os_brick.initiator.connectors.lightos [None req-1c1b1e82-c1c0-4c8f-8a4b-e277a41887bd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 12:54:42 np0005464891 nova_compute[259907]: 2025-10-01 16:54:42.814 2 DEBUG os_brick.utils [None req-1c1b1e82-c1c0-4c8f-8a4b-e277a41887bd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] <== get_connector_properties: return (87ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 12:54:42 np0005464891 nova_compute[259907]: 2025-10-01 16:54:42.814 2 DEBUG nova.virt.block_device [None req-1c1b1e82-c1c0-4c8f-8a4b-e277a41887bd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Updating existing volume attachment record: faa7e18a-b094-4912-84a0-9c1f4895b920 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 12:54:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1585: 321 pgs: 321 active+clean; 167 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 29 KiB/s wr, 97 op/s
Oct  1 12:54:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:54:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2423793757' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:54:43 np0005464891 nova_compute[259907]: 2025-10-01 16:54:43.502 2 DEBUG nova.objects.instance [None req-1c1b1e82-c1c0-4c8f-8a4b-e277a41887bd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lazy-loading 'flavor' on Instance uuid eef473c3-8fff-4cd4-a5f8-ef9b89b7439a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:54:43 np0005464891 nova_compute[259907]: 2025-10-01 16:54:43.539 2 DEBUG nova.virt.libvirt.driver [None req-1c1b1e82-c1c0-4c8f-8a4b-e277a41887bd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Attempting to attach volume 6cfd5404-4c44-4198-a2d1-240407d0a6a3 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  1 12:54:43 np0005464891 nova_compute[259907]: 2025-10-01 16:54:43.543 2 DEBUG nova.virt.libvirt.guest [None req-1c1b1e82-c1c0-4c8f-8a4b-e277a41887bd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] attach device xml: <disk type="network" device="disk">
Oct  1 12:54:43 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:54:43 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-6cfd5404-4c44-4198-a2d1-240407d0a6a3">
Oct  1 12:54:43 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:54:43 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:54:43 np0005464891 nova_compute[259907]:  <auth username="openstack">
Oct  1 12:54:43 np0005464891 nova_compute[259907]:    <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:54:43 np0005464891 nova_compute[259907]:  </auth>
Oct  1 12:54:43 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:54:43 np0005464891 nova_compute[259907]:  <serial>6cfd5404-4c44-4198-a2d1-240407d0a6a3</serial>
Oct  1 12:54:43 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:54:43 np0005464891 nova_compute[259907]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  1 12:54:43 np0005464891 nova_compute[259907]: 2025-10-01 16:54:43.814 2 DEBUG nova.virt.libvirt.driver [None req-1c1b1e82-c1c0-4c8f-8a4b-e277a41887bd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:54:43 np0005464891 nova_compute[259907]: 2025-10-01 16:54:43.815 2 DEBUG nova.virt.libvirt.driver [None req-1c1b1e82-c1c0-4c8f-8a4b-e277a41887bd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:54:43 np0005464891 nova_compute[259907]: 2025-10-01 16:54:43.816 2 DEBUG nova.virt.libvirt.driver [None req-1c1b1e82-c1c0-4c8f-8a4b-e277a41887bd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:54:43 np0005464891 nova_compute[259907]: 2025-10-01 16:54:43.817 2 DEBUG nova.virt.libvirt.driver [None req-1c1b1e82-c1c0-4c8f-8a4b-e277a41887bd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] No VIF found with MAC fa:16:3e:b5:a9:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:54:44 np0005464891 nova_compute[259907]: 2025-10-01 16:54:44.033 2 DEBUG oslo_concurrency.lockutils [None req-1c1b1e82-c1c0-4c8f-8a4b-e277a41887bd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.474s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:54:44 np0005464891 nova_compute[259907]: 2025-10-01 16:54:44.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:54:44 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2976422525' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:54:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:54:44 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2976422525' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:54:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:54:44 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/112354013' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:54:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1586: 321 pgs: 321 active+clean; 167 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 28 KiB/s wr, 111 op/s
Oct  1 12:54:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e379 do_prune osdmap full prune enabled
Oct  1 12:54:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e380 e380: 3 total, 3 up, 3 in
Oct  1 12:54:45 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e380: 3 total, 3 up, 3 in
Oct  1 12:54:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:54:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e380 do_prune osdmap full prune enabled
Oct  1 12:54:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e381 e381: 3 total, 3 up, 3 in
Oct  1 12:54:46 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e381: 3 total, 3 up, 3 in
Oct  1 12:54:46 np0005464891 nova_compute[259907]: 2025-10-01 16:54:46.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:46 np0005464891 podman[291498]: 2025-10-01 16:54:46.944417242 +0000 UTC m=+0.055563322 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:54:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1589: 321 pgs: 321 active+clean; 167 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 3.2 KiB/s wr, 38 op/s
Oct  1 12:54:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e381 do_prune osdmap full prune enabled
Oct  1 12:54:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e382 e382: 3 total, 3 up, 3 in
Oct  1 12:54:47 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e382: 3 total, 3 up, 3 in
Oct  1 12:54:48 np0005464891 ovn_controller[152409]: 2025-10-01T16:54:48Z|00149|binding|INFO|Releasing lport 2980a674-8e6a-4461-8bb6-70fb63ec12c0 from this chassis (sb_readonly=0)
Oct  1 12:54:48 np0005464891 nova_compute[259907]: 2025-10-01 16:54:48.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1591: 321 pgs: 321 active+clean; 167 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 9.8 KiB/s wr, 67 op/s
Oct  1 12:54:49 np0005464891 nova_compute[259907]: 2025-10-01 16:54:49.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e382 do_prune osdmap full prune enabled
Oct  1 12:54:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e383 e383: 3 total, 3 up, 3 in
Oct  1 12:54:49 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e383: 3 total, 3 up, 3 in
Oct  1 12:54:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e383 do_prune osdmap full prune enabled
Oct  1 12:54:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e384 e384: 3 total, 3 up, 3 in
Oct  1 12:54:50 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e384: 3 total, 3 up, 3 in
Oct  1 12:54:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:54:51 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3828075784' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:54:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:54:51 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3828075784' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:54:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1594: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 169 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 216 KiB/s wr, 140 op/s
Oct  1 12:54:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:54:51 np0005464891 nova_compute[259907]: 2025-10-01 16:54:51.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:51 np0005464891 ovn_controller[152409]: 2025-10-01T16:54:51Z|00150|binding|INFO|Releasing lport 2980a674-8e6a-4461-8bb6-70fb63ec12c0 from this chassis (sb_readonly=0)
Oct  1 12:54:51 np0005464891 nova_compute[259907]: 2025-10-01 16:54:51.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1595: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 169 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 184 KiB/s wr, 139 op/s
Oct  1 12:54:54 np0005464891 nova_compute[259907]: 2025-10-01 16:54:54.015 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759337679.014603, ce0fbe07-9503-45c6-a10c-1c09f27dd045 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:54:54 np0005464891 nova_compute[259907]: 2025-10-01 16:54:54.016 2 INFO nova.compute.manager [-] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] VM Stopped (Lifecycle Event)#033[00m
Oct  1 12:54:54 np0005464891 nova_compute[259907]: 2025-10-01 16:54:54.083 2 DEBUG nova.compute.manager [None req-339f1cae-14f7-4e49-8ef7-bbbbb5720256 - - - - - -] [instance: ce0fbe07-9503-45c6-a10c-1c09f27dd045] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:54:54 np0005464891 nova_compute[259907]: 2025-10-01 16:54:54.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1596: 321 pgs: 321 active+clean; 170 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 171 KiB/s wr, 145 op/s
Oct  1 12:54:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e384 do_prune osdmap full prune enabled
Oct  1 12:54:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e385 e385: 3 total, 3 up, 3 in
Oct  1 12:54:55 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e385: 3 total, 3 up, 3 in
Oct  1 12:54:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:54:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e385 do_prune osdmap full prune enabled
Oct  1 12:54:56 np0005464891 nova_compute[259907]: 2025-10-01 16:54:56.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e386 e386: 3 total, 3 up, 3 in
Oct  1 12:54:56 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e386: 3 total, 3 up, 3 in
Oct  1 12:54:56 np0005464891 nova_compute[259907]: 2025-10-01 16:54:56.832 2 DEBUG oslo_concurrency.lockutils [None req-decebbdc-9674-4616-90ab-9ada7147d588 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Acquiring lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:54:56 np0005464891 nova_compute[259907]: 2025-10-01 16:54:56.832 2 DEBUG oslo_concurrency.lockutils [None req-decebbdc-9674-4616-90ab-9ada7147d588 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:54:56 np0005464891 nova_compute[259907]: 2025-10-01 16:54:56.891 2 INFO nova.compute.manager [None req-decebbdc-9674-4616-90ab-9ada7147d588 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Detaching volume 6cfd5404-4c44-4198-a2d1-240407d0a6a3#033[00m
Oct  1 12:54:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1599: 321 pgs: 321 active+clean; 170 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 41 KiB/s wr, 66 op/s
Oct  1 12:54:58 np0005464891 nova_compute[259907]: 2025-10-01 16:54:58.275 2 INFO nova.virt.block_device [None req-decebbdc-9674-4616-90ab-9ada7147d588 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Attempting to driver detach volume 6cfd5404-4c44-4198-a2d1-240407d0a6a3 from mountpoint /dev/vdb#033[00m
Oct  1 12:54:58 np0005464891 nova_compute[259907]: 2025-10-01 16:54:58.284 2 DEBUG nova.virt.libvirt.driver [None req-decebbdc-9674-4616-90ab-9ada7147d588 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Attempting to detach device vdb from instance eef473c3-8fff-4cd4-a5f8-ef9b89b7439a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  1 12:54:58 np0005464891 nova_compute[259907]: 2025-10-01 16:54:58.284 2 DEBUG nova.virt.libvirt.guest [None req-decebbdc-9674-4616-90ab-9ada7147d588 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:54:58 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:54:58 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-6cfd5404-4c44-4198-a2d1-240407d0a6a3">
Oct  1 12:54:58 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:54:58 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:54:58 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:54:58 np0005464891 nova_compute[259907]:  <serial>6cfd5404-4c44-4198-a2d1-240407d0a6a3</serial>
Oct  1 12:54:58 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:54:58 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:54:58 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:54:58 np0005464891 nova_compute[259907]: 2025-10-01 16:54:58.329 2 INFO nova.virt.libvirt.driver [None req-decebbdc-9674-4616-90ab-9ada7147d588 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Successfully detached device vdb from instance eef473c3-8fff-4cd4-a5f8-ef9b89b7439a from the persistent domain config.#033[00m
Oct  1 12:54:58 np0005464891 nova_compute[259907]: 2025-10-01 16:54:58.330 2 DEBUG nova.virt.libvirt.driver [None req-decebbdc-9674-4616-90ab-9ada7147d588 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance eef473c3-8fff-4cd4-a5f8-ef9b89b7439a from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  1 12:54:58 np0005464891 nova_compute[259907]: 2025-10-01 16:54:58.330 2 DEBUG nova.virt.libvirt.guest [None req-decebbdc-9674-4616-90ab-9ada7147d588 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:54:58 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:54:58 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-6cfd5404-4c44-4198-a2d1-240407d0a6a3">
Oct  1 12:54:58 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:54:58 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:54:58 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:54:58 np0005464891 nova_compute[259907]:  <serial>6cfd5404-4c44-4198-a2d1-240407d0a6a3</serial>
Oct  1 12:54:58 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:54:58 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:54:58 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:54:58 np0005464891 nova_compute[259907]: 2025-10-01 16:54:58.795 2 DEBUG nova.virt.libvirt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Received event <DeviceRemovedEvent: 1759337698.794799, eef473c3-8fff-4cd4-a5f8-ef9b89b7439a => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  1 12:54:58 np0005464891 nova_compute[259907]: 2025-10-01 16:54:58.814 2 DEBUG nova.virt.libvirt.driver [None req-decebbdc-9674-4616-90ab-9ada7147d588 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance eef473c3-8fff-4cd4-a5f8-ef9b89b7439a _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  1 12:54:58 np0005464891 nova_compute[259907]: 2025-10-01 16:54:58.818 2 INFO nova.virt.libvirt.driver [None req-decebbdc-9674-4616-90ab-9ada7147d588 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Successfully detached device vdb from instance eef473c3-8fff-4cd4-a5f8-ef9b89b7439a from the live domain config.#033[00m
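[editor's annotation] The `<disk>` XML that nova_compute logs before each `detach_device`/`attach_device` call above can be reconstructed with a short stdlib sketch. This is a minimal illustration, not nova's actual generator; the `rbd_disk_xml` helper is hypothetical, and the volume UUID and monitor address are copied from the log lines above:

```python
import xml.etree.ElementTree as ET

def rbd_disk_xml(volume_id, mon_host, mon_port="6789", dev="vdb"):
    """Build a libvirt <disk> element for an RBD-backed volume,
    mirroring the XML nova_compute logs before detach_device.
    (Illustrative helper only; not nova's real code.)"""
    disk = ET.Element("disk", type="network", device="disk")
    # qemu raw driver with discard passthrough, as in the logged XML
    ET.SubElement(disk, "driver", name="qemu", type="raw",
                  cache="none", discard="unmap")
    # the RBD source names the pool/image and lists one ceph monitor
    source = ET.SubElement(disk, "source", protocol="rbd",
                           name=f"volumes/volume-{volume_id}")
    ET.SubElement(source, "host", name=mon_host, port=mon_port)
    ET.SubElement(disk, "target", dev=dev, bus="virtio")
    # nova sets the serial to the cinder volume UUID so the guest
    # can correlate the block device with the volume
    ET.SubElement(disk, "serial").text = volume_id

    return ET.tostring(disk, encoding="unicode")

xml = rbd_disk_xml("6cfd5404-4c44-4198-a2d1-240407d0a6a3", "192.168.122.100")
```

In a real detach, nova passes this XML to libvirt twice, once against the persistent domain definition and once against the live domain, which matches the two "Successfully detached device vdb" messages in the log.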
Oct  1 12:54:58 np0005464891 podman[291520]: 2025-10-01 16:54:58.945136931 +0000 UTC m=+0.065245003 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  1 12:54:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1600: 321 pgs: 321 active+clean; 170 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 33 KiB/s wr, 57 op/s
Oct  1 12:54:59 np0005464891 nova_compute[259907]: 2025-10-01 16:54:59.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:54:59 np0005464891 nova_compute[259907]: 2025-10-01 16:54:59.275 2 DEBUG nova.objects.instance [None req-decebbdc-9674-4616-90ab-9ada7147d588 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lazy-loading 'flavor' on Instance uuid eef473c3-8fff-4cd4-a5f8-ef9b89b7439a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:54:59 np0005464891 nova_compute[259907]: 2025-10-01 16:54:59.396 2 DEBUG oslo_concurrency.lockutils [None req-decebbdc-9674-4616-90ab-9ada7147d588 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 2.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1601: 321 pgs: 321 active+clean; 170 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 26 KiB/s wr, 50 op/s
Oct  1 12:55:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e386 do_prune osdmap full prune enabled
Oct  1 12:55:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e387 e387: 3 total, 3 up, 3 in
Oct  1 12:55:01 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e387: 3 total, 3 up, 3 in
Oct  1 12:55:01 np0005464891 nova_compute[259907]: 2025-10-01 16:55:01.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:01 np0005464891 nova_compute[259907]: 2025-10-01 16:55:01.567 2 DEBUG oslo_concurrency.lockutils [None req-318f0dd9-3424-46a1-b038-324dd4d00ffd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Acquiring lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:01 np0005464891 nova_compute[259907]: 2025-10-01 16:55:01.567 2 DEBUG oslo_concurrency.lockutils [None req-318f0dd9-3424-46a1-b038-324dd4d00ffd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:01 np0005464891 nova_compute[259907]: 2025-10-01 16:55:01.593 2 DEBUG nova.objects.instance [None req-318f0dd9-3424-46a1-b038-324dd4d00ffd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lazy-loading 'flavor' on Instance uuid eef473c3-8fff-4cd4-a5f8-ef9b89b7439a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:55:01 np0005464891 nova_compute[259907]: 2025-10-01 16:55:01.654 2 DEBUG oslo_concurrency.lockutils [None req-318f0dd9-3424-46a1-b038-324dd4d00ffd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:01 np0005464891 nova_compute[259907]: 2025-10-01 16:55:01.870 2 DEBUG oslo_concurrency.lockutils [None req-318f0dd9-3424-46a1-b038-324dd4d00ffd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Acquiring lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:01 np0005464891 nova_compute[259907]: 2025-10-01 16:55:01.871 2 DEBUG oslo_concurrency.lockutils [None req-318f0dd9-3424-46a1-b038-324dd4d00ffd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:01 np0005464891 nova_compute[259907]: 2025-10-01 16:55:01.871 2 INFO nova.compute.manager [None req-318f0dd9-3424-46a1-b038-324dd4d00ffd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Attaching volume fe480ed7-39cd-4004-ac8b-751b7e501510 to /dev/vdb#033[00m
Oct  1 12:55:02 np0005464891 nova_compute[259907]: 2025-10-01 16:55:02.057 2 DEBUG os_brick.utils [None req-318f0dd9-3424-46a1-b038-324dd4d00ffd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 12:55:02 np0005464891 nova_compute[259907]: 2025-10-01 16:55:02.059 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:55:02 np0005464891 nova_compute[259907]: 2025-10-01 16:55:02.081 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:55:02 np0005464891 nova_compute[259907]: 2025-10-01 16:55:02.081 741 DEBUG oslo.privsep.daemon [-] privsep: reply[93d1f616-dff1-417a-b834-85ea27035a44]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:02 np0005464891 nova_compute[259907]: 2025-10-01 16:55:02.083 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:55:02 np0005464891 nova_compute[259907]: 2025-10-01 16:55:02.094 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:55:02 np0005464891 nova_compute[259907]: 2025-10-01 16:55:02.094 741 DEBUG oslo.privsep.daemon [-] privsep: reply[e61a649d-17a5-4f21-87b0-03744569c4ce]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:02 np0005464891 nova_compute[259907]: 2025-10-01 16:55:02.097 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:55:02 np0005464891 nova_compute[259907]: 2025-10-01 16:55:02.108 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:55:02 np0005464891 nova_compute[259907]: 2025-10-01 16:55:02.109 741 DEBUG oslo.privsep.daemon [-] privsep: reply[781a930f-2101-4418-92dd-6a6e04de9bd0]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:02 np0005464891 nova_compute[259907]: 2025-10-01 16:55:02.110 741 DEBUG oslo.privsep.daemon [-] privsep: reply[041f7ff0-e35f-4965-a485-0cdb4fa0a879]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:02 np0005464891 nova_compute[259907]: 2025-10-01 16:55:02.111 2 DEBUG oslo_concurrency.processutils [None req-318f0dd9-3424-46a1-b038-324dd4d00ffd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:55:02 np0005464891 nova_compute[259907]: 2025-10-01 16:55:02.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:02 np0005464891 nova_compute[259907]: 2025-10-01 16:55:02.185 2 DEBUG oslo_concurrency.processutils [None req-318f0dd9-3424-46a1-b038-324dd4d00ffd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] CMD "nvme version" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:55:02 np0005464891 nova_compute[259907]: 2025-10-01 16:55:02.187 2 DEBUG os_brick.initiator.connectors.lightos [None req-318f0dd9-3424-46a1-b038-324dd4d00ffd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 12:55:02 np0005464891 nova_compute[259907]: 2025-10-01 16:55:02.187 2 DEBUG os_brick.initiator.connectors.lightos [None req-318f0dd9-3424-46a1-b038-324dd4d00ffd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 12:55:02 np0005464891 nova_compute[259907]: 2025-10-01 16:55:02.188 2 DEBUG os_brick.initiator.connectors.lightos [None req-318f0dd9-3424-46a1-b038-324dd4d00ffd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 12:55:02 np0005464891 nova_compute[259907]: 2025-10-01 16:55:02.188 2 DEBUG os_brick.utils [None req-318f0dd9-3424-46a1-b038-324dd4d00ffd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] <== get_connector_properties: return (129ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 12:55:02 np0005464891 nova_compute[259907]: 2025-10-01 16:55:02.188 2 DEBUG nova.virt.block_device [None req-318f0dd9-3424-46a1-b038-324dd4d00ffd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Updating existing volume attachment record: 5f803c6f-5dca-4dce-b744-ff4b4d823600 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 12:55:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:55:02 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3395886566' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:55:02 np0005464891 nova_compute[259907]: 2025-10-01 16:55:02.995 2 DEBUG nova.objects.instance [None req-318f0dd9-3424-46a1-b038-324dd4d00ffd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lazy-loading 'flavor' on Instance uuid eef473c3-8fff-4cd4-a5f8-ef9b89b7439a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:55:03 np0005464891 nova_compute[259907]: 2025-10-01 16:55:03.031 2 DEBUG nova.virt.libvirt.driver [None req-318f0dd9-3424-46a1-b038-324dd4d00ffd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Attempting to attach volume fe480ed7-39cd-4004-ac8b-751b7e501510 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  1 12:55:03 np0005464891 nova_compute[259907]: 2025-10-01 16:55:03.034 2 DEBUG nova.virt.libvirt.guest [None req-318f0dd9-3424-46a1-b038-324dd4d00ffd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] attach device xml: <disk type="network" device="disk">
Oct  1 12:55:03 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:55:03 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-fe480ed7-39cd-4004-ac8b-751b7e501510">
Oct  1 12:55:03 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:55:03 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:55:03 np0005464891 nova_compute[259907]:  <auth username="openstack">
Oct  1 12:55:03 np0005464891 nova_compute[259907]:    <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:55:03 np0005464891 nova_compute[259907]:  </auth>
Oct  1 12:55:03 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:55:03 np0005464891 nova_compute[259907]:  <serial>fe480ed7-39cd-4004-ac8b-751b7e501510</serial>
Oct  1 12:55:03 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:55:03 np0005464891 nova_compute[259907]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  1 12:55:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1603: 321 pgs: 321 active+clean; 170 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 7.3 KiB/s wr, 22 op/s
Oct  1 12:55:03 np0005464891 nova_compute[259907]: 2025-10-01 16:55:03.159 2 DEBUG nova.virt.libvirt.driver [None req-318f0dd9-3424-46a1-b038-324dd4d00ffd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:55:03 np0005464891 nova_compute[259907]: 2025-10-01 16:55:03.159 2 DEBUG nova.virt.libvirt.driver [None req-318f0dd9-3424-46a1-b038-324dd4d00ffd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:55:03 np0005464891 nova_compute[259907]: 2025-10-01 16:55:03.159 2 DEBUG nova.virt.libvirt.driver [None req-318f0dd9-3424-46a1-b038-324dd4d00ffd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:55:03 np0005464891 nova_compute[259907]: 2025-10-01 16:55:03.160 2 DEBUG nova.virt.libvirt.driver [None req-318f0dd9-3424-46a1-b038-324dd4d00ffd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] No VIF found with MAC fa:16:3e:b5:a9:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:55:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e387 do_prune osdmap full prune enabled
Oct  1 12:55:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e388 e388: 3 total, 3 up, 3 in
Oct  1 12:55:03 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e388: 3 total, 3 up, 3 in
Oct  1 12:55:03 np0005464891 nova_compute[259907]: 2025-10-01 16:55:03.358 2 DEBUG oslo_concurrency.lockutils [None req-318f0dd9-3424-46a1-b038-324dd4d00ffd 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.488s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:04 np0005464891 nova_compute[259907]: 2025-10-01 16:55:04.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1605: 321 pgs: 321 active+clean; 170 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 12 KiB/s wr, 75 op/s
Oct  1 12:55:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:55:06 np0005464891 nova_compute[259907]: 2025-10-01 16:55:06.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:06 np0005464891 nova_compute[259907]: 2025-10-01 16:55:06.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1606: 321 pgs: 321 active+clean; 170 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 11 KiB/s wr, 71 op/s
Oct  1 12:55:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:55:07 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3617970636' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:55:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:55:07 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3617970636' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:55:07 np0005464891 podman[291567]: 2025-10-01 16:55:07.955844507 +0000 UTC m=+0.072497590 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  1 12:55:08 np0005464891 nova_compute[259907]: 2025-10-01 16:55:08.041 2 DEBUG oslo_concurrency.lockutils [None req-6076a9f0-4e6a-4b16-aca3-e291388df5f9 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Acquiring lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:08 np0005464891 nova_compute[259907]: 2025-10-01 16:55:08.042 2 DEBUG oslo_concurrency.lockutils [None req-6076a9f0-4e6a-4b16-aca3-e291388df5f9 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:08 np0005464891 nova_compute[259907]: 2025-10-01 16:55:08.060 2 INFO nova.compute.manager [None req-6076a9f0-4e6a-4b16-aca3-e291388df5f9 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Detaching volume fe480ed7-39cd-4004-ac8b-751b7e501510#033[00m
Oct  1 12:55:08 np0005464891 nova_compute[259907]: 2025-10-01 16:55:08.207 2 INFO nova.virt.block_device [None req-6076a9f0-4e6a-4b16-aca3-e291388df5f9 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Attempting to driver detach volume fe480ed7-39cd-4004-ac8b-751b7e501510 from mountpoint /dev/vdb#033[00m
Oct  1 12:55:08 np0005464891 nova_compute[259907]: 2025-10-01 16:55:08.217 2 DEBUG nova.virt.libvirt.driver [None req-6076a9f0-4e6a-4b16-aca3-e291388df5f9 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Attempting to detach device vdb from instance eef473c3-8fff-4cd4-a5f8-ef9b89b7439a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  1 12:55:08 np0005464891 nova_compute[259907]: 2025-10-01 16:55:08.217 2 DEBUG nova.virt.libvirt.guest [None req-6076a9f0-4e6a-4b16-aca3-e291388df5f9 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:55:08 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:55:08 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-fe480ed7-39cd-4004-ac8b-751b7e501510">
Oct  1 12:55:08 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:55:08 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:55:08 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:55:08 np0005464891 nova_compute[259907]:  <serial>fe480ed7-39cd-4004-ac8b-751b7e501510</serial>
Oct  1 12:55:08 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:55:08 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:55:08 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:55:08 np0005464891 nova_compute[259907]: 2025-10-01 16:55:08.227 2 INFO nova.virt.libvirt.driver [None req-6076a9f0-4e6a-4b16-aca3-e291388df5f9 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Successfully detached device vdb from instance eef473c3-8fff-4cd4-a5f8-ef9b89b7439a from the persistent domain config.#033[00m
Oct  1 12:55:08 np0005464891 nova_compute[259907]: 2025-10-01 16:55:08.228 2 DEBUG nova.virt.libvirt.driver [None req-6076a9f0-4e6a-4b16-aca3-e291388df5f9 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance eef473c3-8fff-4cd4-a5f8-ef9b89b7439a from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  1 12:55:08 np0005464891 nova_compute[259907]: 2025-10-01 16:55:08.229 2 DEBUG nova.virt.libvirt.guest [None req-6076a9f0-4e6a-4b16-aca3-e291388df5f9 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:55:08 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:55:08 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-fe480ed7-39cd-4004-ac8b-751b7e501510">
Oct  1 12:55:08 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:55:08 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:55:08 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:55:08 np0005464891 nova_compute[259907]:  <serial>fe480ed7-39cd-4004-ac8b-751b7e501510</serial>
Oct  1 12:55:08 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:55:08 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:55:08 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:55:08 np0005464891 nova_compute[259907]: 2025-10-01 16:55:08.356 2 DEBUG nova.virt.libvirt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Received event <DeviceRemovedEvent: 1759337708.3548734, eef473c3-8fff-4cd4-a5f8-ef9b89b7439a => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  1 12:55:08 np0005464891 nova_compute[259907]: 2025-10-01 16:55:08.357 2 DEBUG nova.virt.libvirt.driver [None req-6076a9f0-4e6a-4b16-aca3-e291388df5f9 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance eef473c3-8fff-4cd4-a5f8-ef9b89b7439a _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  1 12:55:08 np0005464891 nova_compute[259907]: 2025-10-01 16:55:08.360 2 INFO nova.virt.libvirt.driver [None req-6076a9f0-4e6a-4b16-aca3-e291388df5f9 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Successfully detached device vdb from instance eef473c3-8fff-4cd4-a5f8-ef9b89b7439a from the live domain config.#033[00m
Oct  1 12:55:08 np0005464891 nova_compute[259907]: 2025-10-01 16:55:08.619 2 DEBUG nova.objects.instance [None req-6076a9f0-4e6a-4b16-aca3-e291388df5f9 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lazy-loading 'flavor' on Instance uuid eef473c3-8fff-4cd4-a5f8-ef9b89b7439a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:55:08 np0005464891 nova_compute[259907]: 2025-10-01 16:55:08.669 2 DEBUG oslo_concurrency.lockutils [None req-6076a9f0-4e6a-4b16-aca3-e291388df5f9 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1607: 321 pgs: 321 active+clean; 170 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 82 KiB/s wr, 63 op/s
Oct  1 12:55:09 np0005464891 nova_compute[259907]: 2025-10-01 16:55:09.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e388 do_prune osdmap full prune enabled
Oct  1 12:55:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e389 e389: 3 total, 3 up, 3 in
Oct  1 12:55:10 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e389: 3 total, 3 up, 3 in
Oct  1 12:55:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1609: 321 pgs: 321 active+clean; 170 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 158 KiB/s rd, 91 KiB/s wr, 94 op/s
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:55:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e389 do_prune osdmap full prune enabled
Oct  1 12:55:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e390 e390: 3 total, 3 up, 3 in
Oct  1 12:55:11 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e390: 3 total, 3 up, 3 in
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.317 2 DEBUG oslo_concurrency.lockutils [None req-8887088a-fb16-4764-9622-7e6fc9f1fdb6 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Acquiring lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.317 2 DEBUG oslo_concurrency.lockutils [None req-8887088a-fb16-4764-9622-7e6fc9f1fdb6 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.354 2 DEBUG nova.objects.instance [None req-8887088a-fb16-4764-9622-7e6fc9f1fdb6 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lazy-loading 'flavor' on Instance uuid eef473c3-8fff-4cd4-a5f8-ef9b89b7439a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.414 2 DEBUG oslo_concurrency.lockutils [None req-8887088a-fb16-4764-9622-7e6fc9f1fdb6 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.734 2 DEBUG oslo_concurrency.lockutils [None req-8887088a-fb16-4764-9622-7e6fc9f1fdb6 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Acquiring lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.734 2 DEBUG oslo_concurrency.lockutils [None req-8887088a-fb16-4764-9622-7e6fc9f1fdb6 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.735 2 INFO nova.compute.manager [None req-8887088a-fb16-4764-9622-7e6fc9f1fdb6 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Attaching volume 2e6d71e0-7f14-4121-a3e7-9afc1ce6864b to /dev/vdb#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.910 2 DEBUG os_brick.utils [None req-8887088a-fb16-4764-9622-7e6fc9f1fdb6 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.911 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.923 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.924 741 DEBUG oslo.privsep.daemon [-] privsep: reply[4c7f641f-a209-4b75-be7b-d76e3cde450b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.925 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.935 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.936 741 DEBUG oslo.privsep.daemon [-] privsep: reply[cd65a6bb-bdc1-4e19-8687-d0d720693db5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.937 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.952 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.952 741 DEBUG oslo.privsep.daemon [-] privsep: reply[64bc6fce-121b-4658-93bb-3f8ec5b4bdd5]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.953 741 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad065d1-dd74-479a-92ae-7c94e6f45a5d]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.954 2 DEBUG oslo_concurrency.processutils [None req-8887088a-fb16-4764-9622-7e6fc9f1fdb6 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.980 2 DEBUG oslo_concurrency.processutils [None req-8887088a-fb16-4764-9622-7e6fc9f1fdb6 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.983 2 DEBUG os_brick.initiator.connectors.lightos [None req-8887088a-fb16-4764-9622-7e6fc9f1fdb6 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.984 2 DEBUG os_brick.initiator.connectors.lightos [None req-8887088a-fb16-4764-9622-7e6fc9f1fdb6 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.984 2 DEBUG os_brick.initiator.connectors.lightos [None req-8887088a-fb16-4764-9622-7e6fc9f1fdb6 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.985 2 DEBUG os_brick.utils [None req-8887088a-fb16-4764-9622-7e6fc9f1fdb6 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] <== get_connector_properties: return (74ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 12:55:11 np0005464891 nova_compute[259907]: 2025-10-01 16:55:11.985 2 DEBUG nova.virt.block_device [None req-8887088a-fb16-4764-9622-7e6fc9f1fdb6 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Updating existing volume attachment record: 2487ada2-acfa-4cd4-85e7-4433a5661991 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 12:55:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:55:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:55:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:55:12
Oct  1 12:55:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:55:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:55:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['default.rgw.control', 'vms', '.mgr', 'volumes', 'images', 'default.rgw.meta', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data']
Oct  1 12:55:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:55:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:55:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:55:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:55:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:55:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:55:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:55:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:55:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:55:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:55:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:55:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:55:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:55:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:55:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:55:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:12.457 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:12.458 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:12.459 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:55:12 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2151174807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:55:12 np0005464891 nova_compute[259907]: 2025-10-01 16:55:12.664 2 DEBUG nova.objects.instance [None req-8887088a-fb16-4764-9622-7e6fc9f1fdb6 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lazy-loading 'flavor' on Instance uuid eef473c3-8fff-4cd4-a5f8-ef9b89b7439a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:55:12 np0005464891 nova_compute[259907]: 2025-10-01 16:55:12.683 2 DEBUG nova.virt.libvirt.driver [None req-8887088a-fb16-4764-9622-7e6fc9f1fdb6 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Attempting to attach volume 2e6d71e0-7f14-4121-a3e7-9afc1ce6864b with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  1 12:55:12 np0005464891 nova_compute[259907]: 2025-10-01 16:55:12.685 2 DEBUG nova.virt.libvirt.guest [None req-8887088a-fb16-4764-9622-7e6fc9f1fdb6 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] attach device xml: <disk type="network" device="disk">
Oct  1 12:55:12 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:55:12 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-2e6d71e0-7f14-4121-a3e7-9afc1ce6864b">
Oct  1 12:55:12 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:55:12 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:55:12 np0005464891 nova_compute[259907]:  <auth username="openstack">
Oct  1 12:55:12 np0005464891 nova_compute[259907]:    <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:55:12 np0005464891 nova_compute[259907]:  </auth>
Oct  1 12:55:12 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:55:12 np0005464891 nova_compute[259907]:  <serial>2e6d71e0-7f14-4121-a3e7-9afc1ce6864b</serial>
Oct  1 12:55:12 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:55:12 np0005464891 nova_compute[259907]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  1 12:55:12 np0005464891 nova_compute[259907]: 2025-10-01 16:55:12.863 2 DEBUG nova.virt.libvirt.driver [None req-8887088a-fb16-4764-9622-7e6fc9f1fdb6 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:55:12 np0005464891 nova_compute[259907]: 2025-10-01 16:55:12.864 2 DEBUG nova.virt.libvirt.driver [None req-8887088a-fb16-4764-9622-7e6fc9f1fdb6 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:55:12 np0005464891 nova_compute[259907]: 2025-10-01 16:55:12.864 2 DEBUG nova.virt.libvirt.driver [None req-8887088a-fb16-4764-9622-7e6fc9f1fdb6 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:55:12 np0005464891 nova_compute[259907]: 2025-10-01 16:55:12.865 2 DEBUG nova.virt.libvirt.driver [None req-8887088a-fb16-4764-9622-7e6fc9f1fdb6 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] No VIF found with MAC fa:16:3e:b5:a9:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:55:12 np0005464891 podman[291622]: 2025-10-01 16:55:12.952395197 +0000 UTC m=+0.065671415 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  1 12:55:13 np0005464891 nova_compute[259907]: 2025-10-01 16:55:13.040 2 DEBUG oslo_concurrency.lockutils [None req-8887088a-fb16-4764-9622-7e6fc9f1fdb6 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:55:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/73812634' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:55:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:55:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/73812634' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:55:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1611: 321 pgs: 321 active+clean; 170 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 89 KiB/s wr, 53 op/s
Oct  1 12:55:14 np0005464891 nova_compute[259907]: 2025-10-01 16:55:14.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:14 np0005464891 nova_compute[259907]: 2025-10-01 16:55:14.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1612: 321 pgs: 321 active+clean; 171 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 181 KiB/s rd, 159 KiB/s wr, 102 op/s
Oct  1 12:55:15 np0005464891 nova_compute[259907]: 2025-10-01 16:55:15.855 2 DEBUG oslo_concurrency.lockutils [None req-913cb5bf-f90f-4208-9f41-1d4b6828c242 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Acquiring lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:15 np0005464891 nova_compute[259907]: 2025-10-01 16:55:15.855 2 DEBUG oslo_concurrency.lockutils [None req-913cb5bf-f90f-4208-9f41-1d4b6828c242 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:15 np0005464891 nova_compute[259907]: 2025-10-01 16:55:15.872 2 INFO nova.compute.manager [None req-913cb5bf-f90f-4208-9f41-1d4b6828c242 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Detaching volume 2e6d71e0-7f14-4121-a3e7-9afc1ce6864b#033[00m
Oct  1 12:55:15 np0005464891 nova_compute[259907]: 2025-10-01 16:55:15.973 2 INFO nova.virt.block_device [None req-913cb5bf-f90f-4208-9f41-1d4b6828c242 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Attempting to driver detach volume 2e6d71e0-7f14-4121-a3e7-9afc1ce6864b from mountpoint /dev/vdb#033[00m
Oct  1 12:55:15 np0005464891 nova_compute[259907]: 2025-10-01 16:55:15.983 2 DEBUG nova.virt.libvirt.driver [None req-913cb5bf-f90f-4208-9f41-1d4b6828c242 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Attempting to detach device vdb from instance eef473c3-8fff-4cd4-a5f8-ef9b89b7439a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  1 12:55:15 np0005464891 nova_compute[259907]: 2025-10-01 16:55:15.983 2 DEBUG nova.virt.libvirt.guest [None req-913cb5bf-f90f-4208-9f41-1d4b6828c242 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:55:15 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:55:15 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-2e6d71e0-7f14-4121-a3e7-9afc1ce6864b">
Oct  1 12:55:15 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:55:15 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:55:15 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:55:15 np0005464891 nova_compute[259907]:  <serial>2e6d71e0-7f14-4121-a3e7-9afc1ce6864b</serial>
Oct  1 12:55:15 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:55:15 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:55:15 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:55:15 np0005464891 nova_compute[259907]: 2025-10-01 16:55:15.990 2 INFO nova.virt.libvirt.driver [None req-913cb5bf-f90f-4208-9f41-1d4b6828c242 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Successfully detached device vdb from instance eef473c3-8fff-4cd4-a5f8-ef9b89b7439a from the persistent domain config.#033[00m
Oct  1 12:55:15 np0005464891 nova_compute[259907]: 2025-10-01 16:55:15.991 2 DEBUG nova.virt.libvirt.driver [None req-913cb5bf-f90f-4208-9f41-1d4b6828c242 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance eef473c3-8fff-4cd4-a5f8-ef9b89b7439a from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  1 12:55:15 np0005464891 nova_compute[259907]: 2025-10-01 16:55:15.991 2 DEBUG nova.virt.libvirt.guest [None req-913cb5bf-f90f-4208-9f41-1d4b6828c242 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:55:15 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:55:15 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-2e6d71e0-7f14-4121-a3e7-9afc1ce6864b">
Oct  1 12:55:15 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:55:15 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:55:15 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:55:15 np0005464891 nova_compute[259907]:  <serial>2e6d71e0-7f14-4121-a3e7-9afc1ce6864b</serial>
Oct  1 12:55:15 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:55:15 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:55:15 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:55:16 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 12:55:16 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl.cc:1111]
Oct  1 12:55:16 np0005464891 ceph-osd[87649]: ** DB Stats **
Oct  1 12:55:16 np0005464891 ceph-osd[87649]: Uptime(secs): 2400.1 total, 600.0 interval
Oct  1 12:55:16 np0005464891 ceph-osd[87649]: Cumulative writes: 22K writes, 76K keys, 22K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
Oct  1 12:55:16 np0005464891 ceph-osd[87649]: Cumulative WAL: 22K writes, 7898 syncs, 2.83 writes per sync, written: 0.05 GB, 0.02 MB/s
Oct  1 12:55:16 np0005464891 ceph-osd[87649]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 12:55:16 np0005464891 ceph-osd[87649]: Interval writes: 13K writes, 44K keys, 13K commit groups, 1.0 writes per commit group, ingest: 27.32 MB, 0.05 MB/s
Oct  1 12:55:16 np0005464891 ceph-osd[87649]: Interval WAL: 13K writes, 5608 syncs, 2.39 writes per sync, written: 0.03 GB, 0.05 MB/s
Oct  1 12:55:16 np0005464891 ceph-osd[87649]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 12:55:16 np0005464891 nova_compute[259907]: 2025-10-01 16:55:16.065 2 DEBUG nova.virt.libvirt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Received event <DeviceRemovedEvent: 1759337716.0649753, eef473c3-8fff-4cd4-a5f8-ef9b89b7439a => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  1 12:55:16 np0005464891 nova_compute[259907]: 2025-10-01 16:55:16.067 2 DEBUG nova.virt.libvirt.driver [None req-913cb5bf-f90f-4208-9f41-1d4b6828c242 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance eef473c3-8fff-4cd4-a5f8-ef9b89b7439a _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  1 12:55:16 np0005464891 nova_compute[259907]: 2025-10-01 16:55:16.070 2 INFO nova.virt.libvirt.driver [None req-913cb5bf-f90f-4208-9f41-1d4b6828c242 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Successfully detached device vdb from instance eef473c3-8fff-4cd4-a5f8-ef9b89b7439a from the live domain config.#033[00m
Oct  1 12:55:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e390 do_prune osdmap full prune enabled
Oct  1 12:55:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e391 e391: 3 total, 3 up, 3 in
Oct  1 12:55:16 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e391: 3 total, 3 up, 3 in
Oct  1 12:55:16 np0005464891 nova_compute[259907]: 2025-10-01 16:55:16.214 2 DEBUG nova.objects.instance [None req-913cb5bf-f90f-4208-9f41-1d4b6828c242 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lazy-loading 'flavor' on Instance uuid eef473c3-8fff-4cd4-a5f8-ef9b89b7439a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:55:16 np0005464891 nova_compute[259907]: 2025-10-01 16:55:16.268 2 DEBUG oslo_concurrency.lockutils [None req-913cb5bf-f90f-4208-9f41-1d4b6828c242 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.412s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:55:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e391 do_prune osdmap full prune enabled
Oct  1 12:55:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e392 e392: 3 total, 3 up, 3 in
Oct  1 12:55:16 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e392: 3 total, 3 up, 3 in
Oct  1 12:55:16 np0005464891 nova_compute[259907]: 2025-10-01 16:55:16.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1615: 321 pgs: 321 active+clean; 171 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 179 KiB/s rd, 98 KiB/s wr, 83 op/s
Oct  1 12:55:17 np0005464891 podman[291642]: 2025-10-01 16:55:17.975679548 +0000 UTC m=+0.081179215 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  1 12:55:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e392 do_prune osdmap full prune enabled
Oct  1 12:55:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e393 e393: 3 total, 3 up, 3 in
Oct  1 12:55:18 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e393: 3 total, 3 up, 3 in
Oct  1 12:55:18 np0005464891 nova_compute[259907]: 2025-10-01 16:55:18.953 2 DEBUG oslo_concurrency.lockutils [None req-51881fb1-fd69-4baa-bfce-0164db955ac5 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Acquiring lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:18 np0005464891 nova_compute[259907]: 2025-10-01 16:55:18.953 2 DEBUG oslo_concurrency.lockutils [None req-51881fb1-fd69-4baa-bfce-0164db955ac5 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:18 np0005464891 nova_compute[259907]: 2025-10-01 16:55:18.969 2 DEBUG nova.objects.instance [None req-51881fb1-fd69-4baa-bfce-0164db955ac5 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lazy-loading 'flavor' on Instance uuid eef473c3-8fff-4cd4-a5f8-ef9b89b7439a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:55:19 np0005464891 nova_compute[259907]: 2025-10-01 16:55:19.000 2 DEBUG oslo_concurrency.lockutils [None req-51881fb1-fd69-4baa-bfce-0164db955ac5 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1617: 321 pgs: 321 active+clean; 171 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 174 KiB/s rd, 94 KiB/s wr, 73 op/s
Oct  1 12:55:19 np0005464891 nova_compute[259907]: 2025-10-01 16:55:19.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:19 np0005464891 nova_compute[259907]: 2025-10-01 16:55:19.199 2 DEBUG oslo_concurrency.lockutils [None req-51881fb1-fd69-4baa-bfce-0164db955ac5 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Acquiring lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:19 np0005464891 nova_compute[259907]: 2025-10-01 16:55:19.199 2 DEBUG oslo_concurrency.lockutils [None req-51881fb1-fd69-4baa-bfce-0164db955ac5 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:19 np0005464891 nova_compute[259907]: 2025-10-01 16:55:19.200 2 INFO nova.compute.manager [None req-51881fb1-fd69-4baa-bfce-0164db955ac5 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Attaching volume 709ede01-7758-4948-8f10-aaa0eec37fcc to /dev/vdb#033[00m
Oct  1 12:55:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:55:19 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1679439122' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:55:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:55:19 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1679439122' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:55:19 np0005464891 nova_compute[259907]: 2025-10-01 16:55:19.335 2 DEBUG os_brick.utils [None req-51881fb1-fd69-4baa-bfce-0164db955ac5 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 12:55:19 np0005464891 nova_compute[259907]: 2025-10-01 16:55:19.336 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:55:19 np0005464891 nova_compute[259907]: 2025-10-01 16:55:19.353 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:55:19 np0005464891 nova_compute[259907]: 2025-10-01 16:55:19.354 741 DEBUG oslo.privsep.daemon [-] privsep: reply[60ab30e7-7e21-4f14-bf3e-7106ec4a6e64]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:19 np0005464891 nova_compute[259907]: 2025-10-01 16:55:19.355 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:55:19 np0005464891 nova_compute[259907]: 2025-10-01 16:55:19.368 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:55:19 np0005464891 nova_compute[259907]: 2025-10-01 16:55:19.368 741 DEBUG oslo.privsep.daemon [-] privsep: reply[33651efe-3547-432a-bd91-ba6caaac9df8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:19 np0005464891 nova_compute[259907]: 2025-10-01 16:55:19.371 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:55:19 np0005464891 nova_compute[259907]: 2025-10-01 16:55:19.382 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:55:19 np0005464891 nova_compute[259907]: 2025-10-01 16:55:19.382 741 DEBUG oslo.privsep.daemon [-] privsep: reply[58771758-b155-4be0-9642-de5596afa3ff]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:19 np0005464891 nova_compute[259907]: 2025-10-01 16:55:19.384 741 DEBUG oslo.privsep.daemon [-] privsep: reply[f3f1e0b7-a913-44f9-b9d2-8097ea306653]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:19 np0005464891 nova_compute[259907]: 2025-10-01 16:55:19.385 2 DEBUG oslo_concurrency.processutils [None req-51881fb1-fd69-4baa-bfce-0164db955ac5 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:55:19 np0005464891 nova_compute[259907]: 2025-10-01 16:55:19.425 2 DEBUG oslo_concurrency.processutils [None req-51881fb1-fd69-4baa-bfce-0164db955ac5 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] CMD "nvme version" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:55:19 np0005464891 nova_compute[259907]: 2025-10-01 16:55:19.428 2 DEBUG os_brick.initiator.connectors.lightos [None req-51881fb1-fd69-4baa-bfce-0164db955ac5 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 12:55:19 np0005464891 nova_compute[259907]: 2025-10-01 16:55:19.429 2 DEBUG os_brick.initiator.connectors.lightos [None req-51881fb1-fd69-4baa-bfce-0164db955ac5 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 12:55:19 np0005464891 nova_compute[259907]: 2025-10-01 16:55:19.429 2 DEBUG os_brick.initiator.connectors.lightos [None req-51881fb1-fd69-4baa-bfce-0164db955ac5 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 12:55:19 np0005464891 nova_compute[259907]: 2025-10-01 16:55:19.429 2 DEBUG os_brick.utils [None req-51881fb1-fd69-4baa-bfce-0164db955ac5 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] <== get_connector_properties: return (94ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 12:55:19 np0005464891 nova_compute[259907]: 2025-10-01 16:55:19.430 2 DEBUG nova.virt.block_device [None req-51881fb1-fd69-4baa-bfce-0164db955ac5 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Updating existing volume attachment record: ce72ec6f-5fc8-42d2-98ac-d15e32e533f4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 12:55:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:55:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/411824611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:55:20 np0005464891 nova_compute[259907]: 2025-10-01 16:55:20.131 2 DEBUG nova.objects.instance [None req-51881fb1-fd69-4baa-bfce-0164db955ac5 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lazy-loading 'flavor' on Instance uuid eef473c3-8fff-4cd4-a5f8-ef9b89b7439a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:55:20 np0005464891 nova_compute[259907]: 2025-10-01 16:55:20.159 2 DEBUG nova.virt.libvirt.driver [None req-51881fb1-fd69-4baa-bfce-0164db955ac5 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Attempting to attach volume 709ede01-7758-4948-8f10-aaa0eec37fcc with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  1 12:55:20 np0005464891 nova_compute[259907]: 2025-10-01 16:55:20.162 2 DEBUG nova.virt.libvirt.guest [None req-51881fb1-fd69-4baa-bfce-0164db955ac5 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] attach device xml: <disk type="network" device="disk">
Oct  1 12:55:20 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:55:20 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-709ede01-7758-4948-8f10-aaa0eec37fcc">
Oct  1 12:55:20 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:55:20 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:55:20 np0005464891 nova_compute[259907]:  <auth username="openstack">
Oct  1 12:55:20 np0005464891 nova_compute[259907]:    <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:55:20 np0005464891 nova_compute[259907]:  </auth>
Oct  1 12:55:20 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:55:20 np0005464891 nova_compute[259907]:  <serial>709ede01-7758-4948-8f10-aaa0eec37fcc</serial>
Oct  1 12:55:20 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:55:20 np0005464891 nova_compute[259907]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  1 12:55:20 np0005464891 nova_compute[259907]: 2025-10-01 16:55:20.295 2 DEBUG nova.virt.libvirt.driver [None req-51881fb1-fd69-4baa-bfce-0164db955ac5 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:55:20 np0005464891 nova_compute[259907]: 2025-10-01 16:55:20.296 2 DEBUG nova.virt.libvirt.driver [None req-51881fb1-fd69-4baa-bfce-0164db955ac5 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:55:20 np0005464891 nova_compute[259907]: 2025-10-01 16:55:20.296 2 DEBUG nova.virt.libvirt.driver [None req-51881fb1-fd69-4baa-bfce-0164db955ac5 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:55:20 np0005464891 nova_compute[259907]: 2025-10-01 16:55:20.297 2 DEBUG nova.virt.libvirt.driver [None req-51881fb1-fd69-4baa-bfce-0164db955ac5 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] No VIF found with MAC fa:16:3e:b5:a9:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:55:20 np0005464891 nova_compute[259907]: 2025-10-01 16:55:20.476 2 DEBUG oslo_concurrency.lockutils [None req-51881fb1-fd69-4baa-bfce-0164db955ac5 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.277s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1618: 321 pgs: 321 active+clean; 171 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 23 KiB/s wr, 94 op/s
Oct  1 12:55:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:55:21 np0005464891 nova_compute[259907]: 2025-10-01 16:55:21.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:21 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 12:55:21 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 21K writes, 77K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 21K writes, 7365 syncs, 2.90 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 12K writes, 43K keys, 12K commit groups, 1.0 writes per commit group, ingest: 27.49 MB, 0.05 MB/s#012Interval WAL: 12K writes, 5239 syncs, 2.38 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007620710425213258 of space, bias 1.0, pg target 0.22862131275639774 quantized to 32 (current 32)
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003835791495734844 of space, bias 1.0, pg target 0.11507374487204532 quantized to 32 (current 32)
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:55:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:55:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:55:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:55:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:55:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:55:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 4f30f86f-818b-4523-a7b6-2d3b149099f7 does not exist
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 13f9d3dc-f7b5-4bd1-9d16-96279a37af5e does not exist
Oct  1 12:55:22 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 3c54f1d9-7f51-446b-ba9c-3d9fad9bc8ac does not exist
Oct  1 12:55:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:55:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:55:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:55:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:55:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:55:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:55:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e393 do_prune osdmap full prune enabled
Oct  1 12:55:22 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:55:22 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:55:22 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:55:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e394 e394: 3 total, 3 up, 3 in
Oct  1 12:55:22 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e394: 3 total, 3 up, 3 in
Oct  1 12:55:22 np0005464891 podman[291961]: 2025-10-01 16:55:22.967144241 +0000 UTC m=+0.082036597 container create d9912ee98cdaffae126effa2f1d4620b9065c7d5689dc03aada43e8824bda879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mendeleev, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 12:55:23 np0005464891 podman[291961]: 2025-10-01 16:55:22.917058328 +0000 UTC m=+0.031950694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:55:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1620: 321 pgs: 321 active+clean; 171 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 25 KiB/s wr, 99 op/s
Oct  1 12:55:23 np0005464891 systemd[1]: Started libpod-conmon-d9912ee98cdaffae126effa2f1d4620b9065c7d5689dc03aada43e8824bda879.scope.
Oct  1 12:55:23 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:55:23 np0005464891 podman[291961]: 2025-10-01 16:55:23.371360064 +0000 UTC m=+0.486252450 container init d9912ee98cdaffae126effa2f1d4620b9065c7d5689dc03aada43e8824bda879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:55:23 np0005464891 podman[291961]: 2025-10-01 16:55:23.379190855 +0000 UTC m=+0.494083201 container start d9912ee98cdaffae126effa2f1d4620b9065c7d5689dc03aada43e8824bda879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mendeleev, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:55:23 np0005464891 suspicious_mendeleev[291977]: 167 167
Oct  1 12:55:23 np0005464891 systemd[1]: libpod-d9912ee98cdaffae126effa2f1d4620b9065c7d5689dc03aada43e8824bda879.scope: Deactivated successfully.
Oct  1 12:55:23 np0005464891 conmon[291977]: conmon d9912ee98cdaffae126e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d9912ee98cdaffae126effa2f1d4620b9065c7d5689dc03aada43e8824bda879.scope/container/memory.events
Oct  1 12:55:23 np0005464891 podman[291961]: 2025-10-01 16:55:23.404216312 +0000 UTC m=+0.519108668 container attach d9912ee98cdaffae126effa2f1d4620b9065c7d5689dc03aada43e8824bda879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mendeleev, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:55:23 np0005464891 podman[291961]: 2025-10-01 16:55:23.40561279 +0000 UTC m=+0.520505146 container died d9912ee98cdaffae126effa2f1d4620b9065c7d5689dc03aada43e8824bda879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  1 12:55:23 np0005464891 systemd[1]: var-lib-containers-storage-overlay-cb6565c033a5f2f5d59a99052aa2b928ac35656e4d8de19628ef3d9d7c32abbf-merged.mount: Deactivated successfully.
Oct  1 12:55:23 np0005464891 podman[291961]: 2025-10-01 16:55:23.46891909 +0000 UTC m=+0.583811436 container remove d9912ee98cdaffae126effa2f1d4620b9065c7d5689dc03aada43e8824bda879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:55:23 np0005464891 systemd[1]: libpod-conmon-d9912ee98cdaffae126effa2f1d4620b9065c7d5689dc03aada43e8824bda879.scope: Deactivated successfully.
Oct  1 12:55:23 np0005464891 podman[292002]: 2025-10-01 16:55:23.692734078 +0000 UTC m=+0.053916398 container create 581cf53c1ab7e7afe44f9f3cd4fa036eabb19a908f05a9b76aed5c3ca5e0a30e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_taussig, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:55:23 np0005464891 systemd[1]: Started libpod-conmon-581cf53c1ab7e7afe44f9f3cd4fa036eabb19a908f05a9b76aed5c3ca5e0a30e.scope.
Oct  1 12:55:23 np0005464891 podman[292002]: 2025-10-01 16:55:23.667014583 +0000 UTC m=+0.028196983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:55:23 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:55:23 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e1a3ca980182410797082a52039ba0c4ff5b34b8ea270ec917bafa3dbf99f03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:55:23 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e1a3ca980182410797082a52039ba0c4ff5b34b8ea270ec917bafa3dbf99f03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:55:23 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e1a3ca980182410797082a52039ba0c4ff5b34b8ea270ec917bafa3dbf99f03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:55:23 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e1a3ca980182410797082a52039ba0c4ff5b34b8ea270ec917bafa3dbf99f03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:55:23 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e1a3ca980182410797082a52039ba0c4ff5b34b8ea270ec917bafa3dbf99f03/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:55:23 np0005464891 podman[292002]: 2025-10-01 16:55:23.795184226 +0000 UTC m=+0.156366576 container init 581cf53c1ab7e7afe44f9f3cd4fa036eabb19a908f05a9b76aed5c3ca5e0a30e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_taussig, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:55:23 np0005464891 podman[292002]: 2025-10-01 16:55:23.817123978 +0000 UTC m=+0.178306338 container start 581cf53c1ab7e7afe44f9f3cd4fa036eabb19a908f05a9b76aed5c3ca5e0a30e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_taussig, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  1 12:55:23 np0005464891 podman[292002]: 2025-10-01 16:55:23.824086107 +0000 UTC m=+0.185268467 container attach 581cf53c1ab7e7afe44f9f3cd4fa036eabb19a908f05a9b76aed5c3ca5e0a30e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_taussig, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 12:55:23 np0005464891 nova_compute[259907]: 2025-10-01 16:55:23.907 2 DEBUG oslo_concurrency.lockutils [None req-2d9a10e2-6a0b-4cdc-9791-3c7a12c9e943 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Acquiring lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:23 np0005464891 nova_compute[259907]: 2025-10-01 16:55:23.909 2 DEBUG oslo_concurrency.lockutils [None req-2d9a10e2-6a0b-4cdc-9791-3c7a12c9e943 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:23 np0005464891 nova_compute[259907]: 2025-10-01 16:55:23.932 2 INFO nova.compute.manager [None req-2d9a10e2-6a0b-4cdc-9791-3c7a12c9e943 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Detaching volume 709ede01-7758-4948-8f10-aaa0eec37fcc#033[00m
Oct  1 12:55:24 np0005464891 nova_compute[259907]: 2025-10-01 16:55:24.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:24 np0005464891 nova_compute[259907]: 2025-10-01 16:55:24.137 2 INFO nova.virt.block_device [None req-2d9a10e2-6a0b-4cdc-9791-3c7a12c9e943 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Attempting to driver detach volume 709ede01-7758-4948-8f10-aaa0eec37fcc from mountpoint /dev/vdb#033[00m
Oct  1 12:55:24 np0005464891 nova_compute[259907]: 2025-10-01 16:55:24.150 2 DEBUG nova.virt.libvirt.driver [None req-2d9a10e2-6a0b-4cdc-9791-3c7a12c9e943 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Attempting to detach device vdb from instance eef473c3-8fff-4cd4-a5f8-ef9b89b7439a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  1 12:55:24 np0005464891 nova_compute[259907]: 2025-10-01 16:55:24.150 2 DEBUG nova.virt.libvirt.guest [None req-2d9a10e2-6a0b-4cdc-9791-3c7a12c9e943 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:55:24 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:55:24 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-709ede01-7758-4948-8f10-aaa0eec37fcc">
Oct  1 12:55:24 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:55:24 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:55:24 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:55:24 np0005464891 nova_compute[259907]:  <serial>709ede01-7758-4948-8f10-aaa0eec37fcc</serial>
Oct  1 12:55:24 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:55:24 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:55:24 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:55:24 np0005464891 nova_compute[259907]: 2025-10-01 16:55:24.162 2 INFO nova.virt.libvirt.driver [None req-2d9a10e2-6a0b-4cdc-9791-3c7a12c9e943 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Successfully detached device vdb from instance eef473c3-8fff-4cd4-a5f8-ef9b89b7439a from the persistent domain config.#033[00m
Oct  1 12:55:24 np0005464891 nova_compute[259907]: 2025-10-01 16:55:24.163 2 DEBUG nova.virt.libvirt.driver [None req-2d9a10e2-6a0b-4cdc-9791-3c7a12c9e943 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance eef473c3-8fff-4cd4-a5f8-ef9b89b7439a from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  1 12:55:24 np0005464891 nova_compute[259907]: 2025-10-01 16:55:24.163 2 DEBUG nova.virt.libvirt.guest [None req-2d9a10e2-6a0b-4cdc-9791-3c7a12c9e943 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:55:24 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:55:24 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-709ede01-7758-4948-8f10-aaa0eec37fcc">
Oct  1 12:55:24 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:55:24 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:55:24 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:55:24 np0005464891 nova_compute[259907]:  <serial>709ede01-7758-4948-8f10-aaa0eec37fcc</serial>
Oct  1 12:55:24 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:55:24 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:55:24 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:55:24 np0005464891 nova_compute[259907]: 2025-10-01 16:55:24.302 2 DEBUG nova.virt.libvirt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Received event <DeviceRemovedEvent: 1759337724.3013525, eef473c3-8fff-4cd4-a5f8-ef9b89b7439a => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  1 12:55:24 np0005464891 nova_compute[259907]: 2025-10-01 16:55:24.306 2 DEBUG nova.virt.libvirt.driver [None req-2d9a10e2-6a0b-4cdc-9791-3c7a12c9e943 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance eef473c3-8fff-4cd4-a5f8-ef9b89b7439a _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  1 12:55:24 np0005464891 nova_compute[259907]: 2025-10-01 16:55:24.309 2 INFO nova.virt.libvirt.driver [None req-2d9a10e2-6a0b-4cdc-9791-3c7a12c9e943 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Successfully detached device vdb from instance eef473c3-8fff-4cd4-a5f8-ef9b89b7439a from the live domain config.#033[00m
Oct  1 12:55:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e394 do_prune osdmap full prune enabled
Oct  1 12:55:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e395 e395: 3 total, 3 up, 3 in
Oct  1 12:55:24 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e395: 3 total, 3 up, 3 in
Oct  1 12:55:24 np0005464891 nova_compute[259907]: 2025-10-01 16:55:24.496 2 DEBUG nova.objects.instance [None req-2d9a10e2-6a0b-4cdc-9791-3c7a12c9e943 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lazy-loading 'flavor' on Instance uuid eef473c3-8fff-4cd4-a5f8-ef9b89b7439a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:55:24 np0005464891 nova_compute[259907]: 2025-10-01 16:55:24.534 2 DEBUG oslo_concurrency.lockutils [None req-2d9a10e2-6a0b-4cdc-9791-3c7a12c9e943 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:24 np0005464891 confident_taussig[292018]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:55:24 np0005464891 confident_taussig[292018]: --> relative data size: 1.0
Oct  1 12:55:24 np0005464891 confident_taussig[292018]: --> All data devices are unavailable
Oct  1 12:55:24 np0005464891 systemd[1]: libpod-581cf53c1ab7e7afe44f9f3cd4fa036eabb19a908f05a9b76aed5c3ca5e0a30e.scope: Deactivated successfully.
Oct  1 12:55:24 np0005464891 systemd[1]: libpod-581cf53c1ab7e7afe44f9f3cd4fa036eabb19a908f05a9b76aed5c3ca5e0a30e.scope: Consumed 1.012s CPU time.
Oct  1 12:55:24 np0005464891 podman[292002]: 2025-10-01 16:55:24.891851269 +0000 UTC m=+1.253033629 container died 581cf53c1ab7e7afe44f9f3cd4fa036eabb19a908f05a9b76aed5c3ca5e0a30e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:55:24 np0005464891 systemd[1]: var-lib-containers-storage-overlay-3e1a3ca980182410797082a52039ba0c4ff5b34b8ea270ec917bafa3dbf99f03-merged.mount: Deactivated successfully.
Oct  1 12:55:24 np0005464891 podman[292002]: 2025-10-01 16:55:24.951871011 +0000 UTC m=+1.313053351 container remove 581cf53c1ab7e7afe44f9f3cd4fa036eabb19a908f05a9b76aed5c3ca5e0a30e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_taussig, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 12:55:24 np0005464891 systemd[1]: libpod-conmon-581cf53c1ab7e7afe44f9f3cd4fa036eabb19a908f05a9b76aed5c3ca5e0a30e.scope: Deactivated successfully.
Oct  1 12:55:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1622: 321 pgs: 321 active+clean; 172 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 206 KiB/s rd, 162 KiB/s wr, 139 op/s
Oct  1 12:55:25 np0005464891 podman[292205]: 2025-10-01 16:55:25.639886181 +0000 UTC m=+0.056120477 container create 99853c8fe920dbf1c9dee38929289157d82afded523eb21fa2d5aefde14f727e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_liskov, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 12:55:25 np0005464891 systemd[1]: Started libpod-conmon-99853c8fe920dbf1c9dee38929289157d82afded523eb21fa2d5aefde14f727e.scope.
Oct  1 12:55:25 np0005464891 podman[292205]: 2025-10-01 16:55:25.614341611 +0000 UTC m=+0.030575927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:55:25 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:55:25 np0005464891 podman[292205]: 2025-10-01 16:55:25.745382482 +0000 UTC m=+0.161616858 container init 99853c8fe920dbf1c9dee38929289157d82afded523eb21fa2d5aefde14f727e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_liskov, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:55:25 np0005464891 podman[292205]: 2025-10-01 16:55:25.755092394 +0000 UTC m=+0.171326710 container start 99853c8fe920dbf1c9dee38929289157d82afded523eb21fa2d5aefde14f727e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 12:55:25 np0005464891 nice_liskov[292221]: 167 167
Oct  1 12:55:25 np0005464891 systemd[1]: libpod-99853c8fe920dbf1c9dee38929289157d82afded523eb21fa2d5aefde14f727e.scope: Deactivated successfully.
Oct  1 12:55:25 np0005464891 podman[292205]: 2025-10-01 16:55:25.76306596 +0000 UTC m=+0.179300306 container attach 99853c8fe920dbf1c9dee38929289157d82afded523eb21fa2d5aefde14f727e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_liskov, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:55:25 np0005464891 podman[292205]: 2025-10-01 16:55:25.763758848 +0000 UTC m=+0.179993154 container died 99853c8fe920dbf1c9dee38929289157d82afded523eb21fa2d5aefde14f727e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 12:55:25 np0005464891 systemd[1]: var-lib-containers-storage-overlay-2c4c146fca1e193ba0fa7644334f1d0c8b84904945f3b06420080d04dc7625bc-merged.mount: Deactivated successfully.
Oct  1 12:55:25 np0005464891 podman[292205]: 2025-10-01 16:55:25.80859977 +0000 UTC m=+0.224834076 container remove 99853c8fe920dbf1c9dee38929289157d82afded523eb21fa2d5aefde14f727e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_liskov, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:55:25 np0005464891 systemd[1]: libpod-conmon-99853c8fe920dbf1c9dee38929289157d82afded523eb21fa2d5aefde14f727e.scope: Deactivated successfully.
Oct  1 12:55:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:55:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3558581798' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:55:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:55:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3558581798' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:55:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:55:26 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/231328018' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:55:26 np0005464891 podman[292245]: 2025-10-01 16:55:26.042680095 +0000 UTC m=+0.077315570 container create 691075088ca3d5efe293bb229e49d04ebeff9f94b084a398736645e5aeb03ffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wright, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 12:55:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:55:26 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/231328018' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:55:26 np0005464891 podman[292245]: 2025-10-01 16:55:26.010045833 +0000 UTC m=+0.044681358 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:55:26 np0005464891 systemd[1]: Started libpod-conmon-691075088ca3d5efe293bb229e49d04ebeff9f94b084a398736645e5aeb03ffd.scope.
Oct  1 12:55:26 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:55:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46f0ecaddccc5b698d59cd7248f3fced3637676c38a837410a13c7ca829fcf51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:55:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46f0ecaddccc5b698d59cd7248f3fced3637676c38a837410a13c7ca829fcf51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:55:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46f0ecaddccc5b698d59cd7248f3fced3637676c38a837410a13c7ca829fcf51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:55:26 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46f0ecaddccc5b698d59cd7248f3fced3637676c38a837410a13c7ca829fcf51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:55:26 np0005464891 podman[292245]: 2025-10-01 16:55:26.159262776 +0000 UTC m=+0.193898261 container init 691075088ca3d5efe293bb229e49d04ebeff9f94b084a398736645e5aeb03ffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 12:55:26 np0005464891 podman[292245]: 2025-10-01 16:55:26.172496233 +0000 UTC m=+0.207131668 container start 691075088ca3d5efe293bb229e49d04ebeff9f94b084a398736645e5aeb03ffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 12:55:26 np0005464891 podman[292245]: 2025-10-01 16:55:26.175650778 +0000 UTC m=+0.210286223 container attach 691075088ca3d5efe293bb229e49d04ebeff9f94b084a398736645e5aeb03ffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wright, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 12:55:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:55:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e395 do_prune osdmap full prune enabled
Oct  1 12:55:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e396 e396: 3 total, 3 up, 3 in
Oct  1 12:55:26 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e396: 3 total, 3 up, 3 in
Oct  1 12:55:26 np0005464891 nova_compute[259907]: 2025-10-01 16:55:26.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]: {
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:    "0": [
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:        {
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "devices": [
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "/dev/loop3"
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            ],
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "lv_name": "ceph_lv0",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "lv_size": "21470642176",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "name": "ceph_lv0",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "tags": {
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.cluster_name": "ceph",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.crush_device_class": "",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.encrypted": "0",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.osd_id": "0",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.type": "block",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.vdo": "0"
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            },
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "type": "block",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "vg_name": "ceph_vg0"
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:        }
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:    ],
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:    "1": [
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:        {
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "devices": [
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "/dev/loop4"
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            ],
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "lv_name": "ceph_lv1",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "lv_size": "21470642176",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "name": "ceph_lv1",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "tags": {
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.cluster_name": "ceph",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.crush_device_class": "",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.encrypted": "0",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.osd_id": "1",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.type": "block",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.vdo": "0"
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            },
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "type": "block",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "vg_name": "ceph_vg1"
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:        }
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:    ],
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:    "2": [
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:        {
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "devices": [
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "/dev/loop5"
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            ],
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "lv_name": "ceph_lv2",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "lv_size": "21470642176",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "name": "ceph_lv2",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "tags": {
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.cluster_name": "ceph",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.crush_device_class": "",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.encrypted": "0",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.osd_id": "2",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.type": "block",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:                "ceph.vdo": "0"
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            },
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "type": "block",
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:            "vg_name": "ceph_vg2"
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:        }
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]:    ]
Oct  1 12:55:27 np0005464891 wizardly_wright[292261]: }
Oct  1 12:55:27 np0005464891 systemd[1]: libpod-691075088ca3d5efe293bb229e49d04ebeff9f94b084a398736645e5aeb03ffd.scope: Deactivated successfully.
Oct  1 12:55:27 np0005464891 podman[292245]: 2025-10-01 16:55:27.042376498 +0000 UTC m=+1.077011953 container died 691075088ca3d5efe293bb229e49d04ebeff9f94b084a398736645e5aeb03ffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wright, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:55:27 np0005464891 nova_compute[259907]: 2025-10-01 16:55:27.052 2 DEBUG oslo_concurrency.lockutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "c6b5a948-5763-4847-a02b-6010ab49c3da" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:27 np0005464891 nova_compute[259907]: 2025-10-01 16:55:27.054 2 DEBUG oslo_concurrency.lockutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "c6b5a948-5763-4847-a02b-6010ab49c3da" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:27 np0005464891 systemd[1]: var-lib-containers-storage-overlay-46f0ecaddccc5b698d59cd7248f3fced3637676c38a837410a13c7ca829fcf51-merged.mount: Deactivated successfully.
Oct  1 12:55:27 np0005464891 nova_compute[259907]: 2025-10-01 16:55:27.070 2 DEBUG nova.compute.manager [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 12:55:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1624: 321 pgs: 321 active+clean; 172 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 169 KiB/s rd, 161 KiB/s wr, 71 op/s
Oct  1 12:55:27 np0005464891 podman[292245]: 2025-10-01 16:55:27.096891721 +0000 UTC m=+1.131527156 container remove 691075088ca3d5efe293bb229e49d04ebeff9f94b084a398736645e5aeb03ffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wright, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:55:27 np0005464891 systemd[1]: libpod-conmon-691075088ca3d5efe293bb229e49d04ebeff9f94b084a398736645e5aeb03ffd.scope: Deactivated successfully.
Oct  1 12:55:27 np0005464891 nova_compute[259907]: 2025-10-01 16:55:27.133 2 DEBUG oslo_concurrency.lockutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:27 np0005464891 nova_compute[259907]: 2025-10-01 16:55:27.133 2 DEBUG oslo_concurrency.lockutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:27 np0005464891 nova_compute[259907]: 2025-10-01 16:55:27.142 2 DEBUG nova.virt.hardware [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 12:55:27 np0005464891 nova_compute[259907]: 2025-10-01 16:55:27.143 2 INFO nova.compute.claims [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 12:55:27 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 12:55:27 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.3 total, 600.0 interval#012Cumulative writes: 18K writes, 68K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 18K writes, 5951 syncs, 3.04 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 25.91 MB, 0.04 MB/s#012Interval WAL: 10K writes, 4344 syncs, 2.45 writes per sync, written: 0.03 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 12:55:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:55:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/528214204' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:55:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:55:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/528214204' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:55:27 np0005464891 nova_compute[259907]: 2025-10-01 16:55:27.418 2 DEBUG oslo_concurrency.processutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:55:27 np0005464891 podman[292444]: 2025-10-01 16:55:27.767590594 +0000 UTC m=+0.049744685 container create 106e8b88c79889f2f0a3cf11ea5804f66120811fd73c8ddcdb5a4f60aa6848e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  1 12:55:27 np0005464891 systemd[1]: Started libpod-conmon-106e8b88c79889f2f0a3cf11ea5804f66120811fd73c8ddcdb5a4f60aa6848e0.scope.
Oct  1 12:55:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:55:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3429761563' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:55:27 np0005464891 podman[292444]: 2025-10-01 16:55:27.737366637 +0000 UTC m=+0.019520738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:55:27 np0005464891 nova_compute[259907]: 2025-10-01 16:55:27.853 2 DEBUG oslo_concurrency.processutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:55:27 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:55:27 np0005464891 nova_compute[259907]: 2025-10-01 16:55:27.860 2 DEBUG nova.compute.provider_tree [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:55:27 np0005464891 podman[292444]: 2025-10-01 16:55:27.900356351 +0000 UTC m=+0.182510452 container init 106e8b88c79889f2f0a3cf11ea5804f66120811fd73c8ddcdb5a4f60aa6848e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 12:55:27 np0005464891 nova_compute[259907]: 2025-10-01 16:55:27.905 2 DEBUG nova.scheduler.client.report [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:55:27 np0005464891 podman[292444]: 2025-10-01 16:55:27.907859284 +0000 UTC m=+0.190013345 container start 106e8b88c79889f2f0a3cf11ea5804f66120811fd73c8ddcdb5a4f60aa6848e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 12:55:27 np0005464891 determined_blackburn[292462]: 167 167
Oct  1 12:55:27 np0005464891 systemd[1]: libpod-106e8b88c79889f2f0a3cf11ea5804f66120811fd73c8ddcdb5a4f60aa6848e0.scope: Deactivated successfully.
Oct  1 12:55:27 np0005464891 podman[292444]: 2025-10-01 16:55:27.923545978 +0000 UTC m=+0.205700079 container attach 106e8b88c79889f2f0a3cf11ea5804f66120811fd73c8ddcdb5a4f60aa6848e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 12:55:27 np0005464891 podman[292444]: 2025-10-01 16:55:27.924291928 +0000 UTC m=+0.206445999 container died 106e8b88c79889f2f0a3cf11ea5804f66120811fd73c8ddcdb5a4f60aa6848e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 12:55:27 np0005464891 nova_compute[259907]: 2025-10-01 16:55:27.961 2 DEBUG oslo_concurrency.lockutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:27 np0005464891 nova_compute[259907]: 2025-10-01 16:55:27.962 2 DEBUG nova.compute.manager [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 12:55:27 np0005464891 systemd[1]: var-lib-containers-storage-overlay-bd5b940b9d21d8dc6d374a768114af15bfc204614c6dbc29bc195ba9f3f3a90f-merged.mount: Deactivated successfully.
Oct  1 12:55:28 np0005464891 ceph-mgr[74592]: [devicehealth INFO root] Check health
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.019 2 DEBUG nova.compute.manager [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.019 2 DEBUG nova.network.neutron [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 12:55:28 np0005464891 podman[292444]: 2025-10-01 16:55:28.025668017 +0000 UTC m=+0.307822078 container remove 106e8b88c79889f2f0a3cf11ea5804f66120811fd73c8ddcdb5a4f60aa6848e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:55:28 np0005464891 systemd[1]: libpod-conmon-106e8b88c79889f2f0a3cf11ea5804f66120811fd73c8ddcdb5a4f60aa6848e0.scope: Deactivated successfully.
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.048 2 INFO nova.virt.libvirt.driver [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.075 2 DEBUG nova.compute.manager [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.119 2 INFO nova.virt.block_device [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Booting with volume da20faa4-b3b4-4ffa-aa39-6b9eba1450d7 at /dev/vda#033[00m
Oct  1 12:55:28 np0005464891 podman[292488]: 2025-10-01 16:55:28.224474629 +0000 UTC m=+0.065903382 container create e68a02ecd872d038386ce506a68eb4240d06135511e861594b5f524b36a980cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_saha, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.228 2 DEBUG nova.policy [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1280014cdfb74333ae8d71c78116e646', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8318b65fa88942a99937a0d198a04a9c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.248 2 DEBUG os_brick.utils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.250 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:55:28 np0005464891 systemd[1]: Started libpod-conmon-e68a02ecd872d038386ce506a68eb4240d06135511e861594b5f524b36a980cb.scope.
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.266 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.266 741 DEBUG oslo.privsep.daemon [-] privsep: reply[b3011175-6850-45f2-a715-6ac9a858130f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.267 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.276 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.277 741 DEBUG oslo.privsep.daemon [-] privsep: reply[748c21ab-8b63-460c-8a66-9d5207d3b976]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.278 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:55:28 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:55:28 np0005464891 podman[292488]: 2025-10-01 16:55:28.195335492 +0000 UTC m=+0.036764315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:55:28 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/930f05de75d90a60b0cdceae3aad6b3adb2b27ddfc8ec4fec6ccd8328226baa3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:55:28 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/930f05de75d90a60b0cdceae3aad6b3adb2b27ddfc8ec4fec6ccd8328226baa3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:55:28 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/930f05de75d90a60b0cdceae3aad6b3adb2b27ddfc8ec4fec6ccd8328226baa3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:55:28 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/930f05de75d90a60b0cdceae3aad6b3adb2b27ddfc8ec4fec6ccd8328226baa3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.289 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.290 741 DEBUG oslo.privsep.daemon [-] privsep: reply[b70e149b-a4bd-45d2-808e-6a9af0696e37]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.294 741 DEBUG oslo.privsep.daemon [-] privsep: reply[192afbab-9b69-41e9-80f3-27271d8b6d06]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.295 2 DEBUG oslo_concurrency.processutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:55:28 np0005464891 podman[292488]: 2025-10-01 16:55:28.309441865 +0000 UTC m=+0.150870628 container init e68a02ecd872d038386ce506a68eb4240d06135511e861594b5f524b36a980cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_saha, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 12:55:28 np0005464891 podman[292488]: 2025-10-01 16:55:28.318799917 +0000 UTC m=+0.160228660 container start e68a02ecd872d038386ce506a68eb4240d06135511e861594b5f524b36a980cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_saha, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.318 2 DEBUG oslo_concurrency.processutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:55:28 np0005464891 podman[292488]: 2025-10-01 16:55:28.322828047 +0000 UTC m=+0.164256790 container attach e68a02ecd872d038386ce506a68eb4240d06135511e861594b5f524b36a980cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_saha, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.324 2 DEBUG os_brick.initiator.connectors.lightos [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.325 2 DEBUG os_brick.initiator.connectors.lightos [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.325 2 DEBUG os_brick.initiator.connectors.lightos [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.326 2 DEBUG os_brick.utils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] <== get_connector_properties: return (76ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.326 2 DEBUG nova.virt.block_device [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Updating existing volume attachment record: 77f4ceb8-45fe-4e12-a7a4-2ca512f3bb42 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 12:55:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e396 do_prune osdmap full prune enabled
Oct  1 12:55:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e397 e397: 3 total, 3 up, 3 in
Oct  1 12:55:28 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e397: 3 total, 3 up, 3 in
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:28 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:28.593 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:55:28 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:28.596 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.712 2 DEBUG nova.network.neutron [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Successfully created port: fd73f99f-9c18-4285-83fe-e782392df556 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 12:55:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:55:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1051624263' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:55:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:55:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1051624263' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:55:28 np0005464891 nova_compute[259907]: 2025-10-01 16:55:28.805 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 12:55:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:55:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3566952306' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:55:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1626: 321 pgs: 321 active+clean; 172 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 172 KiB/s rd, 161 KiB/s wr, 77 op/s
Oct  1 12:55:29 np0005464891 nova_compute[259907]: 2025-10-01 16:55:29.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:29 np0005464891 nova_compute[259907]: 2025-10-01 16:55:29.227 2 DEBUG nova.compute.manager [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 12:55:29 np0005464891 nova_compute[259907]: 2025-10-01 16:55:29.230 2 DEBUG nova.virt.libvirt.driver [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 12:55:29 np0005464891 nova_compute[259907]: 2025-10-01 16:55:29.230 2 INFO nova.virt.libvirt.driver [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Creating image(s)#033[00m
Oct  1 12:55:29 np0005464891 nova_compute[259907]: 2025-10-01 16:55:29.231 2 DEBUG nova.virt.libvirt.driver [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  1 12:55:29 np0005464891 nova_compute[259907]: 2025-10-01 16:55:29.232 2 DEBUG nova.virt.libvirt.driver [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Ensure instance console log exists: /var/lib/nova/instances/c6b5a948-5763-4847-a02b-6010ab49c3da/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 12:55:29 np0005464891 nova_compute[259907]: 2025-10-01 16:55:29.233 2 DEBUG oslo_concurrency.lockutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:29 np0005464891 nova_compute[259907]: 2025-10-01 16:55:29.233 2 DEBUG oslo_concurrency.lockutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:29 np0005464891 nova_compute[259907]: 2025-10-01 16:55:29.234 2 DEBUG oslo_concurrency.lockutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:29 np0005464891 elastic_saha[292506]: {
Oct  1 12:55:29 np0005464891 elastic_saha[292506]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:55:29 np0005464891 elastic_saha[292506]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:55:29 np0005464891 elastic_saha[292506]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:55:29 np0005464891 elastic_saha[292506]:        "osd_id": 2,
Oct  1 12:55:29 np0005464891 elastic_saha[292506]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:55:29 np0005464891 elastic_saha[292506]:        "type": "bluestore"
Oct  1 12:55:29 np0005464891 elastic_saha[292506]:    },
Oct  1 12:55:29 np0005464891 elastic_saha[292506]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:55:29 np0005464891 elastic_saha[292506]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:55:29 np0005464891 elastic_saha[292506]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:55:29 np0005464891 elastic_saha[292506]:        "osd_id": 0,
Oct  1 12:55:29 np0005464891 elastic_saha[292506]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:55:29 np0005464891 elastic_saha[292506]:        "type": "bluestore"
Oct  1 12:55:29 np0005464891 elastic_saha[292506]:    },
Oct  1 12:55:29 np0005464891 elastic_saha[292506]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:55:29 np0005464891 elastic_saha[292506]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:55:29 np0005464891 elastic_saha[292506]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:55:29 np0005464891 elastic_saha[292506]:        "osd_id": 1,
Oct  1 12:55:29 np0005464891 elastic_saha[292506]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:55:29 np0005464891 elastic_saha[292506]:        "type": "bluestore"
Oct  1 12:55:29 np0005464891 elastic_saha[292506]:    }
Oct  1 12:55:29 np0005464891 elastic_saha[292506]: }
Oct  1 12:55:29 np0005464891 systemd[1]: libpod-e68a02ecd872d038386ce506a68eb4240d06135511e861594b5f524b36a980cb.scope: Deactivated successfully.
Oct  1 12:55:29 np0005464891 systemd[1]: libpod-e68a02ecd872d038386ce506a68eb4240d06135511e861594b5f524b36a980cb.scope: Consumed 1.071s CPU time.
Oct  1 12:55:29 np0005464891 conmon[292506]: conmon e68a02ecd872d038386c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e68a02ecd872d038386ce506a68eb4240d06135511e861594b5f524b36a980cb.scope/container/memory.events
Oct  1 12:55:29 np0005464891 podman[292488]: 2025-10-01 16:55:29.399754425 +0000 UTC m=+1.241183178 container died e68a02ecd872d038386ce506a68eb4240d06135511e861594b5f524b36a980cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_saha, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:55:29 np0005464891 systemd[1]: var-lib-containers-storage-overlay-930f05de75d90a60b0cdceae3aad6b3adb2b27ddfc8ec4fec6ccd8328226baa3-merged.mount: Deactivated successfully.
Oct  1 12:55:29 np0005464891 podman[292488]: 2025-10-01 16:55:29.479674735 +0000 UTC m=+1.321103478 container remove e68a02ecd872d038386ce506a68eb4240d06135511e861594b5f524b36a980cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_saha, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 12:55:29 np0005464891 systemd[1]: libpod-conmon-e68a02ecd872d038386ce506a68eb4240d06135511e861594b5f524b36a980cb.scope: Deactivated successfully.
Oct  1 12:55:29 np0005464891 nova_compute[259907]: 2025-10-01 16:55:29.513 2 DEBUG nova.network.neutron [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Successfully updated port: fd73f99f-9c18-4285-83fe-e782392df556 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 12:55:29 np0005464891 podman[292544]: 2025-10-01 16:55:29.527313153 +0000 UTC m=+0.099295515 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  1 12:55:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:55:29 np0005464891 nova_compute[259907]: 2025-10-01 16:55:29.569 2 DEBUG oslo_concurrency.lockutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "refresh_cache-c6b5a948-5763-4847-a02b-6010ab49c3da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:55:29 np0005464891 nova_compute[259907]: 2025-10-01 16:55:29.570 2 DEBUG oslo_concurrency.lockutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquired lock "refresh_cache-c6b5a948-5763-4847-a02b-6010ab49c3da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:55:29 np0005464891 nova_compute[259907]: 2025-10-01 16:55:29.570 2 DEBUG nova.network.neutron [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 12:55:29 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:55:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:55:29 np0005464891 nova_compute[259907]: 2025-10-01 16:55:29.591 2 DEBUG nova.compute.manager [req-5d581ffc-b9a3-4d00-83db-5cfa0af854f1 req-940544ca-7b1b-4e5c-9a01-ad0c382b412e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Received event network-changed-fd73f99f-9c18-4285-83fe-e782392df556 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:55:29 np0005464891 nova_compute[259907]: 2025-10-01 16:55:29.591 2 DEBUG nova.compute.manager [req-5d581ffc-b9a3-4d00-83db-5cfa0af854f1 req-940544ca-7b1b-4e5c-9a01-ad0c382b412e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Refreshing instance network info cache due to event network-changed-fd73f99f-9c18-4285-83fe-e782392df556. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:55:29 np0005464891 nova_compute[259907]: 2025-10-01 16:55:29.591 2 DEBUG oslo_concurrency.lockutils [req-5d581ffc-b9a3-4d00-83db-5cfa0af854f1 req-940544ca-7b1b-4e5c-9a01-ad0c382b412e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-c6b5a948-5763-4847-a02b-6010ab49c3da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:55:29 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:55:29 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 80e30a66-f82d-4706-8aad-9c190cc16a43 does not exist
Oct  1 12:55:29 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 61e5e3c9-ef7b-4f5e-96ed-d382c87c8b0c does not exist
Oct  1 12:55:29 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:29.600 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:55:29 np0005464891 nova_compute[259907]: 2025-10-01 16:55:29.744 2 DEBUG nova.network.neutron [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 12:55:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e397 do_prune osdmap full prune enabled
Oct  1 12:55:30 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:55:30 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:55:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e398 e398: 3 total, 3 up, 3 in
Oct  1 12:55:30 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e398: 3 total, 3 up, 3 in
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.746 2 DEBUG nova.network.neutron [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Updating instance_info_cache with network_info: [{"id": "fd73f99f-9c18-4285-83fe-e782392df556", "address": "fa:16:3e:17:ee:ad", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd73f99f-9c", "ovs_interfaceid": "fd73f99f-9c18-4285-83fe-e782392df556", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.768 2 DEBUG oslo_concurrency.lockutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Releasing lock "refresh_cache-c6b5a948-5763-4847-a02b-6010ab49c3da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.768 2 DEBUG nova.compute.manager [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Instance network_info: |[{"id": "fd73f99f-9c18-4285-83fe-e782392df556", "address": "fa:16:3e:17:ee:ad", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd73f99f-9c", "ovs_interfaceid": "fd73f99f-9c18-4285-83fe-e782392df556", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.769 2 DEBUG oslo_concurrency.lockutils [req-5d581ffc-b9a3-4d00-83db-5cfa0af854f1 req-940544ca-7b1b-4e5c-9a01-ad0c382b412e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-c6b5a948-5763-4847-a02b-6010ab49c3da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.769 2 DEBUG nova.network.neutron [req-5d581ffc-b9a3-4d00-83db-5cfa0af854f1 req-940544ca-7b1b-4e5c-9a01-ad0c382b412e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Refreshing network info cache for port fd73f99f-9c18-4285-83fe-e782392df556 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.772 2 DEBUG nova.virt.libvirt.driver [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Start _get_guest_xml network_info=[{"id": "fd73f99f-9c18-4285-83fe-e782392df556", "address": "fa:16:3e:17:ee:ad", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd73f99f-9c", "ovs_interfaceid": "fd73f99f-9c18-4285-83fe-e782392df556", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'mount_device': '/dev/vda', 'attachment_id': '77f4ceb8-45fe-4e12-a7a4-2ca512f3bb42', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-da20faa4-b3b4-4ffa-aa39-6b9eba1450d7', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'da20faa4-b3b4-4ffa-aa39-6b9eba1450d7', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'c6b5a948-5763-4847-a02b-6010ab49c3da', 'attached_at': '', 'detached_at': '', 'volume_id': 'da20faa4-b3b4-4ffa-aa39-6b9eba1450d7', 'serial': 'da20faa4-b3b4-4ffa-aa39-6b9eba1450d7'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.777 2 WARNING nova.virt.libvirt.driver [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.783 2 DEBUG nova.virt.libvirt.host [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.783 2 DEBUG nova.virt.libvirt.host [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.786 2 DEBUG nova.virt.libvirt.host [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.786 2 DEBUG nova.virt.libvirt.host [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.787 2 DEBUG nova.virt.libvirt.driver [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.787 2 DEBUG nova.virt.hardware [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.787 2 DEBUG nova.virt.hardware [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.788 2 DEBUG nova.virt.hardware [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.788 2 DEBUG nova.virt.hardware [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.788 2 DEBUG nova.virt.hardware [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.788 2 DEBUG nova.virt.hardware [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.788 2 DEBUG nova.virt.hardware [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.789 2 DEBUG nova.virt.hardware [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.789 2 DEBUG nova.virt.hardware [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.789 2 DEBUG nova.virt.hardware [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.789 2 DEBUG nova.virt.hardware [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.842 2 DEBUG nova.storage.rbd_utils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] rbd image c6b5a948-5763-4847-a02b-6010ab49c3da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.849 2 DEBUG oslo_concurrency.processutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.876 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.882 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.883 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.933 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.934 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.935 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.935 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 12:55:30 np0005464891 nova_compute[259907]: 2025-10-01 16:55:30.936 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:55:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1628: 321 pgs: 321 active+clean; 170 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 10 KiB/s wr, 170 op/s
Oct  1 12:55:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:55:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e398 do_prune osdmap full prune enabled
Oct  1 12:55:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:55:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3131840331' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:55:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e399 e399: 3 total, 3 up, 3 in
Oct  1 12:55:31 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e399: 3 total, 3 up, 3 in
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.307 2 DEBUG oslo_concurrency.processutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:55:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/375725528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.394 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.458 2 DEBUG os_brick.encryptors [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Using volume encryption metadata '{'encryption_key_id': '31214fe1-c53f-4f9c-8922-04ce10b0b63f', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-da20faa4-b3b4-4ffa-aa39-6b9eba1450d7', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'da20faa4-b3b4-4ffa-aa39-6b9eba1450d7', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'c6b5a948-5763-4847-a02b-6010ab49c3da', 'attached_at': '', 'detached_at': '', 'volume_id': 'da20faa4-b3b4-4ffa-aa39-6b9eba1450d7', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.460 2 DEBUG barbicanclient.client [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.473 2 DEBUG barbicanclient.v1.secrets [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/31214fe1-c53f-4f9c-8922-04ce10b0b63f get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.474 2 INFO barbicanclient.base [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Calculated Secrets uuid ref: secrets/31214fe1-c53f-4f9c-8922-04ce10b0b63f#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.481 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.481 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.496 2 DEBUG barbicanclient.client [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.497 2 INFO barbicanclient.base [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Calculated Secrets uuid ref: secrets/31214fe1-c53f-4f9c-8922-04ce10b0b63f#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.519 2 DEBUG barbicanclient.client [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.520 2 INFO barbicanclient.base [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Calculated Secrets uuid ref: secrets/31214fe1-c53f-4f9c-8922-04ce10b0b63f#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.545 2 DEBUG barbicanclient.client [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.547 2 INFO barbicanclient.base [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Calculated Secrets uuid ref: secrets/31214fe1-c53f-4f9c-8922-04ce10b0b63f#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.568 2 DEBUG barbicanclient.client [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.568 2 INFO barbicanclient.base [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Calculated Secrets uuid ref: secrets/31214fe1-c53f-4f9c-8922-04ce10b0b63f#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.586 2 DEBUG barbicanclient.client [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.587 2 INFO barbicanclient.base [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Calculated Secrets uuid ref: secrets/31214fe1-c53f-4f9c-8922-04ce10b0b63f#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.616 2 DEBUG barbicanclient.client [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.617 2 INFO barbicanclient.base [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Calculated Secrets uuid ref: secrets/31214fe1-c53f-4f9c-8922-04ce10b0b63f#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.637 2 DEBUG barbicanclient.client [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.637 2 INFO barbicanclient.base [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Calculated Secrets uuid ref: secrets/31214fe1-c53f-4f9c-8922-04ce10b0b63f#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.661 2 DEBUG barbicanclient.client [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.662 2 INFO barbicanclient.base [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Calculated Secrets uuid ref: secrets/31214fe1-c53f-4f9c-8922-04ce10b0b63f#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.680 2 DEBUG barbicanclient.client [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.680 2 INFO barbicanclient.base [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Calculated Secrets uuid ref: secrets/31214fe1-c53f-4f9c-8922-04ce10b0b63f#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.687 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.688 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4197MB free_disk=59.94252014160156GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.688 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.688 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.701 2 DEBUG barbicanclient.client [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.702 2 INFO barbicanclient.base [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Calculated Secrets uuid ref: secrets/31214fe1-c53f-4f9c-8922-04ce10b0b63f#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.721 2 DEBUG barbicanclient.client [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.721 2 INFO barbicanclient.base [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Calculated Secrets uuid ref: secrets/31214fe1-c53f-4f9c-8922-04ce10b0b63f#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.747 2 DEBUG barbicanclient.client [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.748 2 INFO barbicanclient.base [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Calculated Secrets uuid ref: secrets/31214fe1-c53f-4f9c-8922-04ce10b0b63f#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.773 2 DEBUG barbicanclient.client [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.775 2 INFO barbicanclient.base [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Calculated Secrets uuid ref: secrets/31214fe1-c53f-4f9c-8922-04ce10b0b63f#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.778 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance eef473c3-8fff-4cd4-a5f8-ef9b89b7439a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.778 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance c6b5a948-5763-4847-a02b-6010ab49c3da actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.779 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.779 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.809 2 DEBUG barbicanclient.client [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.810 2 INFO barbicanclient.base [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Calculated Secrets uuid ref: secrets/31214fe1-c53f-4f9c-8922-04ce10b0b63f#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.832 2 DEBUG barbicanclient.client [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.833 2 DEBUG nova.virt.libvirt.host [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct  1 12:55:31 np0005464891 nova_compute[259907]:  <usage type="volume">
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <volume>da20faa4-b3b4-4ffa-aa39-6b9eba1450d7</volume>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:  </usage>
Oct  1 12:55:31 np0005464891 nova_compute[259907]: </secret>
Oct  1 12:55:31 np0005464891 nova_compute[259907]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.854 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.889 2 DEBUG nova.network.neutron [req-5d581ffc-b9a3-4d00-83db-5cfa0af854f1 req-940544ca-7b1b-4e5c-9a01-ad0c382b412e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Updated VIF entry in instance network info cache for port fd73f99f-9c18-4285-83fe-e782392df556. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.890 2 DEBUG nova.network.neutron [req-5d581ffc-b9a3-4d00-83db-5cfa0af854f1 req-940544ca-7b1b-4e5c-9a01-ad0c382b412e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Updating instance_info_cache with network_info: [{"id": "fd73f99f-9c18-4285-83fe-e782392df556", "address": "fa:16:3e:17:ee:ad", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd73f99f-9c", "ovs_interfaceid": "fd73f99f-9c18-4285-83fe-e782392df556", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.919 2 DEBUG nova.virt.libvirt.vif [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:55:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1658957626',display_name='tempest-TestVolumeBootPattern-server-1658957626',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1658957626',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8318b65fa88942a99937a0d198a04a9c',ramdisk_id='',reservation_id='r-bbaa26ff',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-582136054',owner_user_name='tempest-TestVolumeBootPattern-582136054-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:55:28Z,user_data=None,user_id='1280
014cdfb74333ae8d71c78116e646',uuid=c6b5a948-5763-4847-a02b-6010ab49c3da,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd73f99f-9c18-4285-83fe-e782392df556", "address": "fa:16:3e:17:ee:ad", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd73f99f-9c", "ovs_interfaceid": "fd73f99f-9c18-4285-83fe-e782392df556", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.920 2 DEBUG nova.network.os_vif_util [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converting VIF {"id": "fd73f99f-9c18-4285-83fe-e782392df556", "address": "fa:16:3e:17:ee:ad", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd73f99f-9c", "ovs_interfaceid": "fd73f99f-9c18-4285-83fe-e782392df556", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.921 2 DEBUG nova.network.os_vif_util [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:ee:ad,bridge_name='br-int',has_traffic_filtering=True,id=fd73f99f-9c18-4285-83fe-e782392df556,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd73f99f-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.923 2 DEBUG nova.objects.instance [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lazy-loading 'pci_devices' on Instance uuid c6b5a948-5763-4847-a02b-6010ab49c3da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.925 2 DEBUG oslo_concurrency.lockutils [req-5d581ffc-b9a3-4d00-83db-5cfa0af854f1 req-940544ca-7b1b-4e5c-9a01-ad0c382b412e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-c6b5a948-5763-4847-a02b-6010ab49c3da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.939 2 DEBUG nova.virt.libvirt.driver [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] End _get_guest_xml xml=<domain type="kvm">
Oct  1 12:55:31 np0005464891 nova_compute[259907]:  <uuid>c6b5a948-5763-4847-a02b-6010ab49c3da</uuid>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:  <name>instance-00000010</name>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <nova:name>tempest-TestVolumeBootPattern-server-1658957626</nova:name>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 16:55:30</nova:creationTime>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 12:55:31 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:        <nova:user uuid="1280014cdfb74333ae8d71c78116e646">tempest-TestVolumeBootPattern-582136054-project-member</nova:user>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:        <nova:project uuid="8318b65fa88942a99937a0d198a04a9c">tempest-TestVolumeBootPattern-582136054</nova:project>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:        <nova:port uuid="fd73f99f-9c18-4285-83fe-e782392df556">
Oct  1 12:55:31 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <system>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <entry name="serial">c6b5a948-5763-4847-a02b-6010ab49c3da</entry>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <entry name="uuid">c6b5a948-5763-4847-a02b-6010ab49c3da</entry>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    </system>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:  <os>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:  </clock>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/c6b5a948-5763-4847-a02b-6010ab49c3da_disk.config">
Oct  1 12:55:31 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:55:31 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="volumes/volume-da20faa4-b3b4-4ffa-aa39-6b9eba1450d7">
Oct  1 12:55:31 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:55:31 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <serial>da20faa4-b3b4-4ffa-aa39-6b9eba1450d7</serial>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <encryption format="luks">
Oct  1 12:55:31 np0005464891 nova_compute[259907]:        <secret type="passphrase" uuid="0b05bc63-a742-40c0-824a-4fa7e702a491"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      </encryption>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:17:ee:ad"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <target dev="tapfd73f99f-9c"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/c6b5a948-5763-4847-a02b-6010ab49c3da/console.log" append="off"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    </serial>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <video>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 12:55:31 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 12:55:31 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:55:31 np0005464891 nova_compute[259907]: </domain>
Oct  1 12:55:31 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.941 2 DEBUG nova.compute.manager [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Preparing to wait for external event network-vif-plugged-fd73f99f-9c18-4285-83fe-e782392df556 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.941 2 DEBUG oslo_concurrency.lockutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "c6b5a948-5763-4847-a02b-6010ab49c3da-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.941 2 DEBUG oslo_concurrency.lockutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "c6b5a948-5763-4847-a02b-6010ab49c3da-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.941 2 DEBUG oslo_concurrency.lockutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "c6b5a948-5763-4847-a02b-6010ab49c3da-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.942 2 DEBUG nova.virt.libvirt.vif [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:55:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1658957626',display_name='tempest-TestVolumeBootPattern-server-1658957626',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1658957626',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8318b65fa88942a99937a0d198a04a9c',ramdisk_id='',reservation_id='r-bbaa26ff',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-582136054',owner_user_name='tempest-TestVolumeBootPattern-582136054-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:55:28Z,user_data=None,user_id='1280014cdfb74333ae8d71c78116e646',uuid=c6b5a948-5763-4847-a02b-6010ab49c3da,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd73f99f-9c18-4285-83fe-e782392df556", "address": "fa:16:3e:17:ee:ad", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd73f99f-9c", "ovs_interfaceid": "fd73f99f-9c18-4285-83fe-e782392df556", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.943 2 DEBUG nova.network.os_vif_util [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converting VIF {"id": "fd73f99f-9c18-4285-83fe-e782392df556", "address": "fa:16:3e:17:ee:ad", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd73f99f-9c", "ovs_interfaceid": "fd73f99f-9c18-4285-83fe-e782392df556", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.943 2 DEBUG nova.network.os_vif_util [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:ee:ad,bridge_name='br-int',has_traffic_filtering=True,id=fd73f99f-9c18-4285-83fe-e782392df556,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd73f99f-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.944 2 DEBUG os_vif [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:ee:ad,bridge_name='br-int',has_traffic_filtering=True,id=fd73f99f-9c18-4285-83fe-e782392df556,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd73f99f-9c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.945 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.945 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.950 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd73f99f-9c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.950 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfd73f99f-9c, col_values=(('external_ids', {'iface-id': 'fd73f99f-9c18-4285-83fe-e782392df556', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:ee:ad', 'vm-uuid': 'c6b5a948-5763-4847-a02b-6010ab49c3da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:31 np0005464891 NetworkManager[44940]: <info>  [1759337731.9530] manager: (tapfd73f99f-9c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:31 np0005464891 nova_compute[259907]: 2025-10-01 16:55:31.962 2 INFO os_vif [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:ee:ad,bridge_name='br-int',has_traffic_filtering=True,id=fd73f99f-9c18-4285-83fe-e782392df556,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd73f99f-9c')#033[00m
Oct  1 12:55:32 np0005464891 nova_compute[259907]: 2025-10-01 16:55:32.022 2 DEBUG nova.virt.libvirt.driver [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:55:32 np0005464891 nova_compute[259907]: 2025-10-01 16:55:32.022 2 DEBUG nova.virt.libvirt.driver [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:55:32 np0005464891 nova_compute[259907]: 2025-10-01 16:55:32.023 2 DEBUG nova.virt.libvirt.driver [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] No VIF found with MAC fa:16:3e:17:ee:ad, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:55:32 np0005464891 nova_compute[259907]: 2025-10-01 16:55:32.023 2 INFO nova.virt.libvirt.driver [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Using config drive#033[00m
Oct  1 12:55:32 np0005464891 nova_compute[259907]: 2025-10-01 16:55:32.046 2 DEBUG nova.storage.rbd_utils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] rbd image c6b5a948-5763-4847-a02b-6010ab49c3da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:55:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:55:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/473288074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:55:32 np0005464891 nova_compute[259907]: 2025-10-01 16:55:32.316 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:55:32 np0005464891 nova_compute[259907]: 2025-10-01 16:55:32.325 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:55:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:55:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/942270904' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:55:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:55:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/942270904' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:55:32 np0005464891 nova_compute[259907]: 2025-10-01 16:55:32.343 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:55:32 np0005464891 nova_compute[259907]: 2025-10-01 16:55:32.395 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 12:55:32 np0005464891 nova_compute[259907]: 2025-10-01 16:55:32.396 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:32 np0005464891 nova_compute[259907]: 2025-10-01 16:55:32.446 2 INFO nova.virt.libvirt.driver [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Creating config drive at /var/lib/nova/instances/c6b5a948-5763-4847-a02b-6010ab49c3da/disk.config#033[00m
Oct  1 12:55:32 np0005464891 nova_compute[259907]: 2025-10-01 16:55:32.456 2 DEBUG oslo_concurrency.processutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c6b5a948-5763-4847-a02b-6010ab49c3da/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoi7wtkrr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:55:32 np0005464891 nova_compute[259907]: 2025-10-01 16:55:32.590 2 DEBUG oslo_concurrency.processutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c6b5a948-5763-4847-a02b-6010ab49c3da/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoi7wtkrr" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:55:32 np0005464891 nova_compute[259907]: 2025-10-01 16:55:32.629 2 DEBUG nova.storage.rbd_utils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] rbd image c6b5a948-5763-4847-a02b-6010ab49c3da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:55:32 np0005464891 nova_compute[259907]: 2025-10-01 16:55:32.633 2 DEBUG oslo_concurrency.processutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c6b5a948-5763-4847-a02b-6010ab49c3da/disk.config c6b5a948-5763-4847-a02b-6010ab49c3da_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:55:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e399 do_prune osdmap full prune enabled
Oct  1 12:55:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e400 e400: 3 total, 3 up, 3 in
Oct  1 12:55:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1630: 321 pgs: 321 active+clean; 170 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 171 KiB/s rd, 12 KiB/s wr, 227 op/s
Oct  1 12:55:33 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e400: 3 total, 3 up, 3 in
Oct  1 12:55:33 np0005464891 nova_compute[259907]: 2025-10-01 16:55:33.317 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:55:33 np0005464891 nova_compute[259907]: 2025-10-01 16:55:33.808 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:55:33 np0005464891 nova_compute[259907]: 2025-10-01 16:55:33.808 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 12:55:34 np0005464891 nova_compute[259907]: 2025-10-01 16:55:34.209 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "refresh_cache-eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:55:34 np0005464891 nova_compute[259907]: 2025-10-01 16:55:34.210 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquired lock "refresh_cache-eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:55:34 np0005464891 nova_compute[259907]: 2025-10-01 16:55:34.210 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  1 12:55:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1632: 321 pgs: 321 active+clean; 170 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 180 KiB/s rd, 30 KiB/s wr, 240 op/s
Oct  1 12:55:35 np0005464891 nova_compute[259907]: 2025-10-01 16:55:35.810 2 DEBUG oslo_concurrency.processutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c6b5a948-5763-4847-a02b-6010ab49c3da/disk.config c6b5a948-5763-4847-a02b-6010ab49c3da_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:55:35 np0005464891 nova_compute[259907]: 2025-10-01 16:55:35.811 2 INFO nova.virt.libvirt.driver [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Deleting local config drive /var/lib/nova/instances/c6b5a948-5763-4847-a02b-6010ab49c3da/disk.config because it was imported into RBD.#033[00m
Oct  1 12:55:35 np0005464891 kernel: tapfd73f99f-9c: entered promiscuous mode
Oct  1 12:55:35 np0005464891 NetworkManager[44940]: <info>  [1759337735.8828] manager: (tapfd73f99f-9c): new Tun device (/org/freedesktop/NetworkManager/Devices/92)
Oct  1 12:55:35 np0005464891 ovn_controller[152409]: 2025-10-01T16:55:35Z|00151|binding|INFO|Claiming lport fd73f99f-9c18-4285-83fe-e782392df556 for this chassis.
Oct  1 12:55:35 np0005464891 ovn_controller[152409]: 2025-10-01T16:55:35Z|00152|binding|INFO|fd73f99f-9c18-4285-83fe-e782392df556: Claiming fa:16:3e:17:ee:ad 10.100.0.10
Oct  1 12:55:35 np0005464891 nova_compute[259907]: 2025-10-01 16:55:35.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:35 np0005464891 nova_compute[259907]: 2025-10-01 16:55:35.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:35 np0005464891 ovn_controller[152409]: 2025-10-01T16:55:35Z|00153|binding|INFO|Setting lport fd73f99f-9c18-4285-83fe-e782392df556 ovn-installed in OVS
Oct  1 12:55:35 np0005464891 nova_compute[259907]: 2025-10-01 16:55:35.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:35 np0005464891 systemd-machined[214891]: New machine qemu-16-instance-00000010.
Oct  1 12:55:35 np0005464891 systemd[1]: Started Virtual Machine qemu-16-instance-00000010.
Oct  1 12:55:35 np0005464891 systemd-udevd[292780]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:55:35 np0005464891 NetworkManager[44940]: <info>  [1759337735.9750] device (tapfd73f99f-9c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:55:35 np0005464891 NetworkManager[44940]: <info>  [1759337735.9759] device (tapfd73f99f-9c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 12:55:36 np0005464891 ovn_controller[152409]: 2025-10-01T16:55:36Z|00154|binding|INFO|Setting lport fd73f99f-9c18-4285-83fe-e782392df556 up in Southbound
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.046 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:ee:ad 10.100.0.10'], port_security=['fa:16:3e:17:ee:ad 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c6b5a948-5763-4847-a02b-6010ab49c3da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce1e1062-6685-441b-8278-667224375e38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8318b65fa88942a99937a0d198a04a9c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc91778b-466d-4bf2-b0e0-b4af5293ed3e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4cdcf07-f310-4572-944c-43bd8e74f763, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=fd73f99f-9c18-4285-83fe-e782392df556) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.047 162546 INFO neutron.agent.ovn.metadata.agent [-] Port fd73f99f-9c18-4285-83fe-e782392df556 in datapath ce1e1062-6685-441b-8278-667224375e38 bound to our chassis#033[00m
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.056 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce1e1062-6685-441b-8278-667224375e38#033[00m
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.071 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[f1c6b8e0-ffd9-4273-a393-4186188dbef0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.072 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce1e1062-61 in ovnmeta-ce1e1062-6685-441b-8278-667224375e38 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.079 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce1e1062-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.079 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca6806c-1c00-4d69-92dc-5a1b87f82ac6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.081 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[de35872c-ebb2-43cf-a582-fd8bbe18c018]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.094 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[c8135880-ce0c-46e0-b4ee-6a1b74b2a723]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.136 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[2b773c04-3594-41a6-8790-5902a2ab6a89]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.171 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[ae7cb0fd-7456-434b-82e7-72df00bbd372]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:36 np0005464891 NetworkManager[44940]: <info>  [1759337736.1848] manager: (tapce1e1062-60): new Veth device (/org/freedesktop/NetworkManager/Devices/93)
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.184 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[7a24fa21-c1dd-45d5-bc5a-c0aa7148b258]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.219 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c94fe4-0f21-44b8-8bf3-b45345088edc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.222 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[60993c24-87d2-419a-b418-34140a0f17fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:36 np0005464891 NetworkManager[44940]: <info>  [1759337736.2522] device (tapce1e1062-60): carrier: link connected
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.264 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[6e96bf79-8b4f-4ff4-8987-02aa7d536b81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.283 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[f1b860c7-3b32-4c33-abb1-c157574263b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce1e1062-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:87:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468482, 'reachable_time': 17542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292813, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:55:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e400 do_prune osdmap full prune enabled
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.302 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca3ba27-068c-4290-ab9a-985303c8b0a6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec5:872c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 468482, 'tstamp': 468482}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292814, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.319 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[2ecd22de-7189-4a3e-8698-b2eaed13fd90]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce1e1062-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:87:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468482, 'reachable_time': 17542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292815, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.361 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[621848b7-409b-495e-9f70-61c60021f25a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:36 np0005464891 nova_compute[259907]: 2025-10-01 16:55:36.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e401 e401: 3 total, 3 up, 3 in
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.475 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab961e2-7e8f-449a-aa88-bae979435c9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.476 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce1e1062-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.477 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.477 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce1e1062-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:55:36 np0005464891 kernel: tapce1e1062-60: entered promiscuous mode
Oct  1 12:55:36 np0005464891 nova_compute[259907]: 2025-10-01 16:55:36.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:36 np0005464891 NetworkManager[44940]: <info>  [1759337736.4868] manager: (tapce1e1062-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/94)
Oct  1 12:55:36 np0005464891 nova_compute[259907]: 2025-10-01 16:55:36.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.488 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce1e1062-60, col_values=(('external_ids', {'iface-id': 'd971881d-8d8b-44dc-b0b0-4cd0065c0105'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:55:36 np0005464891 ovn_controller[152409]: 2025-10-01T16:55:36Z|00155|binding|INFO|Releasing lport d971881d-8d8b-44dc-b0b0-4cd0065c0105 from this chassis (sb_readonly=0)
Oct  1 12:55:36 np0005464891 nova_compute[259907]: 2025-10-01 16:55:36.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:36 np0005464891 nova_compute[259907]: 2025-10-01 16:55:36.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.491 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce1e1062-6685-441b-8278-667224375e38.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce1e1062-6685-441b-8278-667224375e38.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.494 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[08342560-47e0-42ad-ae03-0915b778c31b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.495 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-ce1e1062-6685-441b-8278-667224375e38
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/ce1e1062-6685-441b-8278-667224375e38.pid.haproxy
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID ce1e1062-6685-441b-8278-667224375e38
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 12:55:36 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:36.496 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'env', 'PROCESS_TAG=haproxy-ce1e1062-6685-441b-8278-667224375e38', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce1e1062-6685-441b-8278-667224375e38.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 12:55:36 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e401: 3 total, 3 up, 3 in
Oct  1 12:55:36 np0005464891 nova_compute[259907]: 2025-10-01 16:55:36.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:36 np0005464891 nova_compute[259907]: 2025-10-01 16:55:36.739 2 DEBUG nova.compute.manager [req-a4e0f7d1-629e-40f1-8822-351be4c58d56 req-ca1d588d-4e22-4a97-ba82-8db153f7ca76 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Received event network-vif-plugged-fd73f99f-9c18-4285-83fe-e782392df556 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:55:36 np0005464891 nova_compute[259907]: 2025-10-01 16:55:36.740 2 DEBUG oslo_concurrency.lockutils [req-a4e0f7d1-629e-40f1-8822-351be4c58d56 req-ca1d588d-4e22-4a97-ba82-8db153f7ca76 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "c6b5a948-5763-4847-a02b-6010ab49c3da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:36 np0005464891 nova_compute[259907]: 2025-10-01 16:55:36.740 2 DEBUG oslo_concurrency.lockutils [req-a4e0f7d1-629e-40f1-8822-351be4c58d56 req-ca1d588d-4e22-4a97-ba82-8db153f7ca76 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "c6b5a948-5763-4847-a02b-6010ab49c3da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:36 np0005464891 nova_compute[259907]: 2025-10-01 16:55:36.740 2 DEBUG oslo_concurrency.lockutils [req-a4e0f7d1-629e-40f1-8822-351be4c58d56 req-ca1d588d-4e22-4a97-ba82-8db153f7ca76 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "c6b5a948-5763-4847-a02b-6010ab49c3da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:36 np0005464891 nova_compute[259907]: 2025-10-01 16:55:36.740 2 DEBUG nova.compute.manager [req-a4e0f7d1-629e-40f1-8822-351be4c58d56 req-ca1d588d-4e22-4a97-ba82-8db153f7ca76 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Processing event network-vif-plugged-fd73f99f-9c18-4285-83fe-e782392df556 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 12:55:36 np0005464891 podman[292860]: 2025-10-01 16:55:36.84977932 +0000 UTC m=+0.029489288 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 12:55:37 np0005464891 nova_compute[259907]: 2025-10-01 16:55:37.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1634: 321 pgs: 321 active+clean; 170 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 25 KiB/s wr, 93 op/s
Oct  1 12:55:37 np0005464891 nova_compute[259907]: 2025-10-01 16:55:37.200 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Updating instance_info_cache with network_info: [{"id": "a11a83be-c1d2-47f1-92f5-556ead33435e", "address": "fa:16:3e:b5:a9:d3", "network": {"id": "f871e885-fd92-424f-b0b3-6d810367183a", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1323819196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6195d07ebe4991a5be01fb7ba2afdc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa11a83be-c1", "ovs_interfaceid": "a11a83be-c1d2-47f1-92f5-556ead33435e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:55:37 np0005464891 nova_compute[259907]: 2025-10-01 16:55:37.264 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Releasing lock "refresh_cache-eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:55:37 np0005464891 nova_compute[259907]: 2025-10-01 16:55:37.265 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  1 12:55:37 np0005464891 nova_compute[259907]: 2025-10-01 16:55:37.265 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:55:37 np0005464891 nova_compute[259907]: 2025-10-01 16:55:37.266 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:55:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:55:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1841201492' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:55:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:55:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1841201492' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:55:37 np0005464891 podman[292860]: 2025-10-01 16:55:37.551589473 +0000 UTC m=+0.731299421 container create c10fe69076a3708d90cc5fa5d36b8efb6443285c86332b63b1cd9ecf0d18b672 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:55:37 np0005464891 nova_compute[259907]: 2025-10-01 16:55:37.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:55:37 np0005464891 nova_compute[259907]: 2025-10-01 16:55:37.910 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:55:37 np0005464891 nova_compute[259907]: 2025-10-01 16:55:37.910 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:55:37 np0005464891 systemd[1]: Started libpod-conmon-c10fe69076a3708d90cc5fa5d36b8efb6443285c86332b63b1cd9ecf0d18b672.scope.
Oct  1 12:55:38 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:55:38 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c635c8b43e0bd9944447cdd2e04cec10451a36867bef06bed3dab5696554bb4e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 12:55:38 np0005464891 podman[292860]: 2025-10-01 16:55:38.540163216 +0000 UTC m=+1.719873204 container init c10fe69076a3708d90cc5fa5d36b8efb6443285c86332b63b1cd9ecf0d18b672 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  1 12:55:38 np0005464891 podman[292860]: 2025-10-01 16:55:38.552812128 +0000 UTC m=+1.732522116 container start c10fe69076a3708d90cc5fa5d36b8efb6443285c86332b63b1cd9ecf0d18b672 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  1 12:55:38 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[292896]: [NOTICE]   (292911) : New worker (292913) forked
Oct  1 12:55:38 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[292896]: [NOTICE]   (292911) : Loading success.
Oct  1 12:55:38 np0005464891 podman[292898]: 2025-10-01 16:55:38.94555253 +0000 UTC m=+0.949303152 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  1 12:55:38 np0005464891 nova_compute[259907]: 2025-10-01 16:55:38.994 2 DEBUG nova.compute.manager [req-77e6b47b-df67-41f9-8623-5f3813a31774 req-282c7c87-2c0a-481e-b1a0-cd15f6afabd6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Received event network-vif-plugged-fd73f99f-9c18-4285-83fe-e782392df556 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:55:38 np0005464891 nova_compute[259907]: 2025-10-01 16:55:38.994 2 DEBUG oslo_concurrency.lockutils [req-77e6b47b-df67-41f9-8623-5f3813a31774 req-282c7c87-2c0a-481e-b1a0-cd15f6afabd6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "c6b5a948-5763-4847-a02b-6010ab49c3da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:38 np0005464891 nova_compute[259907]: 2025-10-01 16:55:38.995 2 DEBUG oslo_concurrency.lockutils [req-77e6b47b-df67-41f9-8623-5f3813a31774 req-282c7c87-2c0a-481e-b1a0-cd15f6afabd6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "c6b5a948-5763-4847-a02b-6010ab49c3da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:38 np0005464891 nova_compute[259907]: 2025-10-01 16:55:38.995 2 DEBUG oslo_concurrency.lockutils [req-77e6b47b-df67-41f9-8623-5f3813a31774 req-282c7c87-2c0a-481e-b1a0-cd15f6afabd6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "c6b5a948-5763-4847-a02b-6010ab49c3da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:38 np0005464891 nova_compute[259907]: 2025-10-01 16:55:38.995 2 DEBUG nova.compute.manager [req-77e6b47b-df67-41f9-8623-5f3813a31774 req-282c7c87-2c0a-481e-b1a0-cd15f6afabd6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] No waiting events found dispatching network-vif-plugged-fd73f99f-9c18-4285-83fe-e782392df556 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:55:38 np0005464891 nova_compute[259907]: 2025-10-01 16:55:38.995 2 WARNING nova.compute.manager [req-77e6b47b-df67-41f9-8623-5f3813a31774 req-282c7c87-2c0a-481e-b1a0-cd15f6afabd6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Received unexpected event network-vif-plugged-fd73f99f-9c18-4285-83fe-e782392df556 for instance with vm_state building and task_state spawning.#033[00m
Oct  1 12:55:39 np0005464891 nova_compute[259907]: 2025-10-01 16:55:39.081 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:55:39 np0005464891 nova_compute[259907]: 2025-10-01 16:55:39.081 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  1 12:55:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1635: 321 pgs: 321 active+clean; 170 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 19 KiB/s wr, 77 op/s
Oct  1 12:55:39 np0005464891 nova_compute[259907]: 2025-10-01 16:55:39.431 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  1 12:55:39 np0005464891 nova_compute[259907]: 2025-10-01 16:55:39.432 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:55:39 np0005464891 nova_compute[259907]: 2025-10-01 16:55:39.432 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  1 12:55:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e401 do_prune osdmap full prune enabled
Oct  1 12:55:40 np0005464891 nova_compute[259907]: 2025-10-01 16:55:40.388 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337740.3877125, c6b5a948-5763-4847-a02b-6010ab49c3da => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:55:40 np0005464891 nova_compute[259907]: 2025-10-01 16:55:40.388 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] VM Started (Lifecycle Event)#033[00m
Oct  1 12:55:40 np0005464891 nova_compute[259907]: 2025-10-01 16:55:40.390 2 DEBUG nova.compute.manager [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 12:55:40 np0005464891 nova_compute[259907]: 2025-10-01 16:55:40.393 2 DEBUG nova.virt.libvirt.driver [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 12:55:40 np0005464891 nova_compute[259907]: 2025-10-01 16:55:40.397 2 INFO nova.virt.libvirt.driver [-] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Instance spawned successfully.#033[00m
Oct  1 12:55:40 np0005464891 nova_compute[259907]: 2025-10-01 16:55:40.397 2 DEBUG nova.virt.libvirt.driver [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  1 12:55:40 np0005464891 nova_compute[259907]: 2025-10-01 16:55:40.624 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:55:40 np0005464891 nova_compute[259907]: 2025-10-01 16:55:40.627 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:55:40 np0005464891 nova_compute[259907]: 2025-10-01 16:55:40.964 2 DEBUG nova.virt.libvirt.driver [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:55:40 np0005464891 nova_compute[259907]: 2025-10-01 16:55:40.965 2 DEBUG nova.virt.libvirt.driver [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:55:40 np0005464891 nova_compute[259907]: 2025-10-01 16:55:40.966 2 DEBUG nova.virt.libvirt.driver [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:55:40 np0005464891 nova_compute[259907]: 2025-10-01 16:55:40.967 2 DEBUG nova.virt.libvirt.driver [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:55:40 np0005464891 nova_compute[259907]: 2025-10-01 16:55:40.968 2 DEBUG nova.virt.libvirt.driver [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:55:40 np0005464891 nova_compute[259907]: 2025-10-01 16:55:40.969 2 DEBUG nova.virt.libvirt.driver [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:55:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1636: 321 pgs: 321 active+clean; 170 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 37 KiB/s wr, 51 op/s
Oct  1 12:55:41 np0005464891 nova_compute[259907]: 2025-10-01 16:55:41.199 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:55:41 np0005464891 nova_compute[259907]: 2025-10-01 16:55:41.200 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337740.3878818, c6b5a948-5763-4847-a02b-6010ab49c3da => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:55:41 np0005464891 nova_compute[259907]: 2025-10-01 16:55:41.200 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] VM Paused (Lifecycle Event)#033[00m
Oct  1 12:55:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e402 e402: 3 total, 3 up, 3 in
Oct  1 12:55:41 np0005464891 nova_compute[259907]: 2025-10-01 16:55:41.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:41 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e402: 3 total, 3 up, 3 in
Oct  1 12:55:41 np0005464891 nova_compute[259907]: 2025-10-01 16:55:41.496 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:55:41 np0005464891 nova_compute[259907]: 2025-10-01 16:55:41.501 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337740.392888, c6b5a948-5763-4847-a02b-6010ab49c3da => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:55:41 np0005464891 nova_compute[259907]: 2025-10-01 16:55:41.501 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] VM Resumed (Lifecycle Event)#033[00m
Oct  1 12:55:41 np0005464891 nova_compute[259907]: 2025-10-01 16:55:41.634 2 INFO nova.compute.manager [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Took 12.41 seconds to spawn the instance on the hypervisor.#033[00m
Oct  1 12:55:41 np0005464891 nova_compute[259907]: 2025-10-01 16:55:41.635 2 DEBUG nova.compute.manager [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:55:41 np0005464891 nova_compute[259907]: 2025-10-01 16:55:41.722 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:55:41 np0005464891 nova_compute[259907]: 2025-10-01 16:55:41.727 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:55:42 np0005464891 nova_compute[259907]: 2025-10-01 16:55:42.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:55:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:55:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:55:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:55:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:55:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:55:42 np0005464891 nova_compute[259907]: 2025-10-01 16:55:42.286 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:55:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e402 do_prune osdmap full prune enabled
Oct  1 12:55:42 np0005464891 nova_compute[259907]: 2025-10-01 16:55:42.829 2 INFO nova.compute.manager [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Took 15.72 seconds to build instance.#033[00m
Oct  1 12:55:42 np0005464891 nova_compute[259907]: 2025-10-01 16:55:42.947 2 DEBUG oslo_concurrency.lockutils [None req-bac61743-7093-4e49-9271-20eec4ccd8e8 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "c6b5a948-5763-4847-a02b-6010ab49c3da" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e403 e403: 3 total, 3 up, 3 in
Oct  1 12:55:42 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e403: 3 total, 3 up, 3 in
Oct  1 12:55:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1639: 321 pgs: 321 active+clean; 170 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 25 KiB/s wr, 31 op/s
Oct  1 12:55:43 np0005464891 podman[292946]: 2025-10-01 16:55:43.960710963 +0000 UTC m=+0.077279690 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  1 12:55:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1640: 321 pgs: 321 active+clean; 170 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 21 KiB/s wr, 33 op/s
Oct  1 12:55:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:55:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e403 do_prune osdmap full prune enabled
Oct  1 12:55:46 np0005464891 nova_compute[259907]: 2025-10-01 16:55:46.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e404 e404: 3 total, 3 up, 3 in
Oct  1 12:55:46 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e404: 3 total, 3 up, 3 in
Oct  1 12:55:46 np0005464891 nova_compute[259907]: 2025-10-01 16:55:46.899 2 DEBUG oslo_concurrency.lockutils [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "c6b5a948-5763-4847-a02b-6010ab49c3da" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:46 np0005464891 nova_compute[259907]: 2025-10-01 16:55:46.900 2 DEBUG oslo_concurrency.lockutils [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "c6b5a948-5763-4847-a02b-6010ab49c3da" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:46 np0005464891 nova_compute[259907]: 2025-10-01 16:55:46.900 2 DEBUG oslo_concurrency.lockutils [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "c6b5a948-5763-4847-a02b-6010ab49c3da-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:46 np0005464891 nova_compute[259907]: 2025-10-01 16:55:46.901 2 DEBUG oslo_concurrency.lockutils [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "c6b5a948-5763-4847-a02b-6010ab49c3da-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:46 np0005464891 nova_compute[259907]: 2025-10-01 16:55:46.901 2 DEBUG oslo_concurrency.lockutils [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "c6b5a948-5763-4847-a02b-6010ab49c3da-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:46 np0005464891 nova_compute[259907]: 2025-10-01 16:55:46.902 2 INFO nova.compute.manager [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Terminating instance#033[00m
Oct  1 12:55:46 np0005464891 nova_compute[259907]: 2025-10-01 16:55:46.903 2 DEBUG nova.compute.manager [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 12:55:47 np0005464891 nova_compute[259907]: 2025-10-01 16:55:47.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1642: 321 pgs: 321 active+clean; 170 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 1.5 KiB/s wr, 13 op/s
Oct  1 12:55:47 np0005464891 kernel: tapfd73f99f-9c (unregistering): left promiscuous mode
Oct  1 12:55:47 np0005464891 NetworkManager[44940]: <info>  [1759337747.4378] device (tapfd73f99f-9c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 12:55:47 np0005464891 nova_compute[259907]: 2025-10-01 16:55:47.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:47 np0005464891 ovn_controller[152409]: 2025-10-01T16:55:47Z|00156|binding|INFO|Releasing lport fd73f99f-9c18-4285-83fe-e782392df556 from this chassis (sb_readonly=0)
Oct  1 12:55:47 np0005464891 ovn_controller[152409]: 2025-10-01T16:55:47Z|00157|binding|INFO|Setting lport fd73f99f-9c18-4285-83fe-e782392df556 down in Southbound
Oct  1 12:55:47 np0005464891 ovn_controller[152409]: 2025-10-01T16:55:47Z|00158|binding|INFO|Removing iface tapfd73f99f-9c ovn-installed in OVS
Oct  1 12:55:47 np0005464891 nova_compute[259907]: 2025-10-01 16:55:47.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:47 np0005464891 nova_compute[259907]: 2025-10-01 16:55:47.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:47 np0005464891 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Deactivated successfully.
Oct  1 12:55:47 np0005464891 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Consumed 3.856s CPU time.
Oct  1 12:55:47 np0005464891 systemd-machined[214891]: Machine qemu-16-instance-00000010 terminated.
Oct  1 12:55:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:47.529 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:ee:ad 10.100.0.10'], port_security=['fa:16:3e:17:ee:ad 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c6b5a948-5763-4847-a02b-6010ab49c3da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce1e1062-6685-441b-8278-667224375e38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8318b65fa88942a99937a0d198a04a9c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cc91778b-466d-4bf2-b0e0-b4af5293ed3e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4cdcf07-f310-4572-944c-43bd8e74f763, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=fd73f99f-9c18-4285-83fe-e782392df556) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:55:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:47.531 162546 INFO neutron.agent.ovn.metadata.agent [-] Port fd73f99f-9c18-4285-83fe-e782392df556 in datapath ce1e1062-6685-441b-8278-667224375e38 unbound from our chassis#033[00m
Oct  1 12:55:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:47.532 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce1e1062-6685-441b-8278-667224375e38, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 12:55:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:47.533 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[3669eda8-4dbf-449e-9b69-b4b9ef4c2598]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:47.534 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce1e1062-6685-441b-8278-667224375e38 namespace which is not needed anymore#033[00m
Oct  1 12:55:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e404 do_prune osdmap full prune enabled
Oct  1 12:55:47 np0005464891 nova_compute[259907]: 2025-10-01 16:55:47.750 2 INFO nova.virt.libvirt.driver [-] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Instance destroyed successfully.#033[00m
Oct  1 12:55:47 np0005464891 nova_compute[259907]: 2025-10-01 16:55:47.751 2 DEBUG nova.objects.instance [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lazy-loading 'resources' on Instance uuid c6b5a948-5763-4847-a02b-6010ab49c3da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:55:47 np0005464891 nova_compute[259907]: 2025-10-01 16:55:47.841 2 DEBUG nova.virt.libvirt.vif [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T16:55:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1658957626',display_name='tempest-TestVolumeBootPattern-server-1658957626',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1658957626',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-01T16:55:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8318b65fa88942a99937a0d198a04a9c',ramdisk_id='',reservation_id='r-bbaa26ff',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestVolumeBootPattern-582136054',owner_user_name='tempest-TestVolumeBootPattern-582136054-project-memb
er'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T16:55:42Z,user_data=None,user_id='1280014cdfb74333ae8d71c78116e646',uuid=c6b5a948-5763-4847-a02b-6010ab49c3da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fd73f99f-9c18-4285-83fe-e782392df556", "address": "fa:16:3e:17:ee:ad", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd73f99f-9c", "ovs_interfaceid": "fd73f99f-9c18-4285-83fe-e782392df556", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 12:55:47 np0005464891 nova_compute[259907]: 2025-10-01 16:55:47.842 2 DEBUG nova.network.os_vif_util [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converting VIF {"id": "fd73f99f-9c18-4285-83fe-e782392df556", "address": "fa:16:3e:17:ee:ad", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd73f99f-9c", "ovs_interfaceid": "fd73f99f-9c18-4285-83fe-e782392df556", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:55:47 np0005464891 nova_compute[259907]: 2025-10-01 16:55:47.864 2 DEBUG nova.network.os_vif_util [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:ee:ad,bridge_name='br-int',has_traffic_filtering=True,id=fd73f99f-9c18-4285-83fe-e782392df556,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd73f99f-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:55:47 np0005464891 nova_compute[259907]: 2025-10-01 16:55:47.865 2 DEBUG os_vif [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:ee:ad,bridge_name='br-int',has_traffic_filtering=True,id=fd73f99f-9c18-4285-83fe-e782392df556,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd73f99f-9c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 12:55:47 np0005464891 nova_compute[259907]: 2025-10-01 16:55:47.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:47 np0005464891 nova_compute[259907]: 2025-10-01 16:55:47.868 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd73f99f-9c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:55:47 np0005464891 nova_compute[259907]: 2025-10-01 16:55:47.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:47 np0005464891 nova_compute[259907]: 2025-10-01 16:55:47.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:55:47 np0005464891 nova_compute[259907]: 2025-10-01 16:55:47.876 2 INFO os_vif [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:ee:ad,bridge_name='br-int',has_traffic_filtering=True,id=fd73f99f-9c18-4285-83fe-e782392df556,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd73f99f-9c')#033[00m
Oct  1 12:55:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e405 e405: 3 total, 3 up, 3 in
Oct  1 12:55:47 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[292896]: [NOTICE]   (292911) : haproxy version is 2.8.14-c23fe91
Oct  1 12:55:47 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[292896]: [NOTICE]   (292911) : path to executable is /usr/sbin/haproxy
Oct  1 12:55:47 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[292896]: [WARNING]  (292911) : Exiting Master process...
Oct  1 12:55:47 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[292896]: [ALERT]    (292911) : Current worker (292913) exited with code 143 (Terminated)
Oct  1 12:55:47 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[292896]: [WARNING]  (292911) : All workers exited. Exiting... (0)
Oct  1 12:55:47 np0005464891 systemd[1]: libpod-c10fe69076a3708d90cc5fa5d36b8efb6443285c86332b63b1cd9ecf0d18b672.scope: Deactivated successfully.
Oct  1 12:55:47 np0005464891 podman[292990]: 2025-10-01 16:55:47.931425034 +0000 UTC m=+0.313555412 container died c10fe69076a3708d90cc5fa5d36b8efb6443285c86332b63b1cd9ecf0d18b672 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  1 12:55:48 np0005464891 nova_compute[259907]: 2025-10-01 16:55:48.080 2 DEBUG nova.compute.manager [req-fe835ea2-13b1-470e-a421-8fce5d5c3692 req-bb392911-824e-4634-9d18-5ac493f72bc5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Received event network-vif-unplugged-fd73f99f-9c18-4285-83fe-e782392df556 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:55:48 np0005464891 nova_compute[259907]: 2025-10-01 16:55:48.080 2 DEBUG oslo_concurrency.lockutils [req-fe835ea2-13b1-470e-a421-8fce5d5c3692 req-bb392911-824e-4634-9d18-5ac493f72bc5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "c6b5a948-5763-4847-a02b-6010ab49c3da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:48 np0005464891 nova_compute[259907]: 2025-10-01 16:55:48.081 2 DEBUG oslo_concurrency.lockutils [req-fe835ea2-13b1-470e-a421-8fce5d5c3692 req-bb392911-824e-4634-9d18-5ac493f72bc5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "c6b5a948-5763-4847-a02b-6010ab49c3da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:48 np0005464891 nova_compute[259907]: 2025-10-01 16:55:48.081 2 DEBUG oslo_concurrency.lockutils [req-fe835ea2-13b1-470e-a421-8fce5d5c3692 req-bb392911-824e-4634-9d18-5ac493f72bc5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "c6b5a948-5763-4847-a02b-6010ab49c3da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:48 np0005464891 nova_compute[259907]: 2025-10-01 16:55:48.081 2 DEBUG nova.compute.manager [req-fe835ea2-13b1-470e-a421-8fce5d5c3692 req-bb392911-824e-4634-9d18-5ac493f72bc5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] No waiting events found dispatching network-vif-unplugged-fd73f99f-9c18-4285-83fe-e782392df556 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:55:48 np0005464891 nova_compute[259907]: 2025-10-01 16:55:48.081 2 DEBUG nova.compute.manager [req-fe835ea2-13b1-470e-a421-8fce5d5c3692 req-bb392911-824e-4634-9d18-5ac493f72bc5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Received event network-vif-unplugged-fd73f99f-9c18-4285-83fe-e782392df556 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 12:55:48 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e405: 3 total, 3 up, 3 in
Oct  1 12:55:48 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c10fe69076a3708d90cc5fa5d36b8efb6443285c86332b63b1cd9ecf0d18b672-userdata-shm.mount: Deactivated successfully.
Oct  1 12:55:48 np0005464891 systemd[1]: var-lib-containers-storage-overlay-c635c8b43e0bd9944447cdd2e04cec10451a36867bef06bed3dab5696554bb4e-merged.mount: Deactivated successfully.
Oct  1 12:55:48 np0005464891 podman[293049]: 2025-10-01 16:55:48.400607973 +0000 UTC m=+0.074648339 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  1 12:55:48 np0005464891 podman[292990]: 2025-10-01 16:55:48.559402634 +0000 UTC m=+0.941533032 container cleanup c10fe69076a3708d90cc5fa5d36b8efb6443285c86332b63b1cd9ecf0d18b672 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 12:55:48 np0005464891 systemd[1]: libpod-conmon-c10fe69076a3708d90cc5fa5d36b8efb6443285c86332b63b1cd9ecf0d18b672.scope: Deactivated successfully.
Oct  1 12:55:48 np0005464891 podman[293072]: 2025-10-01 16:55:48.981522559 +0000 UTC m=+0.393843022 container remove c10fe69076a3708d90cc5fa5d36b8efb6443285c86332b63b1cd9ecf0d18b672 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  1 12:55:48 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:48.996 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[dfcfcb05-ce54-43fb-8cb7-d6e447688dcd]: (4, ('Wed Oct  1 04:55:47 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38 (c10fe69076a3708d90cc5fa5d36b8efb6443285c86332b63b1cd9ecf0d18b672)\nc10fe69076a3708d90cc5fa5d36b8efb6443285c86332b63b1cd9ecf0d18b672\nWed Oct  1 04:55:48 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38 (c10fe69076a3708d90cc5fa5d36b8efb6443285c86332b63b1cd9ecf0d18b672)\nc10fe69076a3708d90cc5fa5d36b8efb6443285c86332b63b1cd9ecf0d18b672\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:48 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:48.998 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[fc5c520e-3b70-41f9-aa47-e52fc1c3c9bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:48.999 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce1e1062-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:55:49 np0005464891 nova_compute[259907]: 2025-10-01 16:55:49.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:49 np0005464891 kernel: tapce1e1062-60: left promiscuous mode
Oct  1 12:55:49 np0005464891 nova_compute[259907]: 2025-10-01 16:55:49.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:49.024 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[edf1e310-5f05-48e5-9556-2107c593934d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:49.065 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[104bb2ca-f948-4f84-8357-06cfb139d1b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:49.066 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[d973962b-228e-49f2-ad67-fb530c80dde2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:49.084 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[30473e9d-7b18-41bf-b4d5-f407e8db93ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468473, 'reachable_time': 32562, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293088, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:49 np0005464891 systemd[1]: run-netns-ovnmeta\x2dce1e1062\x2d6685\x2d441b\x2d8278\x2d667224375e38.mount: Deactivated successfully.
Oct  1 12:55:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:49.088 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce1e1062-6685-441b-8278-667224375e38 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 12:55:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:49.089 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[69ca250a-d795-48f4-b2f0-4d839d8c5cc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1644: 321 pgs: 321 active+clean; 170 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 1.3 KiB/s wr, 14 op/s
Oct  1 12:55:49 np0005464891 nova_compute[259907]: 2025-10-01 16:55:49.292 2 INFO nova.virt.libvirt.driver [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Deleting instance files /var/lib/nova/instances/c6b5a948-5763-4847-a02b-6010ab49c3da_del#033[00m
Oct  1 12:55:49 np0005464891 nova_compute[259907]: 2025-10-01 16:55:49.293 2 INFO nova.virt.libvirt.driver [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Deletion of /var/lib/nova/instances/c6b5a948-5763-4847-a02b-6010ab49c3da_del complete#033[00m
Oct  1 12:55:49 np0005464891 nova_compute[259907]: 2025-10-01 16:55:49.350 2 INFO nova.compute.manager [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Took 2.45 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 12:55:49 np0005464891 nova_compute[259907]: 2025-10-01 16:55:49.351 2 DEBUG oslo.service.loopingcall [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 12:55:49 np0005464891 nova_compute[259907]: 2025-10-01 16:55:49.352 2 DEBUG nova.compute.manager [-] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 12:55:49 np0005464891 nova_compute[259907]: 2025-10-01 16:55:49.352 2 DEBUG nova.network.neutron [-] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 12:55:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:55:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2487409422' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:55:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:55:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2487409422' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:55:49 np0005464891 nova_compute[259907]: 2025-10-01 16:55:49.743 2 DEBUG oslo_concurrency.lockutils [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Acquiring lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:49 np0005464891 nova_compute[259907]: 2025-10-01 16:55:49.743 2 DEBUG oslo_concurrency.lockutils [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:49 np0005464891 nova_compute[259907]: 2025-10-01 16:55:49.743 2 DEBUG oslo_concurrency.lockutils [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Acquiring lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:49 np0005464891 nova_compute[259907]: 2025-10-01 16:55:49.744 2 DEBUG oslo_concurrency.lockutils [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:49 np0005464891 nova_compute[259907]: 2025-10-01 16:55:49.744 2 DEBUG oslo_concurrency.lockutils [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:49 np0005464891 nova_compute[259907]: 2025-10-01 16:55:49.745 2 INFO nova.compute.manager [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Terminating instance#033[00m
Oct  1 12:55:49 np0005464891 nova_compute[259907]: 2025-10-01 16:55:49.745 2 DEBUG nova.compute.manager [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 12:55:49 np0005464891 kernel: tapa11a83be-c1 (unregistering): left promiscuous mode
Oct  1 12:55:49 np0005464891 NetworkManager[44940]: <info>  [1759337749.7977] device (tapa11a83be-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 12:55:49 np0005464891 ovn_controller[152409]: 2025-10-01T16:55:49Z|00159|binding|INFO|Releasing lport a11a83be-c1d2-47f1-92f5-556ead33435e from this chassis (sb_readonly=0)
Oct  1 12:55:49 np0005464891 ovn_controller[152409]: 2025-10-01T16:55:49Z|00160|binding|INFO|Setting lport a11a83be-c1d2-47f1-92f5-556ead33435e down in Southbound
Oct  1 12:55:49 np0005464891 ovn_controller[152409]: 2025-10-01T16:55:49Z|00161|binding|INFO|Removing iface tapa11a83be-c1 ovn-installed in OVS
Oct  1 12:55:49 np0005464891 nova_compute[259907]: 2025-10-01 16:55:49.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:49.813 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:a9:d3 10.100.0.7'], port_security=['fa:16:3e:b5:a9:d3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'eef473c3-8fff-4cd4-a5f8-ef9b89b7439a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f871e885-fd92-424f-b0b3-6d810367183a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f6195d07ebe4991a5be01fb7ba2afdc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b6795c28-c4d2-4c23-9300-5a320196f859 fa9ad8e8-60f0-4036-9b1b-a940940c2e2e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d856bf9-7949-405b-8a21-06a5e8d1a429, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=a11a83be-c1d2-47f1-92f5-556ead33435e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:55:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:49.814 162546 INFO neutron.agent.ovn.metadata.agent [-] Port a11a83be-c1d2-47f1-92f5-556ead33435e in datapath f871e885-fd92-424f-b0b3-6d810367183a unbound from our chassis#033[00m
Oct  1 12:55:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:49.815 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f871e885-fd92-424f-b0b3-6d810367183a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 12:55:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:49.816 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[41d768bf-df9b-4f14-96cb-1955b9d9e24a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:49.816 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f871e885-fd92-424f-b0b3-6d810367183a namespace which is not needed anymore#033[00m
Oct  1 12:55:49 np0005464891 nova_compute[259907]: 2025-10-01 16:55:49.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:49 np0005464891 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Oct  1 12:55:49 np0005464891 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Consumed 18.599s CPU time.
Oct  1 12:55:49 np0005464891 systemd-machined[214891]: Machine qemu-15-instance-0000000f terminated.
Oct  1 12:55:49 np0005464891 nova_compute[259907]: 2025-10-01 16:55:49.977 2 DEBUG nova.network.neutron [-] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:55:49 np0005464891 neutron-haproxy-ovnmeta-f871e885-fd92-424f-b0b3-6d810367183a[291081]: [NOTICE]   (291098) : haproxy version is 2.8.14-c23fe91
Oct  1 12:55:49 np0005464891 neutron-haproxy-ovnmeta-f871e885-fd92-424f-b0b3-6d810367183a[291081]: [NOTICE]   (291098) : path to executable is /usr/sbin/haproxy
Oct  1 12:55:49 np0005464891 neutron-haproxy-ovnmeta-f871e885-fd92-424f-b0b3-6d810367183a[291081]: [WARNING]  (291098) : Exiting Master process...
Oct  1 12:55:49 np0005464891 neutron-haproxy-ovnmeta-f871e885-fd92-424f-b0b3-6d810367183a[291081]: [ALERT]    (291098) : Current worker (291102) exited with code 143 (Terminated)
Oct  1 12:55:49 np0005464891 neutron-haproxy-ovnmeta-f871e885-fd92-424f-b0b3-6d810367183a[291081]: [WARNING]  (291098) : All workers exited. Exiting... (0)
Oct  1 12:55:49 np0005464891 systemd[1]: libpod-b6fb98775c33ec1b897fe4e58adac9a4b36dadaaae327a8def4ba98427cbe9cb.scope: Deactivated successfully.
Oct  1 12:55:49 np0005464891 nova_compute[259907]: 2025-10-01 16:55:49.986 2 INFO nova.virt.libvirt.driver [-] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Instance destroyed successfully.#033[00m
Oct  1 12:55:49 np0005464891 nova_compute[259907]: 2025-10-01 16:55:49.987 2 DEBUG nova.objects.instance [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lazy-loading 'resources' on Instance uuid eef473c3-8fff-4cd4-a5f8-ef9b89b7439a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:55:49 np0005464891 podman[293110]: 2025-10-01 16:55:49.989218738 +0000 UTC m=+0.079308144 container died b6fb98775c33ec1b897fe4e58adac9a4b36dadaaae327a8def4ba98427cbe9cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f871e885-fd92-424f-b0b3-6d810367183a, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.002 2 INFO nova.compute.manager [-] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Took 0.65 seconds to deallocate network for instance.#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.011 2 DEBUG nova.virt.libvirt.vif [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T16:54:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-1844506122',display_name='tempest-SnapshotDataIntegrityTests-server-1844506122',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-1844506122',id=15,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEnrHCkN4tXnW26maD9DFNY004Z2A+ODEW0hXAFiLnkZTejfo4yGwI1auNgqnB9srNoiYwRYFXiPTQ/EiqFhro8485VJkjlEg8R1WH/ORqVOcXHDgWBC9f5dDJho5Yosg==',key_name='tempest-keypair-1851487111',keypairs=<?>,launch_index=0,launched_at=2025-10-01T16:54:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f6195d07ebe4991a5be01fb7ba2afdc',ramdisk_id='',reservation_id='r-n3jgfpdn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SnapshotDataIntegrityTests-1433560761',owner_user_name='tempest-SnapshotDataIntegrityTests-1433560761-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T16:54:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9dcf2401f8724e5b8337ca100dda75db',uuid=eef473c3-8fff-4cd4-a5f8-ef9b89b7439a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a11a83be-c1d2-47f1-92f5-556ead33435e", "address": "fa:16:3e:b5:a9:d3", "network": {"id": "f871e885-fd92-424f-b0b3-6d810367183a", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1323819196-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6195d07ebe4991a5be01fb7ba2afdc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa11a83be-c1", "ovs_interfaceid": "a11a83be-c1d2-47f1-92f5-556ead33435e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.014 2 DEBUG nova.network.os_vif_util [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Converting VIF {"id": "a11a83be-c1d2-47f1-92f5-556ead33435e", "address": "fa:16:3e:b5:a9:d3", "network": {"id": "f871e885-fd92-424f-b0b3-6d810367183a", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1323819196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6195d07ebe4991a5be01fb7ba2afdc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa11a83be-c1", "ovs_interfaceid": "a11a83be-c1d2-47f1-92f5-556ead33435e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.016 2 DEBUG nova.network.os_vif_util [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b5:a9:d3,bridge_name='br-int',has_traffic_filtering=True,id=a11a83be-c1d2-47f1-92f5-556ead33435e,network=Network(f871e885-fd92-424f-b0b3-6d810367183a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa11a83be-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.017 2 DEBUG os_vif [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:a9:d3,bridge_name='br-int',has_traffic_filtering=True,id=a11a83be-c1d2-47f1-92f5-556ead33435e,network=Network(f871e885-fd92-424f-b0b3-6d810367183a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa11a83be-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.020 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa11a83be-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:55:50 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b6fb98775c33ec1b897fe4e58adac9a4b36dadaaae327a8def4ba98427cbe9cb-userdata-shm.mount: Deactivated successfully.
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.030 2 INFO os_vif [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:a9:d3,bridge_name='br-int',has_traffic_filtering=True,id=a11a83be-c1d2-47f1-92f5-556ead33435e,network=Network(f871e885-fd92-424f-b0b3-6d810367183a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa11a83be-c1')#033[00m
Oct  1 12:55:50 np0005464891 systemd[1]: var-lib-containers-storage-overlay-e2efa1d5b9934c68b2afabd6fd57d68c5027290a65fa5f30a6f7f06122d5c451-merged.mount: Deactivated successfully.
Oct  1 12:55:50 np0005464891 podman[293110]: 2025-10-01 16:55:50.044741838 +0000 UTC m=+0.134831244 container cleanup b6fb98775c33ec1b897fe4e58adac9a4b36dadaaae327a8def4ba98427cbe9cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f871e885-fd92-424f-b0b3-6d810367183a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 12:55:50 np0005464891 systemd[1]: libpod-conmon-b6fb98775c33ec1b897fe4e58adac9a4b36dadaaae327a8def4ba98427cbe9cb.scope: Deactivated successfully.
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.075 2 DEBUG nova.compute.manager [req-3c534d8a-c194-4f0b-b3f8-559589ab6815 req-f58db60b-37b3-4533-bd9b-0730e2c009d5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Received event network-vif-deleted-fd73f99f-9c18-4285-83fe-e782392df556 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:55:50 np0005464891 podman[293165]: 2025-10-01 16:55:50.126852527 +0000 UTC m=+0.054448492 container remove b6fb98775c33ec1b897fe4e58adac9a4b36dadaaae327a8def4ba98427cbe9cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f871e885-fd92-424f-b0b3-6d810367183a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  1 12:55:50 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:50.139 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[dd5310dd-75ef-43b7-951b-505b60cb2352]: (4, ('Wed Oct  1 04:55:49 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f871e885-fd92-424f-b0b3-6d810367183a (b6fb98775c33ec1b897fe4e58adac9a4b36dadaaae327a8def4ba98427cbe9cb)\nb6fb98775c33ec1b897fe4e58adac9a4b36dadaaae327a8def4ba98427cbe9cb\nWed Oct  1 04:55:50 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f871e885-fd92-424f-b0b3-6d810367183a (b6fb98775c33ec1b897fe4e58adac9a4b36dadaaae327a8def4ba98427cbe9cb)\nb6fb98775c33ec1b897fe4e58adac9a4b36dadaaae327a8def4ba98427cbe9cb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:50 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:50.141 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[32684c37-785b-4060-ab93-6d14935c9359]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:50 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:50.143 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf871e885-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:50 np0005464891 kernel: tapf871e885-f0: left promiscuous mode
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:50 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:50.165 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[a3270374-6171-4187-b0d8-cf0b468ed363]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:55:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2658261633' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:55:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:55:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2658261633' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.199 2 DEBUG nova.compute.manager [req-ec9611ca-eb31-4c69-b78f-7179d1273874 req-14ac1402-81be-44d1-807d-4aff8efee6f5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Received event network-vif-plugged-fd73f99f-9c18-4285-83fe-e782392df556 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.200 2 DEBUG oslo_concurrency.lockutils [req-ec9611ca-eb31-4c69-b78f-7179d1273874 req-14ac1402-81be-44d1-807d-4aff8efee6f5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "c6b5a948-5763-4847-a02b-6010ab49c3da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.200 2 DEBUG oslo_concurrency.lockutils [req-ec9611ca-eb31-4c69-b78f-7179d1273874 req-14ac1402-81be-44d1-807d-4aff8efee6f5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "c6b5a948-5763-4847-a02b-6010ab49c3da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.200 2 DEBUG oslo_concurrency.lockutils [req-ec9611ca-eb31-4c69-b78f-7179d1273874 req-14ac1402-81be-44d1-807d-4aff8efee6f5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "c6b5a948-5763-4847-a02b-6010ab49c3da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.200 2 DEBUG nova.compute.manager [req-ec9611ca-eb31-4c69-b78f-7179d1273874 req-14ac1402-81be-44d1-807d-4aff8efee6f5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] No waiting events found dispatching network-vif-plugged-fd73f99f-9c18-4285-83fe-e782392df556 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.201 2 WARNING nova.compute.manager [req-ec9611ca-eb31-4c69-b78f-7179d1273874 req-14ac1402-81be-44d1-807d-4aff8efee6f5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Received unexpected event network-vif-plugged-fd73f99f-9c18-4285-83fe-e782392df556 for instance with vm_state active and task_state deleting.#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.201 2 DEBUG nova.compute.manager [req-ec9611ca-eb31-4c69-b78f-7179d1273874 req-14ac1402-81be-44d1-807d-4aff8efee6f5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Received event network-vif-unplugged-a11a83be-c1d2-47f1-92f5-556ead33435e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.201 2 DEBUG oslo_concurrency.lockutils [req-ec9611ca-eb31-4c69-b78f-7179d1273874 req-14ac1402-81be-44d1-807d-4aff8efee6f5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.201 2 DEBUG oslo_concurrency.lockutils [req-ec9611ca-eb31-4c69-b78f-7179d1273874 req-14ac1402-81be-44d1-807d-4aff8efee6f5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.201 2 DEBUG oslo_concurrency.lockutils [req-ec9611ca-eb31-4c69-b78f-7179d1273874 req-14ac1402-81be-44d1-807d-4aff8efee6f5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.201 2 DEBUG nova.compute.manager [req-ec9611ca-eb31-4c69-b78f-7179d1273874 req-14ac1402-81be-44d1-807d-4aff8efee6f5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] No waiting events found dispatching network-vif-unplugged-a11a83be-c1d2-47f1-92f5-556ead33435e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.201 2 DEBUG nova.compute.manager [req-ec9611ca-eb31-4c69-b78f-7179d1273874 req-14ac1402-81be-44d1-807d-4aff8efee6f5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Received event network-vif-unplugged-a11a83be-c1d2-47f1-92f5-556ead33435e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 12:55:50 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:50.203 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[41798110-347c-4373-b30f-80e21d405a4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:50 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:50.205 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[6c12a523-89fa-4d1e-b78a-5b3443d444d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:50 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:50.220 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[8c1cf40e-e973-4c9b-929e-632b5e30a4ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460732, 'reachable_time': 33096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293184, 'error': None, 'target': 'ovnmeta-f871e885-fd92-424f-b0b3-6d810367183a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:50 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:50.222 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f871e885-fd92-424f-b0b3-6d810367183a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 12:55:50 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:55:50.222 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[5f8a038c-3715-45bf-9c29-d8bdb5ec4773]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:55:50 np0005464891 systemd[1]: run-netns-ovnmeta\x2df871e885\x2dfd92\x2d424f\x2db0b3\x2d6d810367183a.mount: Deactivated successfully.
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.237 2 INFO nova.compute.manager [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Took 0.23 seconds to detach 1 volumes for instance.#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.276 2 DEBUG oslo_concurrency.lockutils [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.277 2 DEBUG oslo_concurrency.lockutils [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.347 2 DEBUG oslo_concurrency.processutils [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.450 2 INFO nova.virt.libvirt.driver [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Deleting instance files /var/lib/nova/instances/eef473c3-8fff-4cd4-a5f8-ef9b89b7439a_del#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.451 2 INFO nova.virt.libvirt.driver [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Deletion of /var/lib/nova/instances/eef473c3-8fff-4cd4-a5f8-ef9b89b7439a_del complete#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.520 2 INFO nova.compute.manager [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.521 2 DEBUG oslo.service.loopingcall [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.521 2 DEBUG nova.compute.manager [-] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.521 2 DEBUG nova.network.neutron [-] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 12:55:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:55:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/526485519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.781 2 DEBUG oslo_concurrency.processutils [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.787 2 DEBUG nova.compute.provider_tree [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.805 2 DEBUG nova.scheduler.client.report [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.832 2 DEBUG oslo_concurrency.lockutils [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.865 2 INFO nova.scheduler.client.report [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Deleted allocations for instance c6b5a948-5763-4847-a02b-6010ab49c3da#033[00m
Oct  1 12:55:50 np0005464891 nova_compute[259907]: 2025-10-01 16:55:50.937 2 DEBUG oslo_concurrency.lockutils [None req-49b4e0d5-e635-4b60-a4ba-8422bd01657b 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "c6b5a948-5763-4847-a02b-6010ab49c3da" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1645: 321 pgs: 321 active+clean; 167 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 3.0 KiB/s wr, 72 op/s
Oct  1 12:55:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:55:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e405 do_prune osdmap full prune enabled
Oct  1 12:55:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e406 e406: 3 total, 3 up, 3 in
Oct  1 12:55:51 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e406: 3 total, 3 up, 3 in
Oct  1 12:55:51 np0005464891 nova_compute[259907]: 2025-10-01 16:55:51.445 2 DEBUG nova.network.neutron [-] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:55:51 np0005464891 nova_compute[259907]: 2025-10-01 16:55:51.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:51 np0005464891 nova_compute[259907]: 2025-10-01 16:55:51.465 2 INFO nova.compute.manager [-] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Took 0.94 seconds to deallocate network for instance.#033[00m
Oct  1 12:55:51 np0005464891 nova_compute[259907]: 2025-10-01 16:55:51.508 2 DEBUG oslo_concurrency.lockutils [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:51 np0005464891 nova_compute[259907]: 2025-10-01 16:55:51.509 2 DEBUG oslo_concurrency.lockutils [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:51 np0005464891 nova_compute[259907]: 2025-10-01 16:55:51.560 2 DEBUG oslo_concurrency.processutils [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:55:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:55:51 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3077943603' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:55:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:55:51 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3077943603' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:55:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:55:51 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/444534835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:55:51 np0005464891 nova_compute[259907]: 2025-10-01 16:55:51.968 2 DEBUG oslo_concurrency.processutils [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:55:51 np0005464891 nova_compute[259907]: 2025-10-01 16:55:51.976 2 DEBUG nova.compute.provider_tree [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:55:51 np0005464891 nova_compute[259907]: 2025-10-01 16:55:51.996 2 DEBUG nova.scheduler.client.report [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:55:52 np0005464891 nova_compute[259907]: 2025-10-01 16:55:52.015 2 DEBUG oslo_concurrency.lockutils [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.506s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:52 np0005464891 nova_compute[259907]: 2025-10-01 16:55:52.047 2 INFO nova.scheduler.client.report [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Deleted allocations for instance eef473c3-8fff-4cd4-a5f8-ef9b89b7439a#033[00m
Oct  1 12:55:52 np0005464891 nova_compute[259907]: 2025-10-01 16:55:52.140 2 DEBUG oslo_concurrency.lockutils [None req-68083a66-7844-49c0-acb8-1e50846ed120 9dcf2401f8724e5b8337ca100dda75db 6f6195d07ebe4991a5be01fb7ba2afdc - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.397s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:52 np0005464891 nova_compute[259907]: 2025-10-01 16:55:52.156 2 DEBUG nova.compute.manager [req-85034969-1612-4169-a387-bc285e4cf434 req-d1f77803-3ea2-49aa-85d0-71b9d87c7742 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Received event network-vif-deleted-a11a83be-c1d2-47f1-92f5-556ead33435e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:55:52 np0005464891 nova_compute[259907]: 2025-10-01 16:55:52.263 2 DEBUG nova.compute.manager [req-43d35c13-3feb-4e12-83af-24cc575e98af req-d0febcc7-1e2a-417b-8a7c-304453e14927 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Received event network-vif-plugged-a11a83be-c1d2-47f1-92f5-556ead33435e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:55:52 np0005464891 nova_compute[259907]: 2025-10-01 16:55:52.264 2 DEBUG oslo_concurrency.lockutils [req-43d35c13-3feb-4e12-83af-24cc575e98af req-d0febcc7-1e2a-417b-8a7c-304453e14927 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:55:52 np0005464891 nova_compute[259907]: 2025-10-01 16:55:52.264 2 DEBUG oslo_concurrency.lockutils [req-43d35c13-3feb-4e12-83af-24cc575e98af req-d0febcc7-1e2a-417b-8a7c-304453e14927 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:55:52 np0005464891 nova_compute[259907]: 2025-10-01 16:55:52.264 2 DEBUG oslo_concurrency.lockutils [req-43d35c13-3feb-4e12-83af-24cc575e98af req-d0febcc7-1e2a-417b-8a7c-304453e14927 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "eef473c3-8fff-4cd4-a5f8-ef9b89b7439a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:55:52 np0005464891 nova_compute[259907]: 2025-10-01 16:55:52.264 2 DEBUG nova.compute.manager [req-43d35c13-3feb-4e12-83af-24cc575e98af req-d0febcc7-1e2a-417b-8a7c-304453e14927 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] No waiting events found dispatching network-vif-plugged-a11a83be-c1d2-47f1-92f5-556ead33435e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:55:52 np0005464891 nova_compute[259907]: 2025-10-01 16:55:52.265 2 WARNING nova.compute.manager [req-43d35c13-3feb-4e12-83af-24cc575e98af req-d0febcc7-1e2a-417b-8a7c-304453e14927 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Received unexpected event network-vif-plugged-a11a83be-c1d2-47f1-92f5-556ead33435e for instance with vm_state deleted and task_state None.#033[00m
Oct  1 12:55:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e406 do_prune osdmap full prune enabled
Oct  1 12:55:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e407 e407: 3 total, 3 up, 3 in
Oct  1 12:55:52 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e407: 3 total, 3 up, 3 in
Oct  1 12:55:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1648: 321 pgs: 321 active+clean; 126 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 5.8 KiB/s wr, 163 op/s
Oct  1 12:55:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e407 do_prune osdmap full prune enabled
Oct  1 12:55:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e408 e408: 3 total, 3 up, 3 in
Oct  1 12:55:54 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e408: 3 total, 3 up, 3 in
Oct  1 12:55:55 np0005464891 nova_compute[259907]: 2025-10-01 16:55:55.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1650: 321 pgs: 321 active+clean; 88 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 9.8 KiB/s wr, 227 op/s
Oct  1 12:55:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:55:56 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/838138142' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:55:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:55:56 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/838138142' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:55:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:55:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e408 do_prune osdmap full prune enabled
Oct  1 12:55:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e409 e409: 3 total, 3 up, 3 in
Oct  1 12:55:56 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e409: 3 total, 3 up, 3 in
Oct  1 12:55:56 np0005464891 nova_compute[259907]: 2025-10-01 16:55:56.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:56 np0005464891 nova_compute[259907]: 2025-10-01 16:55:56.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:57 np0005464891 nova_compute[259907]: 2025-10-01 16:55:57.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:55:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1652: 321 pgs: 321 active+clean; 88 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 7.5 KiB/s wr, 153 op/s
Oct  1 12:55:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1653: 321 pgs: 321 active+clean; 88 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 7.0 KiB/s wr, 138 op/s
Oct  1 12:55:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e409 do_prune osdmap full prune enabled
Oct  1 12:55:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e410 e410: 3 total, 3 up, 3 in
Oct  1 12:55:59 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e410: 3 total, 3 up, 3 in
Oct  1 12:55:59 np0005464891 podman[293235]: 2025-10-01 16:55:59.943891359 +0000 UTC m=+0.055225523 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  1 12:56:00 np0005464891 nova_compute[259907]: 2025-10-01 16:56:00.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e410 do_prune osdmap full prune enabled
Oct  1 12:56:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e411 e411: 3 total, 3 up, 3 in
Oct  1 12:56:00 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e411: 3 total, 3 up, 3 in
Oct  1 12:56:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1656: 321 pgs: 321 active+clean; 113 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 825 KiB/s rd, 1.7 MiB/s wr, 99 op/s
Oct  1 12:56:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:56:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e411 do_prune osdmap full prune enabled
Oct  1 12:56:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e412 e412: 3 total, 3 up, 3 in
Oct  1 12:56:01 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e412: 3 total, 3 up, 3 in
Oct  1 12:56:01 np0005464891 nova_compute[259907]: 2025-10-01 16:56:01.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:02 np0005464891 nova_compute[259907]: 2025-10-01 16:56:02.747 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759337747.7462955, c6b5a948-5763-4847-a02b-6010ab49c3da => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:56:02 np0005464891 nova_compute[259907]: 2025-10-01 16:56:02.748 2 INFO nova.compute.manager [-] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] VM Stopped (Lifecycle Event)#033[00m
Oct  1 12:56:02 np0005464891 nova_compute[259907]: 2025-10-01 16:56:02.767 2 DEBUG nova.compute.manager [None req-83617bc3-d723-4a17-a03e-e51249e45d0a - - - - - -] [instance: c6b5a948-5763-4847-a02b-6010ab49c3da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:56:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:56:02 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2142182932' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:56:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:56:02 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2142182932' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:56:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1658: 321 pgs: 321 active+clean; 134 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 868 KiB/s rd, 3.5 MiB/s wr, 156 op/s
Oct  1 12:56:04 np0005464891 nova_compute[259907]: 2025-10-01 16:56:04.171 2 DEBUG oslo_concurrency.lockutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:56:04 np0005464891 nova_compute[259907]: 2025-10-01 16:56:04.171 2 DEBUG oslo_concurrency.lockutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:56:04 np0005464891 nova_compute[259907]: 2025-10-01 16:56:04.189 2 DEBUG nova.compute.manager [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 12:56:04 np0005464891 nova_compute[259907]: 2025-10-01 16:56:04.261 2 DEBUG oslo_concurrency.lockutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:56:04 np0005464891 nova_compute[259907]: 2025-10-01 16:56:04.262 2 DEBUG oslo_concurrency.lockutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:56:04 np0005464891 nova_compute[259907]: 2025-10-01 16:56:04.271 2 DEBUG nova.virt.hardware [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 12:56:04 np0005464891 nova_compute[259907]: 2025-10-01 16:56:04.272 2 INFO nova.compute.claims [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 12:56:04 np0005464891 nova_compute[259907]: 2025-10-01 16:56:04.444 2 DEBUG oslo_concurrency.processutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:56:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:56:04 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1753685895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:56:04 np0005464891 nova_compute[259907]: 2025-10-01 16:56:04.852 2 DEBUG oslo_concurrency.processutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:56:04 np0005464891 nova_compute[259907]: 2025-10-01 16:56:04.858 2 DEBUG nova.compute.provider_tree [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:56:04 np0005464891 nova_compute[259907]: 2025-10-01 16:56:04.876 2 DEBUG nova.scheduler.client.report [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:56:04 np0005464891 nova_compute[259907]: 2025-10-01 16:56:04.906 2 DEBUG oslo_concurrency.lockutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:56:04 np0005464891 nova_compute[259907]: 2025-10-01 16:56:04.907 2 DEBUG nova.compute.manager [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 12:56:04 np0005464891 nova_compute[259907]: 2025-10-01 16:56:04.959 2 DEBUG nova.compute.manager [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 12:56:04 np0005464891 nova_compute[259907]: 2025-10-01 16:56:04.960 2 DEBUG nova.network.neutron [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 12:56:04 np0005464891 nova_compute[259907]: 2025-10-01 16:56:04.983 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759337749.9817944, eef473c3-8fff-4cd4-a5f8-ef9b89b7439a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:56:04 np0005464891 nova_compute[259907]: 2025-10-01 16:56:04.983 2 INFO nova.compute.manager [-] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] VM Stopped (Lifecycle Event)#033[00m
Oct  1 12:56:04 np0005464891 nova_compute[259907]: 2025-10-01 16:56:04.987 2 INFO nova.virt.libvirt.driver [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 12:56:05 np0005464891 nova_compute[259907]: 2025-10-01 16:56:05.004 2 DEBUG nova.compute.manager [None req-bda89403-7ebf-4b4f-a080-08c20f751e25 - - - - - -] [instance: eef473c3-8fff-4cd4-a5f8-ef9b89b7439a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:56:05 np0005464891 nova_compute[259907]: 2025-10-01 16:56:05.009 2 DEBUG nova.compute.manager [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 12:56:05 np0005464891 nova_compute[259907]: 2025-10-01 16:56:05.067 2 INFO nova.virt.block_device [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Booting with volume snapshot 0d8e5d3d-2ed5-47f9-808f-0a4407b16be5 at /dev/vda#033[00m
Oct  1 12:56:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1659: 321 pgs: 321 active+clean; 134 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 872 KiB/s rd, 3.5 MiB/s wr, 165 op/s
Oct  1 12:56:05 np0005464891 nova_compute[259907]: 2025-10-01 16:56:05.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:05 np0005464891 nova_compute[259907]: 2025-10-01 16:56:05.302 2 DEBUG nova.policy [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1280014cdfb74333ae8d71c78116e646', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8318b65fa88942a99937a0d198a04a9c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 12:56:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e412 do_prune osdmap full prune enabled
Oct  1 12:56:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e413 e413: 3 total, 3 up, 3 in
Oct  1 12:56:05 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e413: 3 total, 3 up, 3 in
Oct  1 12:56:06 np0005464891 nova_compute[259907]: 2025-10-01 16:56:06.115 2 DEBUG nova.network.neutron [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Successfully created port: 4c696563-943f-4bb5-bcc0-ae044321b33b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 12:56:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:56:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e413 do_prune osdmap full prune enabled
Oct  1 12:56:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e414 e414: 3 total, 3 up, 3 in
Oct  1 12:56:06 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e414: 3 total, 3 up, 3 in
Oct  1 12:56:06 np0005464891 nova_compute[259907]: 2025-10-01 16:56:06.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1662: 321 pgs: 321 active+clean; 134 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Oct  1 12:56:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e414 do_prune osdmap full prune enabled
Oct  1 12:56:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e415 e415: 3 total, 3 up, 3 in
Oct  1 12:56:07 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e415: 3 total, 3 up, 3 in
Oct  1 12:56:08 np0005464891 nova_compute[259907]: 2025-10-01 16:56:08.176 2 DEBUG nova.network.neutron [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Successfully updated port: 4c696563-943f-4bb5-bcc0-ae044321b33b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 12:56:08 np0005464891 nova_compute[259907]: 2025-10-01 16:56:08.191 2 DEBUG oslo_concurrency.lockutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "refresh_cache-b408cbe8-e33e-4d19-9bec-ea1664d387d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:56:08 np0005464891 nova_compute[259907]: 2025-10-01 16:56:08.191 2 DEBUG oslo_concurrency.lockutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquired lock "refresh_cache-b408cbe8-e33e-4d19-9bec-ea1664d387d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:56:08 np0005464891 nova_compute[259907]: 2025-10-01 16:56:08.191 2 DEBUG nova.network.neutron [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 12:56:08 np0005464891 nova_compute[259907]: 2025-10-01 16:56:08.287 2 DEBUG nova.compute.manager [req-4639f006-eeae-466c-b686-0de9fe7887a8 req-000ed9d7-ddca-4f73-aa8b-ad54839b67ed af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Received event network-changed-4c696563-943f-4bb5-bcc0-ae044321b33b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:56:08 np0005464891 nova_compute[259907]: 2025-10-01 16:56:08.287 2 DEBUG nova.compute.manager [req-4639f006-eeae-466c-b686-0de9fe7887a8 req-000ed9d7-ddca-4f73-aa8b-ad54839b67ed af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Refreshing instance network info cache due to event network-changed-4c696563-943f-4bb5-bcc0-ae044321b33b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:56:08 np0005464891 nova_compute[259907]: 2025-10-01 16:56:08.288 2 DEBUG oslo_concurrency.lockutils [req-4639f006-eeae-466c-b686-0de9fe7887a8 req-000ed9d7-ddca-4f73-aa8b-ad54839b67ed af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-b408cbe8-e33e-4d19-9bec-ea1664d387d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:56:08 np0005464891 nova_compute[259907]: 2025-10-01 16:56:08.400 2 DEBUG nova.network.neutron [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 12:56:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1664: 321 pgs: 321 active+clean; 134 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.3 KiB/s wr, 22 op/s
Oct  1 12:56:09 np0005464891 nova_compute[259907]: 2025-10-01 16:56:09.315 2 DEBUG os_brick.utils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 12:56:09 np0005464891 nova_compute[259907]: 2025-10-01 16:56:09.316 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:56:09 np0005464891 nova_compute[259907]: 2025-10-01 16:56:09.335 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:56:09 np0005464891 nova_compute[259907]: 2025-10-01 16:56:09.335 741 DEBUG oslo.privsep.daemon [-] privsep: reply[db12e929-2397-45c0-8327-3281ce4bf2f8]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:09 np0005464891 nova_compute[259907]: 2025-10-01 16:56:09.336 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:56:09 np0005464891 nova_compute[259907]: 2025-10-01 16:56:09.346 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:56:09 np0005464891 nova_compute[259907]: 2025-10-01 16:56:09.346 741 DEBUG oslo.privsep.daemon [-] privsep: reply[bf5ae344-97b6-4508-a951-0d5aa851b5f8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:09 np0005464891 nova_compute[259907]: 2025-10-01 16:56:09.348 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:56:09 np0005464891 nova_compute[259907]: 2025-10-01 16:56:09.356 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:56:09 np0005464891 nova_compute[259907]: 2025-10-01 16:56:09.357 741 DEBUG oslo.privsep.daemon [-] privsep: reply[83258ad5-4a38-4b93-bfc0-f4b08dbce90d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:09 np0005464891 nova_compute[259907]: 2025-10-01 16:56:09.358 741 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4e0b48-8cfe-4ab3-8206-e6861723d096]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:09 np0005464891 nova_compute[259907]: 2025-10-01 16:56:09.358 2 DEBUG oslo_concurrency.processutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:56:09 np0005464891 nova_compute[259907]: 2025-10-01 16:56:09.378 2 DEBUG oslo_concurrency.processutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:56:09 np0005464891 nova_compute[259907]: 2025-10-01 16:56:09.382 2 DEBUG os_brick.initiator.connectors.lightos [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 12:56:09 np0005464891 nova_compute[259907]: 2025-10-01 16:56:09.382 2 DEBUG os_brick.initiator.connectors.lightos [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 12:56:09 np0005464891 nova_compute[259907]: 2025-10-01 16:56:09.383 2 DEBUG os_brick.initiator.connectors.lightos [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 12:56:09 np0005464891 nova_compute[259907]: 2025-10-01 16:56:09.384 2 DEBUG os_brick.utils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] <== get_connector_properties: return (68ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 12:56:09 np0005464891 nova_compute[259907]: 2025-10-01 16:56:09.384 2 DEBUG nova.virt.block_device [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Updating existing volume attachment record: db59816f-2fc9-4603-9df3-badc12eb4c2f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 12:56:09 np0005464891 nova_compute[259907]: 2025-10-01 16:56:09.917 2 DEBUG nova.network.neutron [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Updating instance_info_cache with network_info: [{"id": "4c696563-943f-4bb5-bcc0-ae044321b33b", "address": "fa:16:3e:91:e9:db", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c696563-94", "ovs_interfaceid": "4c696563-943f-4bb5-bcc0-ae044321b33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:56:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:56:09 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/493541081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:56:10 np0005464891 podman[293288]: 2025-10-01 16:56:10.009562562 +0000 UTC m=+0.111490134 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.012 2 DEBUG oslo_concurrency.lockutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Releasing lock "refresh_cache-b408cbe8-e33e-4d19-9bec-ea1664d387d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.012 2 DEBUG nova.compute.manager [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Instance network_info: |[{"id": "4c696563-943f-4bb5-bcc0-ae044321b33b", "address": "fa:16:3e:91:e9:db", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c696563-94", "ovs_interfaceid": "4c696563-943f-4bb5-bcc0-ae044321b33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.013 2 DEBUG oslo_concurrency.lockutils [req-4639f006-eeae-466c-b686-0de9fe7887a8 req-000ed9d7-ddca-4f73-aa8b-ad54839b67ed af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-b408cbe8-e33e-4d19-9bec-ea1664d387d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.013 2 DEBUG nova.network.neutron [req-4639f006-eeae-466c-b686-0de9fe7887a8 req-000ed9d7-ddca-4f73-aa8b-ad54839b67ed af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Refreshing network info cache for port 4c696563-943f-4bb5-bcc0-ae044321b33b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.819 2 DEBUG nova.compute.manager [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.820 2 DEBUG nova.virt.libvirt.driver [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.821 2 INFO nova.virt.libvirt.driver [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Creating image(s)#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.821 2 DEBUG nova.virt.libvirt.driver [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.821 2 DEBUG nova.virt.libvirt.driver [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Ensure instance console log exists: /var/lib/nova/instances/b408cbe8-e33e-4d19-9bec-ea1664d387d3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.822 2 DEBUG oslo_concurrency.lockutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.822 2 DEBUG oslo_concurrency.lockutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.822 2 DEBUG oslo_concurrency.lockutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.824 2 DEBUG nova.virt.libvirt.driver [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Start _get_guest_xml network_info=[{"id": "4c696563-943f-4bb5-bcc0-ae044321b33b", "address": "fa:16:3e:91:e9:db", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c696563-94", "ovs_interfaceid": "4c696563-943f-4bb5-bcc0-ae044321b33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'mount_device': '/dev/vda', 'attachment_id': 'db59816f-2fc9-4603-9df3-badc12eb4c2f', 'disk_bus': 'virtio', 'delete_on_termination': True, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-d77cea6c-1f8e-4472-95f1-26f306e1d9c6', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'd77cea6c-1f8e-4472-95f1-26f306e1d9c6', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'b408cbe8-e33e-4d19-9bec-ea1664d387d3', 'attached_at': '', 'detached_at': '', 'volume_id': 'd77cea6c-1f8e-4472-95f1-26f306e1d9c6', 'serial': 'd77cea6c-1f8e-4472-95f1-26f306e1d9c6'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.829 2 WARNING nova.virt.libvirt.driver [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.833 2 DEBUG nova.virt.libvirt.host [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.833 2 DEBUG nova.virt.libvirt.host [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.844 2 DEBUG nova.virt.libvirt.host [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.844 2 DEBUG nova.virt.libvirt.host [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.845 2 DEBUG nova.virt.libvirt.driver [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.845 2 DEBUG nova.virt.hardware [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.845 2 DEBUG nova.virt.hardware [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.846 2 DEBUG nova.virt.hardware [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.846 2 DEBUG nova.virt.hardware [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.846 2 DEBUG nova.virt.hardware [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.846 2 DEBUG nova.virt.hardware [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.846 2 DEBUG nova.virt.hardware [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.847 2 DEBUG nova.virt.hardware [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.847 2 DEBUG nova.virt.hardware [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.847 2 DEBUG nova.virt.hardware [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.847 2 DEBUG nova.virt.hardware [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.876 2 DEBUG nova.storage.rbd_utils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] rbd image b408cbe8-e33e-4d19-9bec-ea1664d387d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:56:10 np0005464891 nova_compute[259907]: 2025-10-01 16:56:10.881 2 DEBUG oslo_concurrency.processutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:56:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1665: 321 pgs: 321 active+clean; 134 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 3.7 KiB/s wr, 69 op/s
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.298 2 DEBUG nova.network.neutron [req-4639f006-eeae-466c-b686-0de9fe7887a8 req-000ed9d7-ddca-4f73-aa8b-ad54839b67ed af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Updated VIF entry in instance network info cache for port 4c696563-943f-4bb5-bcc0-ae044321b33b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.299 2 DEBUG nova.network.neutron [req-4639f006-eeae-466c-b686-0de9fe7887a8 req-000ed9d7-ddca-4f73-aa8b-ad54839b67ed af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Updating instance_info_cache with network_info: [{"id": "4c696563-943f-4bb5-bcc0-ae044321b33b", "address": "fa:16:3e:91:e9:db", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c696563-94", "ovs_interfaceid": "4c696563-943f-4bb5-bcc0-ae044321b33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:56:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:56:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2746130723' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.324 2 DEBUG oslo_concurrency.lockutils [req-4639f006-eeae-466c-b686-0de9fe7887a8 req-000ed9d7-ddca-4f73-aa8b-ad54839b67ed af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-b408cbe8-e33e-4d19-9bec-ea1664d387d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.337 2 DEBUG oslo_concurrency.processutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:56:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.530 2 DEBUG nova.virt.libvirt.vif [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:56:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-401507390',display_name='tempest-TestVolumeBootPattern-server-401507390',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-401507390',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8318b65fa88942a99937a0d198a04a9c',ramdisk_id='',reservation_id='r-2bd3va35',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-582136054',owner_user_name='tempest-TestVolumeBootPattern-582136054-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,up
dated_at=2025-10-01T16:56:05Z,user_data=None,user_id='1280014cdfb74333ae8d71c78116e646',uuid=b408cbe8-e33e-4d19-9bec-ea1664d387d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c696563-943f-4bb5-bcc0-ae044321b33b", "address": "fa:16:3e:91:e9:db", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c696563-94", "ovs_interfaceid": "4c696563-943f-4bb5-bcc0-ae044321b33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.531 2 DEBUG nova.network.os_vif_util [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converting VIF {"id": "4c696563-943f-4bb5-bcc0-ae044321b33b", "address": "fa:16:3e:91:e9:db", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c696563-94", "ovs_interfaceid": "4c696563-943f-4bb5-bcc0-ae044321b33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.532 2 DEBUG nova.network.os_vif_util [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=4c696563-943f-4bb5-bcc0-ae044321b33b,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c696563-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.534 2 DEBUG nova.objects.instance [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lazy-loading 'pci_devices' on Instance uuid b408cbe8-e33e-4d19-9bec-ea1664d387d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.612 2 DEBUG nova.virt.libvirt.driver [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] End _get_guest_xml xml=<domain type="kvm">
Oct  1 12:56:11 np0005464891 nova_compute[259907]:  <uuid>b408cbe8-e33e-4d19-9bec-ea1664d387d3</uuid>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:  <name>instance-00000011</name>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <nova:name>tempest-TestVolumeBootPattern-server-401507390</nova:name>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 16:56:10</nova:creationTime>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 12:56:11 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:        <nova:user uuid="1280014cdfb74333ae8d71c78116e646">tempest-TestVolumeBootPattern-582136054-project-member</nova:user>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:        <nova:project uuid="8318b65fa88942a99937a0d198a04a9c">tempest-TestVolumeBootPattern-582136054</nova:project>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:        <nova:port uuid="4c696563-943f-4bb5-bcc0-ae044321b33b">
Oct  1 12:56:11 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <system>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <entry name="serial">b408cbe8-e33e-4d19-9bec-ea1664d387d3</entry>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <entry name="uuid">b408cbe8-e33e-4d19-9bec-ea1664d387d3</entry>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    </system>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:  <os>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:  </clock>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/b408cbe8-e33e-4d19-9bec-ea1664d387d3_disk.config">
Oct  1 12:56:11 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:56:11 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="volumes/volume-d77cea6c-1f8e-4472-95f1-26f306e1d9c6">
Oct  1 12:56:11 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:56:11 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <serial>d77cea6c-1f8e-4472-95f1-26f306e1d9c6</serial>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:91:e9:db"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <target dev="tap4c696563-94"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/b408cbe8-e33e-4d19-9bec-ea1664d387d3/console.log" append="off"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    </serial>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <video>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 12:56:11 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 12:56:11 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:56:11 np0005464891 nova_compute[259907]: </domain>
Oct  1 12:56:11 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.613 2 DEBUG nova.compute.manager [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Preparing to wait for external event network-vif-plugged-4c696563-943f-4bb5-bcc0-ae044321b33b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.613 2 DEBUG oslo_concurrency.lockutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.614 2 DEBUG oslo_concurrency.lockutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.614 2 DEBUG oslo_concurrency.lockutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.615 2 DEBUG nova.virt.libvirt.vif [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:56:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-401507390',display_name='tempest-TestVolumeBootPattern-server-401507390',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-401507390',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8318b65fa88942a99937a0d198a04a9c',ramdisk_id='',reservation_id='r-2bd3va35',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-582136054',owner_user_name='tempest-TestVolumeBootPattern-582136054-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_cer
ts=None,updated_at=2025-10-01T16:56:05Z,user_data=None,user_id='1280014cdfb74333ae8d71c78116e646',uuid=b408cbe8-e33e-4d19-9bec-ea1664d387d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c696563-943f-4bb5-bcc0-ae044321b33b", "address": "fa:16:3e:91:e9:db", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c696563-94", "ovs_interfaceid": "4c696563-943f-4bb5-bcc0-ae044321b33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.615 2 DEBUG nova.network.os_vif_util [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converting VIF {"id": "4c696563-943f-4bb5-bcc0-ae044321b33b", "address": "fa:16:3e:91:e9:db", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c696563-94", "ovs_interfaceid": "4c696563-943f-4bb5-bcc0-ae044321b33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.615 2 DEBUG nova.network.os_vif_util [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=4c696563-943f-4bb5-bcc0-ae044321b33b,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c696563-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.616 2 DEBUG os_vif [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=4c696563-943f-4bb5-bcc0-ae044321b33b,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c696563-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.617 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.617 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.621 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c696563-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.621 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4c696563-94, col_values=(('external_ids', {'iface-id': '4c696563-943f-4bb5-bcc0-ae044321b33b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:91:e9:db', 'vm-uuid': 'b408cbe8-e33e-4d19-9bec-ea1664d387d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:11 np0005464891 NetworkManager[44940]: <info>  [1759337771.6235] manager: (tap4c696563-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.631 2 INFO os_vif [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=4c696563-943f-4bb5-bcc0-ae044321b33b,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c696563-94')#033[00m
Oct  1 12:56:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:56:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3553101852' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:56:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:56:11 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3553101852' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.872 2 DEBUG nova.virt.libvirt.driver [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.873 2 DEBUG nova.virt.libvirt.driver [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.874 2 DEBUG nova.virt.libvirt.driver [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] No VIF found with MAC fa:16:3e:91:e9:db, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.874 2 INFO nova.virt.libvirt.driver [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Using config drive#033[00m
Oct  1 12:56:11 np0005464891 nova_compute[259907]: 2025-10-01 16:56:11.989 2 DEBUG nova.storage.rbd_utils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] rbd image b408cbe8-e33e-4d19-9bec-ea1664d387d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:56:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:56:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:56:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:56:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:56:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:56:12
Oct  1 12:56:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:56:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:56:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['images', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta']
Oct  1 12:56:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:56:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:56:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:56:12 np0005464891 nova_compute[259907]: 2025-10-01 16:56:12.374 2 INFO nova.virt.libvirt.driver [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Creating config drive at /var/lib/nova/instances/b408cbe8-e33e-4d19-9bec-ea1664d387d3/disk.config#033[00m
Oct  1 12:56:12 np0005464891 nova_compute[259907]: 2025-10-01 16:56:12.383 2 DEBUG oslo_concurrency.processutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b408cbe8-e33e-4d19-9bec-ea1664d387d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp31sgrg9t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:56:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:56:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:56:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:56:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:56:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:56:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:56:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:56:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:56:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:56:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:56:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:12.457 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:56:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:12.458 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:56:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:12.458 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:56:12 np0005464891 nova_compute[259907]: 2025-10-01 16:56:12.534 2 DEBUG oslo_concurrency.processutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b408cbe8-e33e-4d19-9bec-ea1664d387d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp31sgrg9t" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:56:12 np0005464891 nova_compute[259907]: 2025-10-01 16:56:12.567 2 DEBUG nova.storage.rbd_utils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] rbd image b408cbe8-e33e-4d19-9bec-ea1664d387d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:56:12 np0005464891 nova_compute[259907]: 2025-10-01 16:56:12.570 2 DEBUG oslo_concurrency.processutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b408cbe8-e33e-4d19-9bec-ea1664d387d3/disk.config b408cbe8-e33e-4d19-9bec-ea1664d387d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:56:12 np0005464891 nova_compute[259907]: 2025-10-01 16:56:12.752 2 DEBUG oslo_concurrency.processutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b408cbe8-e33e-4d19-9bec-ea1664d387d3/disk.config b408cbe8-e33e-4d19-9bec-ea1664d387d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:56:12 np0005464891 nova_compute[259907]: 2025-10-01 16:56:12.753 2 INFO nova.virt.libvirt.driver [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Deleting local config drive /var/lib/nova/instances/b408cbe8-e33e-4d19-9bec-ea1664d387d3/disk.config because it was imported into RBD.#033[00m
Oct  1 12:56:12 np0005464891 kernel: tap4c696563-94: entered promiscuous mode
Oct  1 12:56:12 np0005464891 NetworkManager[44940]: <info>  [1759337772.8316] manager: (tap4c696563-94): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Oct  1 12:56:12 np0005464891 nova_compute[259907]: 2025-10-01 16:56:12.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:12 np0005464891 ovn_controller[152409]: 2025-10-01T16:56:12Z|00162|binding|INFO|Claiming lport 4c696563-943f-4bb5-bcc0-ae044321b33b for this chassis.
Oct  1 12:56:12 np0005464891 ovn_controller[152409]: 2025-10-01T16:56:12Z|00163|binding|INFO|4c696563-943f-4bb5-bcc0-ae044321b33b: Claiming fa:16:3e:91:e9:db 10.100.0.13
Oct  1 12:56:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:12.851 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:e9:db 10.100.0.13'], port_security=['fa:16:3e:91:e9:db 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'b408cbe8-e33e-4d19-9bec-ea1664d387d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce1e1062-6685-441b-8278-667224375e38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8318b65fa88942a99937a0d198a04a9c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc91778b-466d-4bf2-b0e0-b4af5293ed3e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4cdcf07-f310-4572-944c-43bd8e74f763, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=4c696563-943f-4bb5-bcc0-ae044321b33b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:56:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:12.854 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 4c696563-943f-4bb5-bcc0-ae044321b33b in datapath ce1e1062-6685-441b-8278-667224375e38 bound to our chassis#033[00m
Oct  1 12:56:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:12.857 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce1e1062-6685-441b-8278-667224375e38#033[00m
Oct  1 12:56:12 np0005464891 systemd-udevd[293428]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:56:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:12.871 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[c95525f1-8d9c-4efa-ae15-4a1b84d9c91a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:12.872 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce1e1062-61 in ovnmeta-ce1e1062-6685-441b-8278-667224375e38 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 12:56:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:12.875 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce1e1062-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 12:56:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:12.875 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[b138625f-e28d-4d9f-a5bb-774b1cddfade]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:12 np0005464891 systemd-machined[214891]: New machine qemu-17-instance-00000011.
Oct  1 12:56:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:12.876 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[bf8cc0b1-43bf-4c65-8933-89e110fb585d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:12 np0005464891 NetworkManager[44940]: <info>  [1759337772.8888] device (tap4c696563-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:56:12 np0005464891 NetworkManager[44940]: <info>  [1759337772.8899] device (tap4c696563-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 12:56:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:12.890 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[44552f16-f34a-4726-b5bf-af12396dfc21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:12 np0005464891 systemd[1]: Started Virtual Machine qemu-17-instance-00000011.
Oct  1 12:56:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:12.916 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[605a4b45-df4b-4df7-af85-b17051bd7936]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:12 np0005464891 nova_compute[259907]: 2025-10-01 16:56:12.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:12 np0005464891 ovn_controller[152409]: 2025-10-01T16:56:12Z|00164|binding|INFO|Setting lport 4c696563-943f-4bb5-bcc0-ae044321b33b ovn-installed in OVS
Oct  1 12:56:12 np0005464891 ovn_controller[152409]: 2025-10-01T16:56:12Z|00165|binding|INFO|Setting lport 4c696563-943f-4bb5-bcc0-ae044321b33b up in Southbound
Oct  1 12:56:12 np0005464891 nova_compute[259907]: 2025-10-01 16:56:12.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:12.964 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[82b891b6-0c61-45e2-8bea-ac8437cc1d51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:12 np0005464891 NetworkManager[44940]: <info>  [1759337772.9720] manager: (tapce1e1062-60): new Veth device (/org/freedesktop/NetworkManager/Devices/97)
Oct  1 12:56:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:12.970 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[0dd1d17c-5f1c-42bb-a8b3-c99dda155d62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:12 np0005464891 systemd-udevd[293433]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:13.012 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[39da06ff-c3ca-4c60-b8e5-861156e3d4b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:13.015 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[9ada062c-3747-4704-bad9-bfcc837951b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:13 np0005464891 NetworkManager[44940]: <info>  [1759337773.0399] device (tapce1e1062-60): carrier: link connected
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:13.045 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[1562187e-e5d9-4d5c-b5c6-695e5593c6d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:13.064 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[a0dd92dd-1df6-4552-a0ae-99ff4d22bc2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce1e1062-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:87:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 472161, 'reachable_time': 16691, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293462, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:13.080 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[0cc526b9-93d8-4a82-8c53-41b43f27fd64]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec5:872c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 472161, 'tstamp': 472161}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293463, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:13.101 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[94de833e-8ea2-4384-ab63-fd270893210a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce1e1062-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:87:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 472161, 'reachable_time': 16691, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293464, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1666: 321 pgs: 321 active+clean; 134 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 3.5 KiB/s wr, 63 op/s
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:13.132 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[5b34e01e-2340-449e-aea7-6421922ff061]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:13.200 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[e8064230-5bea-46e2-b17a-2467dcb8b5cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:13.202 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce1e1062-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:13.203 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:13.203 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce1e1062-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:56:13 np0005464891 nova_compute[259907]: 2025-10-01 16:56:13.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:13 np0005464891 NetworkManager[44940]: <info>  [1759337773.2068] manager: (tapce1e1062-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Oct  1 12:56:13 np0005464891 kernel: tapce1e1062-60: entered promiscuous mode
Oct  1 12:56:13 np0005464891 nova_compute[259907]: 2025-10-01 16:56:13.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:13.209 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce1e1062-60, col_values=(('external_ids', {'iface-id': 'd971881d-8d8b-44dc-b0b0-4cd0065c0105'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:56:13 np0005464891 nova_compute[259907]: 2025-10-01 16:56:13.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:13 np0005464891 ovn_controller[152409]: 2025-10-01T16:56:13Z|00166|binding|INFO|Releasing lport d971881d-8d8b-44dc-b0b0-4cd0065c0105 from this chassis (sb_readonly=0)
Oct  1 12:56:13 np0005464891 nova_compute[259907]: 2025-10-01 16:56:13.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:13.212 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce1e1062-6685-441b-8278-667224375e38.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce1e1062-6685-441b-8278-667224375e38.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:13.212 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[6e05a7c7-bbb4-4b98-ace3-5875aec32da1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:13.213 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-ce1e1062-6685-441b-8278-667224375e38
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/ce1e1062-6685-441b-8278-667224375e38.pid.haproxy
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID ce1e1062-6685-441b-8278-667224375e38
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 12:56:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:13.214 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'env', 'PROCESS_TAG=haproxy-ce1e1062-6685-441b-8278-667224375e38', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce1e1062-6685-441b-8278-667224375e38.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 12:56:13 np0005464891 nova_compute[259907]: 2025-10-01 16:56:13.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:13 np0005464891 podman[293540]: 2025-10-01 16:56:13.634363117 +0000 UTC m=+0.026417965 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 12:56:13 np0005464891 podman[293540]: 2025-10-01 16:56:13.751542493 +0000 UTC m=+0.143597341 container create 4d31d627d90860bb10488fe1443cc3e6d30ee9f73bbc283af6df17e798db6698 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:56:13 np0005464891 systemd[1]: Started libpod-conmon-4d31d627d90860bb10488fe1443cc3e6d30ee9f73bbc283af6df17e798db6698.scope.
Oct  1 12:56:13 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:56:13 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d01e1c5fda94702af6b0ec28d1e7bfb103e793882480c182a9a50abc3f7f684/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 12:56:13 np0005464891 podman[293540]: 2025-10-01 16:56:13.882612064 +0000 UTC m=+0.274666922 container init 4d31d627d90860bb10488fe1443cc3e6d30ee9f73bbc283af6df17e798db6698 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 12:56:13 np0005464891 podman[293540]: 2025-10-01 16:56:13.889068289 +0000 UTC m=+0.281123117 container start 4d31d627d90860bb10488fe1443cc3e6d30ee9f73bbc283af6df17e798db6698 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:56:13 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[293555]: [NOTICE]   (293559) : New worker (293561) forked
Oct  1 12:56:13 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[293555]: [NOTICE]   (293559) : Loading success.
Oct  1 12:56:13 np0005464891 nova_compute[259907]: 2025-10-01 16:56:13.975 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337773.9750085, b408cbe8-e33e-4d19-9bec-ea1664d387d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:56:13 np0005464891 nova_compute[259907]: 2025-10-01 16:56:13.976 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] VM Started (Lifecycle Event)#033[00m
Oct  1 12:56:14 np0005464891 nova_compute[259907]: 2025-10-01 16:56:14.018 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:56:14 np0005464891 nova_compute[259907]: 2025-10-01 16:56:14.023 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337773.975421, b408cbe8-e33e-4d19-9bec-ea1664d387d3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:56:14 np0005464891 nova_compute[259907]: 2025-10-01 16:56:14.023 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] VM Paused (Lifecycle Event)#033[00m
Oct  1 12:56:14 np0005464891 nova_compute[259907]: 2025-10-01 16:56:14.195 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:56:14 np0005464891 nova_compute[259907]: 2025-10-01 16:56:14.200 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:56:14 np0005464891 nova_compute[259907]: 2025-10-01 16:56:14.301 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:56:14 np0005464891 podman[293571]: 2025-10-01 16:56:14.992608977 +0000 UTC m=+0.087882505 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:56:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1667: 321 pgs: 321 active+clean; 134 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 20 KiB/s wr, 70 op/s
Oct  1 12:56:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:56:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e415 do_prune osdmap full prune enabled
Oct  1 12:56:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e416 e416: 3 total, 3 up, 3 in
Oct  1 12:56:16 np0005464891 nova_compute[259907]: 2025-10-01 16:56:16.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:16 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e416: 3 total, 3 up, 3 in
Oct  1 12:56:16 np0005464891 nova_compute[259907]: 2025-10-01 16:56:16.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1669: 321 pgs: 321 active+clean; 134 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 18 KiB/s wr, 64 op/s
Oct  1 12:56:17 np0005464891 nova_compute[259907]: 2025-10-01 16:56:17.842 2 DEBUG nova.compute.manager [req-3cc7c5b7-af8f-4b7e-9c65-58edb98516db req-1febfd3c-0caa-4f81-9f1a-d29333239517 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Received event network-vif-plugged-4c696563-943f-4bb5-bcc0-ae044321b33b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:56:17 np0005464891 nova_compute[259907]: 2025-10-01 16:56:17.842 2 DEBUG oslo_concurrency.lockutils [req-3cc7c5b7-af8f-4b7e-9c65-58edb98516db req-1febfd3c-0caa-4f81-9f1a-d29333239517 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:56:17 np0005464891 nova_compute[259907]: 2025-10-01 16:56:17.843 2 DEBUG oslo_concurrency.lockutils [req-3cc7c5b7-af8f-4b7e-9c65-58edb98516db req-1febfd3c-0caa-4f81-9f1a-d29333239517 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:56:17 np0005464891 nova_compute[259907]: 2025-10-01 16:56:17.843 2 DEBUG oslo_concurrency.lockutils [req-3cc7c5b7-af8f-4b7e-9c65-58edb98516db req-1febfd3c-0caa-4f81-9f1a-d29333239517 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:56:17 np0005464891 nova_compute[259907]: 2025-10-01 16:56:17.844 2 DEBUG nova.compute.manager [req-3cc7c5b7-af8f-4b7e-9c65-58edb98516db req-1febfd3c-0caa-4f81-9f1a-d29333239517 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Processing event network-vif-plugged-4c696563-943f-4bb5-bcc0-ae044321b33b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 12:56:17 np0005464891 nova_compute[259907]: 2025-10-01 16:56:17.845 2 DEBUG nova.compute.manager [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 12:56:17 np0005464891 nova_compute[259907]: 2025-10-01 16:56:17.849 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337777.8491027, b408cbe8-e33e-4d19-9bec-ea1664d387d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:56:17 np0005464891 nova_compute[259907]: 2025-10-01 16:56:17.850 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] VM Resumed (Lifecycle Event)#033[00m
Oct  1 12:56:17 np0005464891 nova_compute[259907]: 2025-10-01 16:56:17.853 2 DEBUG nova.virt.libvirt.driver [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 12:56:17 np0005464891 nova_compute[259907]: 2025-10-01 16:56:17.858 2 INFO nova.virt.libvirt.driver [-] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Instance spawned successfully.#033[00m
Oct  1 12:56:17 np0005464891 nova_compute[259907]: 2025-10-01 16:56:17.859 2 DEBUG nova.virt.libvirt.driver [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  1 12:56:17 np0005464891 nova_compute[259907]: 2025-10-01 16:56:17.875 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:56:17 np0005464891 nova_compute[259907]: 2025-10-01 16:56:17.879 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:56:17 np0005464891 nova_compute[259907]: 2025-10-01 16:56:17.905 2 DEBUG nova.virt.libvirt.driver [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:56:17 np0005464891 nova_compute[259907]: 2025-10-01 16:56:17.906 2 DEBUG nova.virt.libvirt.driver [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:56:17 np0005464891 nova_compute[259907]: 2025-10-01 16:56:17.906 2 DEBUG nova.virt.libvirt.driver [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:56:17 np0005464891 nova_compute[259907]: 2025-10-01 16:56:17.906 2 DEBUG nova.virt.libvirt.driver [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:56:17 np0005464891 nova_compute[259907]: 2025-10-01 16:56:17.907 2 DEBUG nova.virt.libvirt.driver [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:56:17 np0005464891 nova_compute[259907]: 2025-10-01 16:56:17.907 2 DEBUG nova.virt.libvirt.driver [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:56:18 np0005464891 nova_compute[259907]: 2025-10-01 16:56:18.026 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:56:18 np0005464891 nova_compute[259907]: 2025-10-01 16:56:18.433 2 INFO nova.compute.manager [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Took 7.61 seconds to spawn the instance on the hypervisor.#033[00m
Oct  1 12:56:18 np0005464891 nova_compute[259907]: 2025-10-01 16:56:18.434 2 DEBUG nova.compute.manager [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:56:18 np0005464891 nova_compute[259907]: 2025-10-01 16:56:18.687 2 INFO nova.compute.manager [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Took 14.46 seconds to build instance.#033[00m
Oct  1 12:56:18 np0005464891 nova_compute[259907]: 2025-10-01 16:56:18.690 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:56:18 np0005464891 nova_compute[259907]: 2025-10-01 16:56:18.762 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Triggering sync for uuid b408cbe8-e33e-4d19-9bec-ea1664d387d3 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  1 12:56:18 np0005464891 nova_compute[259907]: 2025-10-01 16:56:18.763 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:56:18 np0005464891 nova_compute[259907]: 2025-10-01 16:56:18.891 2 DEBUG oslo_concurrency.lockutils [None req-6040fe4b-fc76-4a5c-8bee-e3ccccd7c72a 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:56:18 np0005464891 nova_compute[259907]: 2025-10-01 16:56:18.891 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:56:18 np0005464891 podman[293591]: 2025-10-01 16:56:18.982535289 +0000 UTC m=+0.085282486 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 12:56:19 np0005464891 nova_compute[259907]: 2025-10-01 16:56:19.101 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.209s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:56:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1670: 321 pgs: 321 active+clean; 134 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 17 KiB/s wr, 61 op/s
Oct  1 12:56:19 np0005464891 nova_compute[259907]: 2025-10-01 16:56:19.936 2 DEBUG nova.compute.manager [req-283db8ee-fe74-4002-be15-653a0bf3a256 req-903c26bc-905c-4938-87d6-9c4e3bb5fdfe af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Received event network-vif-plugged-4c696563-943f-4bb5-bcc0-ae044321b33b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:56:19 np0005464891 nova_compute[259907]: 2025-10-01 16:56:19.936 2 DEBUG oslo_concurrency.lockutils [req-283db8ee-fe74-4002-be15-653a0bf3a256 req-903c26bc-905c-4938-87d6-9c4e3bb5fdfe af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:56:19 np0005464891 nova_compute[259907]: 2025-10-01 16:56:19.936 2 DEBUG oslo_concurrency.lockutils [req-283db8ee-fe74-4002-be15-653a0bf3a256 req-903c26bc-905c-4938-87d6-9c4e3bb5fdfe af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:56:19 np0005464891 nova_compute[259907]: 2025-10-01 16:56:19.937 2 DEBUG oslo_concurrency.lockutils [req-283db8ee-fe74-4002-be15-653a0bf3a256 req-903c26bc-905c-4938-87d6-9c4e3bb5fdfe af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:56:19 np0005464891 nova_compute[259907]: 2025-10-01 16:56:19.937 2 DEBUG nova.compute.manager [req-283db8ee-fe74-4002-be15-653a0bf3a256 req-903c26bc-905c-4938-87d6-9c4e3bb5fdfe af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] No waiting events found dispatching network-vif-plugged-4c696563-943f-4bb5-bcc0-ae044321b33b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:56:19 np0005464891 nova_compute[259907]: 2025-10-01 16:56:19.937 2 WARNING nova.compute.manager [req-283db8ee-fe74-4002-be15-653a0bf3a256 req-903c26bc-905c-4938-87d6-9c4e3bb5fdfe af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Received unexpected event network-vif-plugged-4c696563-943f-4bb5-bcc0-ae044321b33b for instance with vm_state active and task_state None.#033[00m
Oct  1 12:56:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e416 do_prune osdmap full prune enabled
Oct  1 12:56:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e417 e417: 3 total, 3 up, 3 in
Oct  1 12:56:20 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e417: 3 total, 3 up, 3 in
Oct  1 12:56:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1672: 321 pgs: 321 active+clean; 134 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 19 KiB/s wr, 65 op/s
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.304 2 DEBUG oslo_concurrency.lockutils [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.305 2 DEBUG oslo_concurrency.lockutils [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.306 2 DEBUG oslo_concurrency.lockutils [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.306 2 DEBUG oslo_concurrency.lockutils [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.306 2 DEBUG oslo_concurrency.lockutils [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.308 2 INFO nova.compute.manager [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Terminating instance#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.309 2 DEBUG nova.compute.manager [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 12:56:21 np0005464891 kernel: tap4c696563-94 (unregistering): left promiscuous mode
Oct  1 12:56:21 np0005464891 NetworkManager[44940]: <info>  [1759337781.3639] device (tap4c696563-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 12:56:21 np0005464891 ovn_controller[152409]: 2025-10-01T16:56:21Z|00167|binding|INFO|Releasing lport 4c696563-943f-4bb5-bcc0-ae044321b33b from this chassis (sb_readonly=0)
Oct  1 12:56:21 np0005464891 ovn_controller[152409]: 2025-10-01T16:56:21Z|00168|binding|INFO|Setting lport 4c696563-943f-4bb5-bcc0-ae044321b33b down in Southbound
Oct  1 12:56:21 np0005464891 ovn_controller[152409]: 2025-10-01T16:56:21Z|00169|binding|INFO|Removing iface tap4c696563-94 ovn-installed in OVS
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:21 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:21.386 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:e9:db 10.100.0.13'], port_security=['fa:16:3e:91:e9:db 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'b408cbe8-e33e-4d19-9bec-ea1664d387d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce1e1062-6685-441b-8278-667224375e38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8318b65fa88942a99937a0d198a04a9c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cc91778b-466d-4bf2-b0e0-b4af5293ed3e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4cdcf07-f310-4572-944c-43bd8e74f763, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=4c696563-943f-4bb5-bcc0-ae044321b33b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:56:21 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:21.388 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 4c696563-943f-4bb5-bcc0-ae044321b33b in datapath ce1e1062-6685-441b-8278-667224375e38 unbound from our chassis#033[00m
Oct  1 12:56:21 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:21.390 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce1e1062-6685-441b-8278-667224375e38, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 12:56:21 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:21.392 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[ef3cfd75-9737-4ca0-9a33-c59b2aed39e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:21 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:21.392 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce1e1062-6685-441b-8278-667224375e38 namespace which is not needed anymore#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:56:21 np0005464891 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Deactivated successfully.
Oct  1 12:56:21 np0005464891 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Consumed 4.656s CPU time.
Oct  1 12:56:21 np0005464891 systemd-machined[214891]: Machine qemu-17-instance-00000011 terminated.
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:21 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[293555]: [NOTICE]   (293559) : haproxy version is 2.8.14-c23fe91
Oct  1 12:56:21 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[293555]: [NOTICE]   (293559) : path to executable is /usr/sbin/haproxy
Oct  1 12:56:21 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[293555]: [WARNING]  (293559) : Exiting Master process...
Oct  1 12:56:21 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[293555]: [WARNING]  (293559) : Exiting Master process...
Oct  1 12:56:21 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[293555]: [ALERT]    (293559) : Current worker (293561) exited with code 143 (Terminated)
Oct  1 12:56:21 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[293555]: [WARNING]  (293559) : All workers exited. Exiting... (0)
Oct  1 12:56:21 np0005464891 systemd[1]: libpod-4d31d627d90860bb10488fe1443cc3e6d30ee9f73bbc283af6df17e798db6698.scope: Deactivated successfully.
Oct  1 12:56:21 np0005464891 podman[293634]: 2025-10-01 16:56:21.538276357 +0000 UTC m=+0.046303193 container died 4d31d627d90860bb10488fe1443cc3e6d30ee9f73bbc283af6df17e798db6698 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.544 2 INFO nova.virt.libvirt.driver [-] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Instance destroyed successfully.#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.545 2 DEBUG nova.objects.instance [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lazy-loading 'resources' on Instance uuid b408cbe8-e33e-4d19-9bec-ea1664d387d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.570 2 DEBUG nova.virt.libvirt.vif [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T16:56:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-401507390',display_name='tempest-TestVolumeBootPattern-server-401507390',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-401507390',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-01T16:56:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8318b65fa88942a99937a0d198a04a9c',ramdisk_id='',reservation_id='r-2bd3va35',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-582136054',owner_user_name='tempest-TestVolumeBootPattern-582136054-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T16:56:18Z,user_data=None,user_id='1280014cdfb74333ae8d71c78116e646',uuid=b408cbe8-e33e-4d19-9bec-ea1664d387d3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4c696563-943f-4bb5-bcc0-ae044321b33b", "address": "fa:16:3e:91:e9:db", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c696563-94", "ovs_interfaceid": "4c696563-943f-4bb5-bcc0-ae044321b33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.571 2 DEBUG nova.network.os_vif_util [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converting VIF {"id": "4c696563-943f-4bb5-bcc0-ae044321b33b", "address": "fa:16:3e:91:e9:db", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c696563-94", "ovs_interfaceid": "4c696563-943f-4bb5-bcc0-ae044321b33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.571 2 DEBUG nova.network.os_vif_util [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=4c696563-943f-4bb5-bcc0-ae044321b33b,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c696563-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.572 2 DEBUG os_vif [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=4c696563-943f-4bb5-bcc0-ae044321b33b,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c696563-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.573 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c696563-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:21 np0005464891 systemd[1]: var-lib-containers-storage-overlay-9d01e1c5fda94702af6b0ec28d1e7bfb103e793882480c182a9a50abc3f7f684-merged.mount: Deactivated successfully.
Oct  1 12:56:21 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4d31d627d90860bb10488fe1443cc3e6d30ee9f73bbc283af6df17e798db6698-userdata-shm.mount: Deactivated successfully.
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.583 2 INFO os_vif [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:e9:db,bridge_name='br-int',has_traffic_filtering=True,id=4c696563-943f-4bb5-bcc0-ae044321b33b,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c696563-94')#033[00m
Oct  1 12:56:21 np0005464891 podman[293634]: 2025-10-01 16:56:21.589221373 +0000 UTC m=+0.097248149 container cleanup 4d31d627d90860bb10488fe1443cc3e6d30ee9f73bbc283af6df17e798db6698 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:56:21 np0005464891 systemd[1]: libpod-conmon-4d31d627d90860bb10488fe1443cc3e6d30ee9f73bbc283af6df17e798db6698.scope: Deactivated successfully.
Oct  1 12:56:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e417 do_prune osdmap full prune enabled
Oct  1 12:56:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e418 e418: 3 total, 3 up, 3 in
Oct  1 12:56:21 np0005464891 podman[293690]: 2025-10-01 16:56:21.652065862 +0000 UTC m=+0.039846649 container remove 4d31d627d90860bb10488fe1443cc3e6d30ee9f73bbc283af6df17e798db6698 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:56:21 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e418: 3 total, 3 up, 3 in
Oct  1 12:56:21 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:21.669 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[c092f779-e883-4284-923b-7b049e2b0f49]: (4, ('Wed Oct  1 04:56:21 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38 (4d31d627d90860bb10488fe1443cc3e6d30ee9f73bbc283af6df17e798db6698)\n4d31d627d90860bb10488fe1443cc3e6d30ee9f73bbc283af6df17e798db6698\nWed Oct  1 04:56:21 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38 (4d31d627d90860bb10488fe1443cc3e6d30ee9f73bbc283af6df17e798db6698)\n4d31d627d90860bb10488fe1443cc3e6d30ee9f73bbc283af6df17e798db6698\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:21 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:21.672 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[86dfe6d5-674b-41f1-87b7-e2e7e57fda2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:21 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:21.673 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce1e1062-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:21 np0005464891 kernel: tapce1e1062-60: left promiscuous mode
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:21 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:21.697 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[16844252-5503-41b7-b949-c670ebdfd5fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:21 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:21.722 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[ee4e2db6-2f8c-481e-b644-c1f298b31f90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:21 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:21.724 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[7a9fc83c-8700-479a-8415-eb9e6e0b001d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:21 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:21.738 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[b78ff577-b2c1-4082-bd72-760282fc03d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 472152, 'reachable_time': 40107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293708, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:21 np0005464891 systemd[1]: run-netns-ovnmeta\x2dce1e1062\x2d6685\x2d441b\x2d8278\x2d667224375e38.mount: Deactivated successfully.
Oct  1 12:56:21 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:21.743 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce1e1062-6685-441b-8278-667224375e38 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 12:56:21 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:21.744 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[48618eaf-9487-4802-b71d-90a884711c02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.865 2 INFO nova.virt.libvirt.driver [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Deleting instance files /var/lib/nova/instances/b408cbe8-e33e-4d19-9bec-ea1664d387d3_del#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.865 2 INFO nova.virt.libvirt.driver [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Deletion of /var/lib/nova/instances/b408cbe8-e33e-4d19-9bec-ea1664d387d3_del complete#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.944 2 INFO nova.compute.manager [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.945 2 DEBUG oslo.service.loopingcall [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.946 2 DEBUG nova.compute.manager [-] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 12:56:21 np0005464891 nova_compute[259907]: 2025-10-01 16:56:21.947 2 DEBUG nova.network.neutron [-] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 12:56:22 np0005464891 nova_compute[259907]: 2025-10-01 16:56:22.042 2 DEBUG nova.compute.manager [req-3f76680b-6ff8-4b72-92c7-3f69ac39f594 req-c3285d21-33b8-4023-a274-c20e598fed9b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Received event network-vif-unplugged-4c696563-943f-4bb5-bcc0-ae044321b33b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:56:22 np0005464891 nova_compute[259907]: 2025-10-01 16:56:22.043 2 DEBUG oslo_concurrency.lockutils [req-3f76680b-6ff8-4b72-92c7-3f69ac39f594 req-c3285d21-33b8-4023-a274-c20e598fed9b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:56:22 np0005464891 nova_compute[259907]: 2025-10-01 16:56:22.043 2 DEBUG oslo_concurrency.lockutils [req-3f76680b-6ff8-4b72-92c7-3f69ac39f594 req-c3285d21-33b8-4023-a274-c20e598fed9b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:56:22 np0005464891 nova_compute[259907]: 2025-10-01 16:56:22.044 2 DEBUG oslo_concurrency.lockutils [req-3f76680b-6ff8-4b72-92c7-3f69ac39f594 req-c3285d21-33b8-4023-a274-c20e598fed9b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:56:22 np0005464891 nova_compute[259907]: 2025-10-01 16:56:22.045 2 DEBUG nova.compute.manager [req-3f76680b-6ff8-4b72-92c7-3f69ac39f594 req-c3285d21-33b8-4023-a274-c20e598fed9b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] No waiting events found dispatching network-vif-unplugged-4c696563-943f-4bb5-bcc0-ae044321b33b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:56:22 np0005464891 nova_compute[259907]: 2025-10-01 16:56:22.045 2 DEBUG nova.compute.manager [req-3f76680b-6ff8-4b72-92c7-3f69ac39f594 req-c3285d21-33b8-4023-a274-c20e598fed9b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Received event network-vif-unplugged-4c696563-943f-4bb5-bcc0-ae044321b33b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 12:56:22 np0005464891 nova_compute[259907]: 2025-10-01 16:56:22.046 2 DEBUG nova.compute.manager [req-3f76680b-6ff8-4b72-92c7-3f69ac39f594 req-c3285d21-33b8-4023-a274-c20e598fed9b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Received event network-vif-plugged-4c696563-943f-4bb5-bcc0-ae044321b33b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:56:22 np0005464891 nova_compute[259907]: 2025-10-01 16:56:22.046 2 DEBUG oslo_concurrency.lockutils [req-3f76680b-6ff8-4b72-92c7-3f69ac39f594 req-c3285d21-33b8-4023-a274-c20e598fed9b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:56:22 np0005464891 nova_compute[259907]: 2025-10-01 16:56:22.047 2 DEBUG oslo_concurrency.lockutils [req-3f76680b-6ff8-4b72-92c7-3f69ac39f594 req-c3285d21-33b8-4023-a274-c20e598fed9b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:56:22 np0005464891 nova_compute[259907]: 2025-10-01 16:56:22.047 2 DEBUG oslo_concurrency.lockutils [req-3f76680b-6ff8-4b72-92c7-3f69ac39f594 req-c3285d21-33b8-4023-a274-c20e598fed9b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:56:22 np0005464891 nova_compute[259907]: 2025-10-01 16:56:22.048 2 DEBUG nova.compute.manager [req-3f76680b-6ff8-4b72-92c7-3f69ac39f594 req-c3285d21-33b8-4023-a274-c20e598fed9b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] No waiting events found dispatching network-vif-plugged-4c696563-943f-4bb5-bcc0-ae044321b33b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:56:22 np0005464891 nova_compute[259907]: 2025-10-01 16:56:22.048 2 WARNING nova.compute.manager [req-3f76680b-6ff8-4b72-92c7-3f69ac39f594 req-c3285d21-33b8-4023-a274-c20e598fed9b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Received unexpected event network-vif-plugged-4c696563-943f-4bb5-bcc0-ae044321b33b for instance with vm_state active and task_state deleting.#033[00m
Oct  1 12:56:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:56:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:56:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:56:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:56:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Oct  1 12:56:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:56:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0006933930780751449 of space, bias 1.0, pg target 0.20801792342254347 quantized to 32 (current 32)
Oct  1 12:56:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:56:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 12:56:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:56:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct  1 12:56:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:56:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:56:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:56:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:56:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:56:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:56:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:56:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:56:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:56:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:56:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:56:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:56:22 np0005464891 nova_compute[259907]: 2025-10-01 16:56:22.475 2 DEBUG nova.network.neutron [-] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:56:22 np0005464891 nova_compute[259907]: 2025-10-01 16:56:22.497 2 INFO nova.compute.manager [-] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Took 0.55 seconds to deallocate network for instance.#033[00m
Oct  1 12:56:22 np0005464891 nova_compute[259907]: 2025-10-01 16:56:22.705 2 INFO nova.compute.manager [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Took 0.21 seconds to detach 1 volumes for instance.#033[00m
Oct  1 12:56:22 np0005464891 nova_compute[259907]: 2025-10-01 16:56:22.707 2 DEBUG nova.compute.manager [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Deleting volume: d77cea6c-1f8e-4472-95f1-26f306e1d9c6 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Oct  1 12:56:22 np0005464891 nova_compute[259907]: 2025-10-01 16:56:22.977 2 DEBUG oslo_concurrency.lockutils [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:56:22 np0005464891 nova_compute[259907]: 2025-10-01 16:56:22.977 2 DEBUG oslo_concurrency.lockutils [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:56:23 np0005464891 nova_compute[259907]: 2025-10-01 16:56:23.042 2 DEBUG oslo_concurrency.processutils [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:56:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1674: 321 pgs: 321 active+clean; 134 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.0 KiB/s wr, 177 op/s
Oct  1 12:56:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:56:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3115478276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:56:23 np0005464891 nova_compute[259907]: 2025-10-01 16:56:23.597 2 DEBUG oslo_concurrency.processutils [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:56:23 np0005464891 nova_compute[259907]: 2025-10-01 16:56:23.605 2 DEBUG nova.compute.provider_tree [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:56:23 np0005464891 nova_compute[259907]: 2025-10-01 16:56:23.650 2 DEBUG nova.scheduler.client.report [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:56:23 np0005464891 nova_compute[259907]: 2025-10-01 16:56:23.716 2 DEBUG oslo_concurrency.lockutils [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:56:23 np0005464891 nova_compute[259907]: 2025-10-01 16:56:23.825 2 INFO nova.scheduler.client.report [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Deleted allocations for instance b408cbe8-e33e-4d19-9bec-ea1664d387d3#033[00m
Oct  1 12:56:23 np0005464891 nova_compute[259907]: 2025-10-01 16:56:23.984 2 DEBUG oslo_concurrency.lockutils [None req-59b7bf9b-fda9-4484-a5c6-d437833f7cdd 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "b408cbe8-e33e-4d19-9bec-ea1664d387d3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:56:24 np0005464891 nova_compute[259907]: 2025-10-01 16:56:24.161 2 DEBUG nova.compute.manager [req-bb8321a3-959a-462b-a6d1-c17f4fe87002 req-214b818b-b7eb-4de8-86fd-a2b993c4f64e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Received event network-vif-deleted-4c696563-943f-4bb5-bcc0-ae044321b33b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:56:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:56:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3037925655' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:56:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:56:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3037925655' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:56:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1675: 321 pgs: 321 active+clean; 134 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.6 KiB/s wr, 178 op/s
Oct  1 12:56:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:56:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2445076563' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:56:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:56:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2445076563' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:56:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:56:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e418 do_prune osdmap full prune enabled
Oct  1 12:56:26 np0005464891 nova_compute[259907]: 2025-10-01 16:56:26.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:26 np0005464891 nova_compute[259907]: 2025-10-01 16:56:26.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1676: 321 pgs: 321 active+clean; 134 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.6 KiB/s wr, 174 op/s
Oct  1 12:56:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e419 e419: 3 total, 3 up, 3 in
Oct  1 12:56:27 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e419: 3 total, 3 up, 3 in
Oct  1 12:56:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1678: 321 pgs: 321 active+clean; 134 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.6 KiB/s wr, 142 op/s
Oct  1 12:56:29 np0005464891 nova_compute[259907]: 2025-10-01 16:56:29.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:56:29 np0005464891 nova_compute[259907]: 2025-10-01 16:56:29.804 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 12:56:30 np0005464891 podman[293806]: 2025-10-01 16:56:30.08559425 +0000 UTC m=+0.091032010 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct  1 12:56:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:56:30 np0005464891 nova_compute[259907]: 2025-10-01 16:56:30.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:56:30 np0005464891 nova_compute[259907]: 2025-10-01 16:56:30.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:56:30 np0005464891 nova_compute[259907]: 2025-10-01 16:56:30.936 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:56:30 np0005464891 nova_compute[259907]: 2025-10-01 16:56:30.937 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:56:30 np0005464891 nova_compute[259907]: 2025-10-01 16:56:30.937 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:56:30 np0005464891 nova_compute[259907]: 2025-10-01 16:56:30.938 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 12:56:30 np0005464891 nova_compute[259907]: 2025-10-01 16:56:30.938 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:56:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1679: 321 pgs: 321 active+clean; 134 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 KiB/s wr, 129 op/s
Oct  1 12:56:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e419 do_prune osdmap full prune enabled
Oct  1 12:56:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:56:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:56:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:56:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3897526644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:56:31 np0005464891 nova_compute[259907]: 2025-10-01 16:56:31.396 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:56:31 np0005464891 nova_compute[259907]: 2025-10-01 16:56:31.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e420 e420: 3 total, 3 up, 3 in
Oct  1 12:56:31 np0005464891 nova_compute[259907]: 2025-10-01 16:56:31.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:31 np0005464891 nova_compute[259907]: 2025-10-01 16:56:31.582 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:56:31 np0005464891 nova_compute[259907]: 2025-10-01 16:56:31.583 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4461MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 12:56:31 np0005464891 nova_compute[259907]: 2025-10-01 16:56:31.584 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:56:31 np0005464891 nova_compute[259907]: 2025-10-01 16:56:31.584 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:56:31 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e420: 3 total, 3 up, 3 in
Oct  1 12:56:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:56:32 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:56:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1681: 321 pgs: 321 active+clean; 134 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1023 B/s wr, 27 op/s
Oct  1 12:56:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:56:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:56:33 np0005464891 nova_compute[259907]: 2025-10-01 16:56:33.286 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  1 12:56:33 np0005464891 nova_compute[259907]: 2025-10-01 16:56:33.287 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  1 12:56:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:56:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:56:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:56:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:56:33 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 6e99aee3-5811-45b9-a735-587464f0b799 does not exist
Oct  1 12:56:33 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 642012ff-a774-48cb-b63f-952d0b374d46 does not exist
Oct  1 12:56:33 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 67481746-22b2-4026-8d39-7e5eb57c6b58 does not exist
Oct  1 12:56:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:56:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:56:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:56:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:56:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:56:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:56:33 np0005464891 nova_compute[259907]: 2025-10-01 16:56:33.894 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 12:56:34 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:56:34 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:56:34 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:56:34 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:56:34 np0005464891 podman[294184]: 2025-10-01 16:56:34.27383012 +0000 UTC m=+0.024254237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:56:34 np0005464891 podman[294184]: 2025-10-01 16:56:34.779157073 +0000 UTC m=+0.529581200 container create 096cba9f2d09db934401d617f9f4763c49eb65551fa88c8e0385ae61a0cf5be7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 12:56:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1682: 321 pgs: 321 active+clean; 134 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.9 KiB/s wr, 31 op/s
Oct  1 12:56:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:56:35 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/376395193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:56:35 np0005464891 nova_compute[259907]: 2025-10-01 16:56:35.183 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 12:56:35 np0005464891 nova_compute[259907]: 2025-10-01 16:56:35.195 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  1 12:56:35 np0005464891 systemd[1]: Started libpod-conmon-096cba9f2d09db934401d617f9f4763c49eb65551fa88c8e0385ae61a0cf5be7.scope.
Oct  1 12:56:35 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:56:35 np0005464891 nova_compute[259907]: 2025-10-01 16:56:35.477 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  1 12:56:35 np0005464891 podman[294184]: 2025-10-01 16:56:35.755181107 +0000 UTC m=+1.505605274 container init 096cba9f2d09db934401d617f9f4763c49eb65551fa88c8e0385ae61a0cf5be7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:56:35 np0005464891 podman[294184]: 2025-10-01 16:56:35.765014432 +0000 UTC m=+1.515438549 container start 096cba9f2d09db934401d617f9f4763c49eb65551fa88c8e0385ae61a0cf5be7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_nobel, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 12:56:35 np0005464891 strange_nobel[294202]: 167 167
Oct  1 12:56:35 np0005464891 systemd[1]: libpod-096cba9f2d09db934401d617f9f4763c49eb65551fa88c8e0385ae61a0cf5be7.scope: Deactivated successfully.
Oct  1 12:56:36 np0005464891 podman[294184]: 2025-10-01 16:56:36.074340721 +0000 UTC m=+1.824764858 container attach 096cba9f2d09db934401d617f9f4763c49eb65551fa88c8e0385ae61a0cf5be7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:56:36 np0005464891 podman[294184]: 2025-10-01 16:56:36.075237685 +0000 UTC m=+1.825661792 container died 096cba9f2d09db934401d617f9f4763c49eb65551fa88c8e0385ae61a0cf5be7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:56:36 np0005464891 nova_compute[259907]: 2025-10-01 16:56:36.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:56:36 np0005464891 nova_compute[259907]: 2025-10-01 16:56:36.541 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759337781.5407465, b408cbe8-e33e-4d19-9bec-ea1664d387d3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  1 12:56:36 np0005464891 nova_compute[259907]: 2025-10-01 16:56:36.542 2 INFO nova.compute.manager [-] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] VM Stopped (Lifecycle Event)
Oct  1 12:56:36 np0005464891 nova_compute[259907]: 2025-10-01 16:56:36.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:56:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:56:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1683: 321 pgs: 321 active+clean; 134 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 26 op/s
Oct  1 12:56:37 np0005464891 systemd[1]: var-lib-containers-storage-overlay-d51e046f22354a4ef812d7d684d5c2b245ab9f2b3b7b6d4b74ba4e1989aed38c-merged.mount: Deactivated successfully.
Oct  1 12:56:37 np0005464891 nova_compute[259907]: 2025-10-01 16:56:37.431 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  1 12:56:37 np0005464891 nova_compute[259907]: 2025-10-01 16:56:37.431 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 5.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 12:56:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:56:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3045437239' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:56:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:56:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3045437239' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:56:37 np0005464891 nova_compute[259907]: 2025-10-01 16:56:37.522 2 DEBUG nova.compute.manager [None req-fbd63d84-4364-49f4-9b06-a0c23582ab10 - - - - - -] [instance: b408cbe8-e33e-4d19-9bec-ea1664d387d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 12:56:37 np0005464891 podman[294184]: 2025-10-01 16:56:37.83109107 +0000 UTC m=+3.581515157 container remove 096cba9f2d09db934401d617f9f4763c49eb65551fa88c8e0385ae61a0cf5be7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_nobel, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:56:37 np0005464891 systemd[1]: libpod-conmon-096cba9f2d09db934401d617f9f4763c49eb65551fa88c8e0385ae61a0cf5be7.scope: Deactivated successfully.
Oct  1 12:56:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e420 do_prune osdmap full prune enabled
Oct  1 12:56:38 np0005464891 podman[294228]: 2025-10-01 16:56:38.048758752 +0000 UTC m=+0.022277933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:56:38 np0005464891 podman[294228]: 2025-10-01 16:56:38.568374192 +0000 UTC m=+0.541893353 container create 11aa54ee36fe282481f7c6f28a6fa2923240f21fe8b18c45d1c3996f1de40f8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_khorana, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct  1 12:56:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e421 e421: 3 total, 3 up, 3 in
Oct  1 12:56:38 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e421: 3 total, 3 up, 3 in
Oct  1 12:56:39 np0005464891 systemd[1]: Started libpod-conmon-11aa54ee36fe282481f7c6f28a6fa2923240f21fe8b18c45d1c3996f1de40f8d.scope.
Oct  1 12:56:39 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:56:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a53c5922ccfd9a6e7d4fa75794d76573ca17eae3fe1f968db4a762bddd24403/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:56:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a53c5922ccfd9a6e7d4fa75794d76573ca17eae3fe1f968db4a762bddd24403/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:56:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a53c5922ccfd9a6e7d4fa75794d76573ca17eae3fe1f968db4a762bddd24403/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:56:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a53c5922ccfd9a6e7d4fa75794d76573ca17eae3fe1f968db4a762bddd24403/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:56:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a53c5922ccfd9a6e7d4fa75794d76573ca17eae3fe1f968db4a762bddd24403/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:56:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1685: 321 pgs: 321 active+clean; 134 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 1.1 KiB/s wr, 9 op/s
Oct  1 12:56:39 np0005464891 podman[294228]: 2025-10-01 16:56:39.373889229 +0000 UTC m=+1.347408430 container init 11aa54ee36fe282481f7c6f28a6fa2923240f21fe8b18c45d1c3996f1de40f8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:56:39 np0005464891 podman[294228]: 2025-10-01 16:56:39.386472778 +0000 UTC m=+1.359991939 container start 11aa54ee36fe282481f7c6f28a6fa2923240f21fe8b18c45d1c3996f1de40f8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_khorana, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 12:56:39 np0005464891 nova_compute[259907]: 2025-10-01 16:56:39.427 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 12:56:39 np0005464891 nova_compute[259907]: 2025-10-01 16:56:39.428 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 12:56:39 np0005464891 nova_compute[259907]: 2025-10-01 16:56:39.428 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  1 12:56:39 np0005464891 nova_compute[259907]: 2025-10-01 16:56:39.428 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  1 12:56:39 np0005464891 nova_compute[259907]: 2025-10-01 16:56:39.480 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  1 12:56:39 np0005464891 nova_compute[259907]: 2025-10-01 16:56:39.480 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 12:56:39 np0005464891 nova_compute[259907]: 2025-10-01 16:56:39.480 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 12:56:39 np0005464891 nova_compute[259907]: 2025-10-01 16:56:39.481 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 12:56:39 np0005464891 podman[294228]: 2025-10-01 16:56:39.585446765 +0000 UTC m=+1.558966026 container attach 11aa54ee36fe282481f7c6f28a6fa2923240f21fe8b18c45d1c3996f1de40f8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_khorana, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:56:39 np0005464891 nova_compute[259907]: 2025-10-01 16:56:39.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 12:56:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e421 do_prune osdmap full prune enabled
Oct  1 12:56:40 np0005464891 blissful_khorana[294245]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:56:40 np0005464891 blissful_khorana[294245]: --> relative data size: 1.0
Oct  1 12:56:40 np0005464891 blissful_khorana[294245]: --> All data devices are unavailable
Oct  1 12:56:40 np0005464891 systemd[1]: libpod-11aa54ee36fe282481f7c6f28a6fa2923240f21fe8b18c45d1c3996f1de40f8d.scope: Deactivated successfully.
Oct  1 12:56:40 np0005464891 systemd[1]: libpod-11aa54ee36fe282481f7c6f28a6fa2923240f21fe8b18c45d1c3996f1de40f8d.scope: Consumed 1.115s CPU time.
Oct  1 12:56:40 np0005464891 podman[294228]: 2025-10-01 16:56:40.893160191 +0000 UTC m=+2.866679352 container died 11aa54ee36fe282481f7c6f28a6fa2923240f21fe8b18c45d1c3996f1de40f8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  1 12:56:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e422 e422: 3 total, 3 up, 3 in
Oct  1 12:56:41 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e422: 3 total, 3 up, 3 in
Oct  1 12:56:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1687: 321 pgs: 321 active+clean; 134 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 33 KiB/s wr, 15 op/s
Oct  1 12:56:41 np0005464891 systemd[1]: var-lib-containers-storage-overlay-7a53c5922ccfd9a6e7d4fa75794d76573ca17eae3fe1f968db4a762bddd24403-merged.mount: Deactivated successfully.
Oct  1 12:56:41 np0005464891 nova_compute[259907]: 2025-10-01 16:56:41.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:56:41 np0005464891 nova_compute[259907]: 2025-10-01 16:56:41.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 12:56:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:56:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e422 do_prune osdmap full prune enabled
Oct  1 12:56:41 np0005464891 podman[294228]: 2025-10-01 16:56:41.607985526 +0000 UTC m=+3.581504687 container remove 11aa54ee36fe282481f7c6f28a6fa2923240f21fe8b18c45d1c3996f1de40f8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:56:41 np0005464891 systemd[1]: libpod-conmon-11aa54ee36fe282481f7c6f28a6fa2923240f21fe8b18c45d1c3996f1de40f8d.scope: Deactivated successfully.
Oct  1 12:56:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e423 e423: 3 total, 3 up, 3 in
Oct  1 12:56:41 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e423: 3 total, 3 up, 3 in
Oct  1 12:56:41 np0005464891 podman[294274]: 2025-10-01 16:56:41.737838595 +0000 UTC m=+0.833033511 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  1 12:56:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:56:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:56:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:56:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:56:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:56:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:56:42 np0005464891 podman[294453]: 2025-10-01 16:56:42.33868479 +0000 UTC m=+0.024066402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:56:42 np0005464891 podman[294453]: 2025-10-01 16:56:42.499222347 +0000 UTC m=+0.184603969 container create 6cfab7b21003bc27db59be8b21c5fe2c3070f696aca0e558951f105df846c0f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 12:56:42 np0005464891 systemd[1]: Started libpod-conmon-6cfab7b21003bc27db59be8b21c5fe2c3070f696aca0e558951f105df846c0f0.scope.
Oct  1 12:56:42 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:56:42 np0005464891 podman[294453]: 2025-10-01 16:56:42.757658291 +0000 UTC m=+0.443039963 container init 6cfab7b21003bc27db59be8b21c5fe2c3070f696aca0e558951f105df846c0f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_brown, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:56:42 np0005464891 podman[294453]: 2025-10-01 16:56:42.76686569 +0000 UTC m=+0.452247282 container start 6cfab7b21003bc27db59be8b21c5fe2c3070f696aca0e558951f105df846c0f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct  1 12:56:42 np0005464891 peaceful_brown[294469]: 167 167
Oct  1 12:56:42 np0005464891 systemd[1]: libpod-6cfab7b21003bc27db59be8b21c5fe2c3070f696aca0e558951f105df846c0f0.scope: Deactivated successfully.
Oct  1 12:56:42 np0005464891 podman[294453]: 2025-10-01 16:56:42.838854635 +0000 UTC m=+0.524236237 container attach 6cfab7b21003bc27db59be8b21c5fe2c3070f696aca0e558951f105df846c0f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_brown, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 12:56:42 np0005464891 podman[294453]: 2025-10-01 16:56:42.839253516 +0000 UTC m=+0.524635108 container died 6cfab7b21003bc27db59be8b21c5fe2c3070f696aca0e558951f105df846c0f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 12:56:43 np0005464891 systemd[1]: var-lib-containers-storage-overlay-da217159025e9f77177bf596b3ba7dffb14d0f2da9807fd5326d1acfa029b181-merged.mount: Deactivated successfully.
Oct  1 12:56:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1689: 321 pgs: 321 active+clean; 134 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 45 KiB/s wr, 36 op/s
Oct  1 12:56:43 np0005464891 podman[294453]: 2025-10-01 16:56:43.49042066 +0000 UTC m=+1.175802242 container remove 6cfab7b21003bc27db59be8b21c5fe2c3070f696aca0e558951f105df846c0f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_brown, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 12:56:43 np0005464891 systemd[1]: libpod-conmon-6cfab7b21003bc27db59be8b21c5fe2c3070f696aca0e558951f105df846c0f0.scope: Deactivated successfully.
Oct  1 12:56:43 np0005464891 podman[294493]: 2025-10-01 16:56:43.736596583 +0000 UTC m=+0.044385171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:56:43 np0005464891 podman[294493]: 2025-10-01 16:56:43.857636813 +0000 UTC m=+0.165425321 container create 915415295e1685dd37736d007744e47cd3d37105bd0655e38e955443a00a5c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kilby, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:56:43 np0005464891 systemd[1]: Started libpod-conmon-915415295e1685dd37736d007744e47cd3d37105bd0655e38e955443a00a5c3a.scope.
Oct  1 12:56:43 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:56:43 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb3ee9896ffa23f32b1bdcb5ef63489a41a00a7791de8534fac56f073bcfb9b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:56:43 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb3ee9896ffa23f32b1bdcb5ef63489a41a00a7791de8534fac56f073bcfb9b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:56:43 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb3ee9896ffa23f32b1bdcb5ef63489a41a00a7791de8534fac56f073bcfb9b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:56:43 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb3ee9896ffa23f32b1bdcb5ef63489a41a00a7791de8534fac56f073bcfb9b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:56:44 np0005464891 podman[294493]: 2025-10-01 16:56:44.04367364 +0000 UTC m=+0.351462168 container init 915415295e1685dd37736d007744e47cd3d37105bd0655e38e955443a00a5c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kilby, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:56:44 np0005464891 podman[294493]: 2025-10-01 16:56:44.052653283 +0000 UTC m=+0.360441831 container start 915415295e1685dd37736d007744e47cd3d37105bd0655e38e955443a00a5c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 12:56:44 np0005464891 podman[294493]: 2025-10-01 16:56:44.123503877 +0000 UTC m=+0.431292395 container attach 915415295e1685dd37736d007744e47cd3d37105bd0655e38e955443a00a5c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kilby, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:56:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e423 do_prune osdmap full prune enabled
Oct  1 12:56:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e424 e424: 3 total, 3 up, 3 in
Oct  1 12:56:44 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e424: 3 total, 3 up, 3 in
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]: {
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:    "0": [
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:        {
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "devices": [
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "/dev/loop3"
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            ],
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "lv_name": "ceph_lv0",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "lv_size": "21470642176",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "name": "ceph_lv0",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "tags": {
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.cluster_name": "ceph",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.crush_device_class": "",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.encrypted": "0",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.osd_id": "0",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.type": "block",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.vdo": "0"
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            },
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "type": "block",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "vg_name": "ceph_vg0"
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:        }
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:    ],
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:    "1": [
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:        {
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "devices": [
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "/dev/loop4"
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            ],
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "lv_name": "ceph_lv1",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "lv_size": "21470642176",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "name": "ceph_lv1",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "tags": {
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.cluster_name": "ceph",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.crush_device_class": "",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.encrypted": "0",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.osd_id": "1",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.type": "block",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.vdo": "0"
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            },
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "type": "block",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "vg_name": "ceph_vg1"
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:        }
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:    ],
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:    "2": [
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:        {
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "devices": [
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "/dev/loop5"
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            ],
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "lv_name": "ceph_lv2",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "lv_size": "21470642176",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "name": "ceph_lv2",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "tags": {
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.cluster_name": "ceph",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.crush_device_class": "",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.encrypted": "0",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.osd_id": "2",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.type": "block",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:                "ceph.vdo": "0"
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            },
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "type": "block",
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:            "vg_name": "ceph_vg2"
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:        }
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]:    ]
Oct  1 12:56:44 np0005464891 gallant_kilby[294509]: }
Oct  1 12:56:44 np0005464891 systemd[1]: libpod-915415295e1685dd37736d007744e47cd3d37105bd0655e38e955443a00a5c3a.scope: Deactivated successfully.
Oct  1 12:56:44 np0005464891 podman[294493]: 2025-10-01 16:56:44.968385577 +0000 UTC m=+1.276174095 container died 915415295e1685dd37736d007744e47cd3d37105bd0655e38e955443a00a5c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:56:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1691: 321 pgs: 321 active+clean; 109 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 48 KiB/s wr, 93 op/s
Oct  1 12:56:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:56:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2780310256' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:56:45 np0005464891 systemd[1]: var-lib-containers-storage-overlay-fb3ee9896ffa23f32b1bdcb5ef63489a41a00a7791de8534fac56f073bcfb9b3-merged.mount: Deactivated successfully.
Oct  1 12:56:45 np0005464891 podman[294493]: 2025-10-01 16:56:45.820377397 +0000 UTC m=+2.128165905 container remove 915415295e1685dd37736d007744e47cd3d37105bd0655e38e955443a00a5c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 12:56:45 np0005464891 systemd[1]: libpod-conmon-915415295e1685dd37736d007744e47cd3d37105bd0655e38e955443a00a5c3a.scope: Deactivated successfully.
Oct  1 12:56:45 np0005464891 podman[294530]: 2025-10-01 16:56:45.905616501 +0000 UTC m=+0.719901344 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  1 12:56:46 np0005464891 nova_compute[259907]: 2025-10-01 16:56:46.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:46 np0005464891 nova_compute[259907]: 2025-10-01 16:56:46.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:46 np0005464891 podman[294693]: 2025-10-01 16:56:46.554701669 +0000 UTC m=+0.031818350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:56:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:56:46 np0005464891 podman[294693]: 2025-10-01 16:56:46.773175403 +0000 UTC m=+0.250292004 container create 39bf76de4277981f74c2f82c8c7a31ecde5e6c6d264420da41d8616e24d42223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 12:56:46 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Oct  1 12:56:46 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:46.860613) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 12:56:46 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Oct  1 12:56:46 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337806860708, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2315, "num_deletes": 271, "total_data_size": 3264533, "memory_usage": 3313736, "flush_reason": "Manual Compaction"}
Oct  1 12:56:46 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337807017271, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 2276530, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31684, "largest_seqno": 33998, "table_properties": {"data_size": 2267424, "index_size": 5472, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 22548, "raw_average_key_size": 22, "raw_value_size": 2247982, "raw_average_value_size": 2219, "num_data_blocks": 240, "num_entries": 1013, "num_filter_entries": 1013, "num_deletions": 271, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759337654, "oldest_key_time": 1759337654, "file_creation_time": 1759337806, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 156731 microseconds, and 12649 cpu microseconds.
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:56:47 np0005464891 systemd[1]: Started libpod-conmon-39bf76de4277981f74c2f82c8c7a31ecde5e6c6d264420da41d8616e24d42223.scope.
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:47.017358) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 2276530 bytes OK
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:47.017391) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:47.088964) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:47.089032) EVENT_LOG_v1 {"time_micros": 1759337807089016, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:47.089067) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 3254405, prev total WAL file size 3255706, number of live WAL files 2.
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:47.090819) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303033' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(2223KB)], [65(10MB)]
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337807090901, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 13082262, "oldest_snapshot_seqno": -1}
Oct  1 12:56:47 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:56:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1692: 321 pgs: 321 active+clean; 109 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 5.1 KiB/s wr, 78 op/s
Oct  1 12:56:47 np0005464891 podman[294693]: 2025-10-01 16:56:47.533080496 +0000 UTC m=+1.010197167 container init 39bf76de4277981f74c2f82c8c7a31ecde5e6c6d264420da41d8616e24d42223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_grothendieck, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6581 keys, 10696642 bytes, temperature: kUnknown
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337807543472, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 10696642, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10646945, "index_size": 32113, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16517, "raw_key_size": 164496, "raw_average_key_size": 24, "raw_value_size": 10523096, "raw_average_value_size": 1599, "num_data_blocks": 1301, "num_entries": 6581, "num_filter_entries": 6581, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759337807, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:56:47 np0005464891 podman[294693]: 2025-10-01 16:56:47.54690913 +0000 UTC m=+1.024025731 container start 39bf76de4277981f74c2f82c8c7a31ecde5e6c6d264420da41d8616e24d42223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_grothendieck, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 12:56:47 np0005464891 vigorous_grothendieck[294710]: 167 167
Oct  1 12:56:47 np0005464891 systemd[1]: libpod-39bf76de4277981f74c2f82c8c7a31ecde5e6c6d264420da41d8616e24d42223.scope: Deactivated successfully.
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3687029987' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3687029987' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:47.543769) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 10696642 bytes
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:47.610858) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 28.9 rd, 23.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 10.3 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(10.4) write-amplify(4.7) OK, records in: 7050, records dropped: 469 output_compression: NoCompression
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:47.610907) EVENT_LOG_v1 {"time_micros": 1759337807610884, "job": 36, "event": "compaction_finished", "compaction_time_micros": 452696, "compaction_time_cpu_micros": 33719, "output_level": 6, "num_output_files": 1, "total_output_size": 10696642, "num_input_records": 7050, "num_output_records": 6581, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337807612366, "job": 36, "event": "table_file_deletion", "file_number": 67}
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337807616387, "job": 36, "event": "table_file_deletion", "file_number": 65}
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:47.090717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:47.616569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:47.616578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:47.616582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:47.616587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:56:47 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:47.616591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:56:47 np0005464891 podman[294693]: 2025-10-01 16:56:47.647366264 +0000 UTC m=+1.124482885 container attach 39bf76de4277981f74c2f82c8c7a31ecde5e6c6d264420da41d8616e24d42223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_grothendieck, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 12:56:47 np0005464891 podman[294693]: 2025-10-01 16:56:47.648478914 +0000 UTC m=+1.125595515 container died 39bf76de4277981f74c2f82c8c7a31ecde5e6c6d264420da41d8616e24d42223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_grothendieck, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:56:47 np0005464891 systemd[1]: var-lib-containers-storage-overlay-2a9867d64a855b5267fd56acd0c7d4dbab060fb9221706104ac2b4aba3322cf1-merged.mount: Deactivated successfully.
Oct  1 12:56:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:47.989 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:56:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:47.991 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 12:56:47 np0005464891 nova_compute[259907]: 2025-10-01 16:56:47.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:48 np0005464891 podman[294693]: 2025-10-01 16:56:48.057572877 +0000 UTC m=+1.534689468 container remove 39bf76de4277981f74c2f82c8c7a31ecde5e6c6d264420da41d8616e24d42223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_grothendieck, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 12:56:48 np0005464891 systemd[1]: libpod-conmon-39bf76de4277981f74c2f82c8c7a31ecde5e6c6d264420da41d8616e24d42223.scope: Deactivated successfully.
Oct  1 12:56:48 np0005464891 podman[294734]: 2025-10-01 16:56:48.29707516 +0000 UTC m=+0.077389583 container create b65745051ba124aaa1aaf22863c3d9303e442cbcd53e308e3fa65cd746f2b7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:56:48 np0005464891 podman[294734]: 2025-10-01 16:56:48.241854347 +0000 UTC m=+0.022168790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:56:48 np0005464891 systemd[1]: Started libpod-conmon-b65745051ba124aaa1aaf22863c3d9303e442cbcd53e308e3fa65cd746f2b7cd.scope.
Oct  1 12:56:48 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:56:48 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24b3544004bb9b2de3ffef2c0a49cb31d9f558d4c3a6a69f734994973af85153/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:56:48 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24b3544004bb9b2de3ffef2c0a49cb31d9f558d4c3a6a69f734994973af85153/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:56:48 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24b3544004bb9b2de3ffef2c0a49cb31d9f558d4c3a6a69f734994973af85153/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:56:48 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24b3544004bb9b2de3ffef2c0a49cb31d9f558d4c3a6a69f734994973af85153/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:56:48 np0005464891 podman[294734]: 2025-10-01 16:56:48.506849448 +0000 UTC m=+0.287163881 container init b65745051ba124aaa1aaf22863c3d9303e442cbcd53e308e3fa65cd746f2b7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:56:48 np0005464891 podman[294734]: 2025-10-01 16:56:48.515567973 +0000 UTC m=+0.295882396 container start b65745051ba124aaa1aaf22863c3d9303e442cbcd53e308e3fa65cd746f2b7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 12:56:48 np0005464891 podman[294734]: 2025-10-01 16:56:48.555357878 +0000 UTC m=+0.335672321 container attach b65745051ba124aaa1aaf22863c3d9303e442cbcd53e308e3fa65cd746f2b7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:56:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1693: 321 pgs: 321 active+clean; 88 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 3.9 KiB/s wr, 74 op/s
Oct  1 12:56:49 np0005464891 gallant_sinoussi[294750]: {
Oct  1 12:56:49 np0005464891 gallant_sinoussi[294750]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:56:49 np0005464891 gallant_sinoussi[294750]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:56:49 np0005464891 gallant_sinoussi[294750]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:56:49 np0005464891 gallant_sinoussi[294750]:        "osd_id": 2,
Oct  1 12:56:49 np0005464891 gallant_sinoussi[294750]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:56:49 np0005464891 gallant_sinoussi[294750]:        "type": "bluestore"
Oct  1 12:56:49 np0005464891 gallant_sinoussi[294750]:    },
Oct  1 12:56:49 np0005464891 gallant_sinoussi[294750]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:56:49 np0005464891 gallant_sinoussi[294750]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:56:49 np0005464891 gallant_sinoussi[294750]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:56:49 np0005464891 gallant_sinoussi[294750]:        "osd_id": 0,
Oct  1 12:56:49 np0005464891 gallant_sinoussi[294750]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:56:49 np0005464891 gallant_sinoussi[294750]:        "type": "bluestore"
Oct  1 12:56:49 np0005464891 gallant_sinoussi[294750]:    },
Oct  1 12:56:49 np0005464891 gallant_sinoussi[294750]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:56:49 np0005464891 gallant_sinoussi[294750]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:56:49 np0005464891 gallant_sinoussi[294750]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:56:49 np0005464891 gallant_sinoussi[294750]:        "osd_id": 1,
Oct  1 12:56:49 np0005464891 gallant_sinoussi[294750]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:56:49 np0005464891 gallant_sinoussi[294750]:        "type": "bluestore"
Oct  1 12:56:49 np0005464891 gallant_sinoussi[294750]:    }
Oct  1 12:56:49 np0005464891 gallant_sinoussi[294750]: }
Oct  1 12:56:49 np0005464891 systemd[1]: libpod-b65745051ba124aaa1aaf22863c3d9303e442cbcd53e308e3fa65cd746f2b7cd.scope: Deactivated successfully.
Oct  1 12:56:49 np0005464891 podman[294734]: 2025-10-01 16:56:49.591798364 +0000 UTC m=+1.372112787 container died b65745051ba124aaa1aaf22863c3d9303e442cbcd53e308e3fa65cd746f2b7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sinoussi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  1 12:56:49 np0005464891 systemd[1]: libpod-b65745051ba124aaa1aaf22863c3d9303e442cbcd53e308e3fa65cd746f2b7cd.scope: Consumed 1.072s CPU time.
Oct  1 12:56:49 np0005464891 systemd[1]: var-lib-containers-storage-overlay-24b3544004bb9b2de3ffef2c0a49cb31d9f558d4c3a6a69f734994973af85153-merged.mount: Deactivated successfully.
Oct  1 12:56:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:56:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2953257636' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:56:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:56:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2953257636' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:56:49 np0005464891 podman[294734]: 2025-10-01 16:56:49.812266451 +0000 UTC m=+1.592580914 container remove b65745051ba124aaa1aaf22863c3d9303e442cbcd53e308e3fa65cd746f2b7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 12:56:49 np0005464891 systemd[1]: libpod-conmon-b65745051ba124aaa1aaf22863c3d9303e442cbcd53e308e3fa65cd746f2b7cd.scope: Deactivated successfully.
Oct  1 12:56:49 np0005464891 podman[294784]: 2025-10-01 16:56:49.856046493 +0000 UTC m=+0.229235584 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  1 12:56:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:56:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:56:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:56:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:56:49 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 2d53cdc0-1c17-41c4-8ef4-55ed4b207510 does not exist
Oct  1 12:56:49 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 7181890e-fd93-422d-a03e-195c14d0432b does not exist
Oct  1 12:56:50 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:56:50 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:56:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1694: 321 pgs: 321 active+clean; 88 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.6 KiB/s wr, 103 op/s
Oct  1 12:56:51 np0005464891 nova_compute[259907]: 2025-10-01 16:56:51.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:51 np0005464891 nova_compute[259907]: 2025-10-01 16:56:51.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:56:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e424 do_prune osdmap full prune enabled
Oct  1 12:56:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e425 e425: 3 total, 3 up, 3 in
Oct  1 12:56:52 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e425: 3 total, 3 up, 3 in
Oct  1 12:56:52 np0005464891 ovn_controller[152409]: 2025-10-01T16:56:52Z|00170|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct  1 12:56:52 np0005464891 nova_compute[259907]: 2025-10-01 16:56:52.989 2 DEBUG oslo_concurrency.lockutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "03ad1fe8-a967-4d62-a904-ceda4729227a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:56:52 np0005464891 nova_compute[259907]: 2025-10-01 16:56:52.990 2 DEBUG oslo_concurrency.lockutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "03ad1fe8-a967-4d62-a904-ceda4729227a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:56:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e425 do_prune osdmap full prune enabled
Oct  1 12:56:53 np0005464891 nova_compute[259907]: 2025-10-01 16:56:53.025 2 DEBUG nova.compute.manager [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 12:56:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e426 e426: 3 total, 3 up, 3 in
Oct  1 12:56:53 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e426: 3 total, 3 up, 3 in
Oct  1 12:56:53 np0005464891 nova_compute[259907]: 2025-10-01 16:56:53.127 2 DEBUG oslo_concurrency.lockutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:56:53 np0005464891 nova_compute[259907]: 2025-10-01 16:56:53.127 2 DEBUG oslo_concurrency.lockutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:56:53 np0005464891 nova_compute[259907]: 2025-10-01 16:56:53.136 2 DEBUG nova.virt.hardware [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 12:56:53 np0005464891 nova_compute[259907]: 2025-10-01 16:56:53.137 2 INFO nova.compute.claims [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 12:56:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1697: 321 pgs: 321 active+clean; 88 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 895 B/s wr, 83 op/s
Oct  1 12:56:53 np0005464891 nova_compute[259907]: 2025-10-01 16:56:53.300 2 DEBUG oslo_concurrency.processutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:56:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:56:53 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3725961014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:56:53 np0005464891 nova_compute[259907]: 2025-10-01 16:56:53.781 2 DEBUG oslo_concurrency.processutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:56:53 np0005464891 nova_compute[259907]: 2025-10-01 16:56:53.790 2 DEBUG nova.compute.provider_tree [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:56:53 np0005464891 nova_compute[259907]: 2025-10-01 16:56:53.811 2 DEBUG nova.scheduler.client.report [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:56:53 np0005464891 nova_compute[259907]: 2025-10-01 16:56:53.839 2 DEBUG oslo_concurrency.lockutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:56:53 np0005464891 nova_compute[259907]: 2025-10-01 16:56:53.840 2 DEBUG nova.compute.manager [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 12:56:53 np0005464891 nova_compute[259907]: 2025-10-01 16:56:53.913 2 DEBUG nova.compute.manager [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 12:56:53 np0005464891 nova_compute[259907]: 2025-10-01 16:56:53.914 2 DEBUG nova.network.neutron [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 12:56:53 np0005464891 nova_compute[259907]: 2025-10-01 16:56:53.942 2 INFO nova.virt.libvirt.driver [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 12:56:53 np0005464891 nova_compute[259907]: 2025-10-01 16:56:53.967 2 DEBUG nova.compute.manager [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 12:56:54 np0005464891 nova_compute[259907]: 2025-10-01 16:56:54.082 2 DEBUG nova.compute.manager [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 12:56:54 np0005464891 nova_compute[259907]: 2025-10-01 16:56:54.084 2 DEBUG nova.virt.libvirt.driver [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 12:56:54 np0005464891 nova_compute[259907]: 2025-10-01 16:56:54.084 2 INFO nova.virt.libvirt.driver [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Creating image(s)#033[00m
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e426 do_prune osdmap full prune enabled
Oct  1 12:56:54 np0005464891 nova_compute[259907]: 2025-10-01 16:56:54.114 2 DEBUG nova.storage.rbd_utils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] rbd image 03ad1fe8-a967-4d62-a904-ceda4729227a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:54.173192) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337814173256, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 373, "num_deletes": 252, "total_data_size": 224984, "memory_usage": 232200, "flush_reason": "Manual Compaction"}
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e427 e427: 3 total, 3 up, 3 in
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337814184550, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 223108, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33999, "largest_seqno": 34371, "table_properties": {"data_size": 220747, "index_size": 461, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6098, "raw_average_key_size": 19, "raw_value_size": 215931, "raw_average_value_size": 685, "num_data_blocks": 19, "num_entries": 315, "num_filter_entries": 315, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759337807, "oldest_key_time": 1759337807, "file_creation_time": 1759337814, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 11375 microseconds, and 1162 cpu microseconds.
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e427: 3 total, 3 up, 3 in
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:54.184591) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 223108 bytes OK
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:54.184611) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:54.192379) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:54.192407) EVENT_LOG_v1 {"time_micros": 1759337814192400, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:54.192430) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 222481, prev total WAL file size 222522, number of live WAL files 2.
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:54.192888) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(217KB)], [68(10MB)]
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337814192946, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10919750, "oldest_snapshot_seqno": -1}
Oct  1 12:56:54 np0005464891 nova_compute[259907]: 2025-10-01 16:56:54.220 2 DEBUG nova.storage.rbd_utils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] rbd image 03ad1fe8-a967-4d62-a904-ceda4729227a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:56:54 np0005464891 nova_compute[259907]: 2025-10-01 16:56:54.246 2 DEBUG nova.storage.rbd_utils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] rbd image 03ad1fe8-a967-4d62-a904-ceda4729227a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:56:54 np0005464891 nova_compute[259907]: 2025-10-01 16:56:54.249 2 DEBUG oslo_concurrency.processutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:56:54 np0005464891 nova_compute[259907]: 2025-10-01 16:56:54.280 2 DEBUG nova.policy [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '906d3d29e27b49c1860f5397c6028d96', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bb5e44f7928546dfb674d53cd3727027', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6378 keys, 9103094 bytes, temperature: kUnknown
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337814307047, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 9103094, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9056599, "index_size": 29425, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16005, "raw_key_size": 161040, "raw_average_key_size": 25, "raw_value_size": 8938016, "raw_average_value_size": 1401, "num_data_blocks": 1178, "num_entries": 6378, "num_filter_entries": 6378, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759337814, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:54.307700) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9103094 bytes
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:54.312927) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 95.6 rd, 79.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.2 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(89.7) write-amplify(40.8) OK, records in: 6896, records dropped: 518 output_compression: NoCompression
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:54.312950) EVENT_LOG_v1 {"time_micros": 1759337814312939, "job": 38, "event": "compaction_finished", "compaction_time_micros": 114181, "compaction_time_cpu_micros": 31687, "output_level": 6, "num_output_files": 1, "total_output_size": 9103094, "num_input_records": 6896, "num_output_records": 6378, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337814313282, "job": 38, "event": "table_file_deletion", "file_number": 70}
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337814315209, "job": 38, "event": "table_file_deletion", "file_number": 68}
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:54.192759) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:54.315247) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:54.315254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:54.315266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:54.315269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:56:54 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:56:54.315272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:56:54 np0005464891 nova_compute[259907]: 2025-10-01 16:56:54.341 2 DEBUG oslo_concurrency.processutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:56:54 np0005464891 nova_compute[259907]: 2025-10-01 16:56:54.342 2 DEBUG oslo_concurrency.lockutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:56:54 np0005464891 nova_compute[259907]: 2025-10-01 16:56:54.343 2 DEBUG oslo_concurrency.lockutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:56:54 np0005464891 nova_compute[259907]: 2025-10-01 16:56:54.343 2 DEBUG oslo_concurrency.lockutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:56:54 np0005464891 nova_compute[259907]: 2025-10-01 16:56:54.365 2 DEBUG nova.storage.rbd_utils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] rbd image 03ad1fe8-a967-4d62-a904-ceda4729227a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:56:54 np0005464891 nova_compute[259907]: 2025-10-01 16:56:54.368 2 DEBUG oslo_concurrency.processutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa 03ad1fe8-a967-4d62-a904-ceda4729227a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:56:55 np0005464891 nova_compute[259907]: 2025-10-01 16:56:55.092 2 DEBUG nova.network.neutron [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Successfully created port: 7094fed9-935c-41be-bfa9-a61118606ba8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 12:56:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1699: 321 pgs: 321 active+clean; 91 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 144 KiB/s wr, 169 op/s
Oct  1 12:56:55 np0005464891 nova_compute[259907]: 2025-10-01 16:56:55.368 2 DEBUG oslo_concurrency.processutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa 03ad1fe8-a967-4d62-a904-ceda4729227a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.999s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:56:55 np0005464891 nova_compute[259907]: 2025-10-01 16:56:55.441 2 DEBUG nova.storage.rbd_utils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] resizing rbd image 03ad1fe8-a967-4d62-a904-ceda4729227a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  1 12:56:55 np0005464891 nova_compute[259907]: 2025-10-01 16:56:55.696 2 DEBUG nova.objects.instance [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lazy-loading 'migration_context' on Instance uuid 03ad1fe8-a967-4d62-a904-ceda4729227a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:56:55 np0005464891 nova_compute[259907]: 2025-10-01 16:56:55.734 2 DEBUG nova.virt.libvirt.driver [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  1 12:56:55 np0005464891 nova_compute[259907]: 2025-10-01 16:56:55.735 2 DEBUG nova.virt.libvirt.driver [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Ensure instance console log exists: /var/lib/nova/instances/03ad1fe8-a967-4d62-a904-ceda4729227a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 12:56:55 np0005464891 nova_compute[259907]: 2025-10-01 16:56:55.736 2 DEBUG oslo_concurrency.lockutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:56:55 np0005464891 nova_compute[259907]: 2025-10-01 16:56:55.737 2 DEBUG oslo_concurrency.lockutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:56:55 np0005464891 nova_compute[259907]: 2025-10-01 16:56:55.737 2 DEBUG oslo_concurrency.lockutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:56:56 np0005464891 nova_compute[259907]: 2025-10-01 16:56:56.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:56 np0005464891 nova_compute[259907]: 2025-10-01 16:56:56.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:56 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:56:56.994 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:56:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:56:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1700: 321 pgs: 321 active+clean; 91 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 822 KiB/s rd, 144 KiB/s wr, 107 op/s
Oct  1 12:56:57 np0005464891 nova_compute[259907]: 2025-10-01 16:56:57.237 2 DEBUG nova.network.neutron [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Successfully updated port: 7094fed9-935c-41be-bfa9-a61118606ba8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 12:56:57 np0005464891 nova_compute[259907]: 2025-10-01 16:56:57.273 2 DEBUG oslo_concurrency.lockutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "refresh_cache-03ad1fe8-a967-4d62-a904-ceda4729227a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:56:57 np0005464891 nova_compute[259907]: 2025-10-01 16:56:57.274 2 DEBUG oslo_concurrency.lockutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquired lock "refresh_cache-03ad1fe8-a967-4d62-a904-ceda4729227a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:56:57 np0005464891 nova_compute[259907]: 2025-10-01 16:56:57.274 2 DEBUG nova.network.neutron [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 12:56:57 np0005464891 nova_compute[259907]: 2025-10-01 16:56:57.395 2 DEBUG nova.compute.manager [req-e2c27cad-90c4-42e2-8ce6-9fd79c2751b4 req-4069fc4d-5fbe-4adc-8146-b2c063a9a188 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Received event network-changed-7094fed9-935c-41be-bfa9-a61118606ba8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:56:57 np0005464891 nova_compute[259907]: 2025-10-01 16:56:57.395 2 DEBUG nova.compute.manager [req-e2c27cad-90c4-42e2-8ce6-9fd79c2751b4 req-4069fc4d-5fbe-4adc-8146-b2c063a9a188 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Refreshing instance network info cache due to event network-changed-7094fed9-935c-41be-bfa9-a61118606ba8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:56:57 np0005464891 nova_compute[259907]: 2025-10-01 16:56:57.395 2 DEBUG oslo_concurrency.lockutils [req-e2c27cad-90c4-42e2-8ce6-9fd79c2751b4 req-4069fc4d-5fbe-4adc-8146-b2c063a9a188 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-03ad1fe8-a967-4d62-a904-ceda4729227a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:56:57 np0005464891 nova_compute[259907]: 2025-10-01 16:56:57.574 2 DEBUG nova.network.neutron [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 12:56:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:56:58 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3547512400' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:56:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:56:58 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3547512400' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:56:58 np0005464891 nova_compute[259907]: 2025-10-01 16:56:58.635 2 DEBUG nova.network.neutron [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Updating instance_info_cache with network_info: [{"id": "7094fed9-935c-41be-bfa9-a61118606ba8", "address": "fa:16:3e:fd:c3:c3", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7094fed9-93", "ovs_interfaceid": "7094fed9-935c-41be-bfa9-a61118606ba8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:56:58 np0005464891 nova_compute[259907]: 2025-10-01 16:56:58.760 2 DEBUG oslo_concurrency.lockutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Releasing lock "refresh_cache-03ad1fe8-a967-4d62-a904-ceda4729227a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:56:58 np0005464891 nova_compute[259907]: 2025-10-01 16:56:58.761 2 DEBUG nova.compute.manager [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Instance network_info: |[{"id": "7094fed9-935c-41be-bfa9-a61118606ba8", "address": "fa:16:3e:fd:c3:c3", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7094fed9-93", "ovs_interfaceid": "7094fed9-935c-41be-bfa9-a61118606ba8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 12:56:58 np0005464891 nova_compute[259907]: 2025-10-01 16:56:58.761 2 DEBUG oslo_concurrency.lockutils [req-e2c27cad-90c4-42e2-8ce6-9fd79c2751b4 req-4069fc4d-5fbe-4adc-8146-b2c063a9a188 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-03ad1fe8-a967-4d62-a904-ceda4729227a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:56:58 np0005464891 nova_compute[259907]: 2025-10-01 16:56:58.761 2 DEBUG nova.network.neutron [req-e2c27cad-90c4-42e2-8ce6-9fd79c2751b4 req-4069fc4d-5fbe-4adc-8146-b2c063a9a188 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Refreshing network info cache for port 7094fed9-935c-41be-bfa9-a61118606ba8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:56:58 np0005464891 nova_compute[259907]: 2025-10-01 16:56:58.764 2 DEBUG nova.virt.libvirt.driver [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Start _get_guest_xml network_info=[{"id": "7094fed9-935c-41be-bfa9-a61118606ba8", "address": "fa:16:3e:fd:c3:c3", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7094fed9-93", "ovs_interfaceid": "7094fed9-935c-41be-bfa9-a61118606ba8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'image_id': 'f01c1e7c-fea3-4433-a44a-d71153552c78'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 12:56:58 np0005464891 nova_compute[259907]: 2025-10-01 16:56:58.768 2 WARNING nova.virt.libvirt.driver [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:56:58 np0005464891 nova_compute[259907]: 2025-10-01 16:56:58.773 2 DEBUG nova.virt.libvirt.host [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 12:56:58 np0005464891 nova_compute[259907]: 2025-10-01 16:56:58.774 2 DEBUG nova.virt.libvirt.host [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 12:56:58 np0005464891 nova_compute[259907]: 2025-10-01 16:56:58.776 2 DEBUG nova.virt.libvirt.host [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 12:56:58 np0005464891 nova_compute[259907]: 2025-10-01 16:56:58.777 2 DEBUG nova.virt.libvirt.host [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 12:56:58 np0005464891 nova_compute[259907]: 2025-10-01 16:56:58.778 2 DEBUG nova.virt.libvirt.driver [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 12:56:58 np0005464891 nova_compute[259907]: 2025-10-01 16:56:58.778 2 DEBUG nova.virt.hardware [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 12:56:58 np0005464891 nova_compute[259907]: 2025-10-01 16:56:58.778 2 DEBUG nova.virt.hardware [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 12:56:58 np0005464891 nova_compute[259907]: 2025-10-01 16:56:58.778 2 DEBUG nova.virt.hardware [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 12:56:58 np0005464891 nova_compute[259907]: 2025-10-01 16:56:58.779 2 DEBUG nova.virt.hardware [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 12:56:58 np0005464891 nova_compute[259907]: 2025-10-01 16:56:58.779 2 DEBUG nova.virt.hardware [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 12:56:58 np0005464891 nova_compute[259907]: 2025-10-01 16:56:58.779 2 DEBUG nova.virt.hardware [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 12:56:58 np0005464891 nova_compute[259907]: 2025-10-01 16:56:58.779 2 DEBUG nova.virt.hardware [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 12:56:58 np0005464891 nova_compute[259907]: 2025-10-01 16:56:58.780 2 DEBUG nova.virt.hardware [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 12:56:58 np0005464891 nova_compute[259907]: 2025-10-01 16:56:58.780 2 DEBUG nova.virt.hardware [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 12:56:58 np0005464891 nova_compute[259907]: 2025-10-01 16:56:58.780 2 DEBUG nova.virt.hardware [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 12:56:58 np0005464891 nova_compute[259907]: 2025-10-01 16:56:58.780 2 DEBUG nova.virt.hardware [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 12:56:58 np0005464891 nova_compute[259907]: 2025-10-01 16:56:58.783 2 DEBUG oslo_concurrency.processutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:56:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1701: 321 pgs: 321 active+clean; 119 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 761 KiB/s rd, 2.1 MiB/s wr, 185 op/s
Oct  1 12:56:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:56:59 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3203393714' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.242 2 DEBUG oslo_concurrency.processutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.262 2 DEBUG nova.storage.rbd_utils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] rbd image 03ad1fe8-a967-4d62-a904-ceda4729227a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.267 2 DEBUG oslo_concurrency.processutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:56:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:56:59 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2527931002' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.719 2 DEBUG oslo_concurrency.processutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.722 2 DEBUG nova.virt.libvirt.vif [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:56:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-829299339',display_name='tempest-TestEncryptedCinderVolumes-server-829299339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-829299339',id=18,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIl8GG+Hu3ZeIB1jTbep6CoWVksHXyZXyjvntmOv7OGRe4G98GRtUibF6/2O1ilX4yVyQx2ndKQDONwIhDbTq9iQHoxJ5BxTIpatSro6LGX2MFYFIPrpekYlMom8yztJVQ==',key_name='tempest-keypair-341137682',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bb5e44f7928546dfb674d53cd3727027',ramdisk_id='',reservation_id='r-qpd0blyf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-803701988',owner_user_name='tempest-TestEncryptedCinderVolumes-803701988-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:56:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='906d3d29e27b49c1860f5397c6028d96',uuid=03ad1fe8-a967-4d62-a904-ceda4729227a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7094fed9-935c-41be-bfa9-a61118606ba8", "address": "fa:16:3e:fd:c3:c3", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7094fed9-93", "ovs_interfaceid": "7094fed9-935c-41be-bfa9-a61118606ba8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.722 2 DEBUG nova.network.os_vif_util [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Converting VIF {"id": "7094fed9-935c-41be-bfa9-a61118606ba8", "address": "fa:16:3e:fd:c3:c3", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7094fed9-93", "ovs_interfaceid": "7094fed9-935c-41be-bfa9-a61118606ba8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.724 2 DEBUG nova.network.os_vif_util [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:c3:c3,bridge_name='br-int',has_traffic_filtering=True,id=7094fed9-935c-41be-bfa9-a61118606ba8,network=Network(2345ad6b-d676-4546-a17e-6f7405ff5f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7094fed9-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.726 2 DEBUG nova.objects.instance [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lazy-loading 'pci_devices' on Instance uuid 03ad1fe8-a967-4d62-a904-ceda4729227a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.751 2 DEBUG nova.virt.libvirt.driver [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] End _get_guest_xml xml=<domain type="kvm">
Oct  1 12:56:59 np0005464891 nova_compute[259907]:  <uuid>03ad1fe8-a967-4d62-a904-ceda4729227a</uuid>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:  <name>instance-00000012</name>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-829299339</nova:name>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 16:56:58</nova:creationTime>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 12:56:59 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:        <nova:user uuid="906d3d29e27b49c1860f5397c6028d96">tempest-TestEncryptedCinderVolumes-803701988-project-member</nova:user>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:        <nova:project uuid="bb5e44f7928546dfb674d53cd3727027">tempest-TestEncryptedCinderVolumes-803701988</nova:project>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <nova:root type="image" uuid="f01c1e7c-fea3-4433-a44a-d71153552c78"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:        <nova:port uuid="7094fed9-935c-41be-bfa9-a61118606ba8">
Oct  1 12:56:59 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <system>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <entry name="serial">03ad1fe8-a967-4d62-a904-ceda4729227a</entry>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <entry name="uuid">03ad1fe8-a967-4d62-a904-ceda4729227a</entry>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    </system>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:  <os>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:  </clock>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/03ad1fe8-a967-4d62-a904-ceda4729227a_disk">
Oct  1 12:56:59 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:56:59 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/03ad1fe8-a967-4d62-a904-ceda4729227a_disk.config">
Oct  1 12:56:59 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:56:59 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:fd:c3:c3"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <target dev="tap7094fed9-93"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/03ad1fe8-a967-4d62-a904-ceda4729227a/console.log" append="off"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    </serial>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <video>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 12:56:59 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 12:56:59 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:56:59 np0005464891 nova_compute[259907]: </domain>
Oct  1 12:56:59 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.752 2 DEBUG nova.compute.manager [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Preparing to wait for external event network-vif-plugged-7094fed9-935c-41be-bfa9-a61118606ba8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.753 2 DEBUG oslo_concurrency.lockutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "03ad1fe8-a967-4d62-a904-ceda4729227a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.754 2 DEBUG oslo_concurrency.lockutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "03ad1fe8-a967-4d62-a904-ceda4729227a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.755 2 DEBUG oslo_concurrency.lockutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "03ad1fe8-a967-4d62-a904-ceda4729227a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.756 2 DEBUG nova.virt.libvirt.vif [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:56:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-829299339',display_name='tempest-TestEncryptedCinderVolumes-server-829299339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-829299339',id=18,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIl8GG+Hu3ZeIB1jTbep6CoWVksHXyZXyjvntmOv7OGRe4G98GRtUibF6/2O1ilX4yVyQx2ndKQDONwIhDbTq9iQHoxJ5BxTIpatSro6LGX2MFYFIPrpekYlMom8yztJVQ==',key_name='tempest-keypair-341137682',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bb5e44f7928546dfb674d53cd3727027',ramdisk_id='',reservation_id='r-qpd0blyf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-803701988',owner_user_name='tempest-TestEncryptedCinderVolumes-803701988-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:56:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='906d3d29e27b49c1860f5397c6028d96',uuid=03ad1fe8-a967-4d62-a904-ceda4729227a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7094fed9-935c-41be-bfa9-a61118606ba8", "address": "fa:16:3e:fd:c3:c3", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7094fed9-93", "ovs_interfaceid": "7094fed9-935c-41be-bfa9-a61118606ba8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.757 2 DEBUG nova.network.os_vif_util [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Converting VIF {"id": "7094fed9-935c-41be-bfa9-a61118606ba8", "address": "fa:16:3e:fd:c3:c3", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7094fed9-93", "ovs_interfaceid": "7094fed9-935c-41be-bfa9-a61118606ba8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.758 2 DEBUG nova.network.os_vif_util [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:c3:c3,bridge_name='br-int',has_traffic_filtering=True,id=7094fed9-935c-41be-bfa9-a61118606ba8,network=Network(2345ad6b-d676-4546-a17e-6f7405ff5f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7094fed9-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.759 2 DEBUG os_vif [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:c3:c3,bridge_name='br-int',has_traffic_filtering=True,id=7094fed9-935c-41be-bfa9-a61118606ba8,network=Network(2345ad6b-d676-4546-a17e-6f7405ff5f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7094fed9-93') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.762 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.763 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.768 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7094fed9-93, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.769 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7094fed9-93, col_values=(('external_ids', {'iface-id': '7094fed9-935c-41be-bfa9-a61118606ba8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fd:c3:c3', 'vm-uuid': '03ad1fe8-a967-4d62-a904-ceda4729227a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:56:59 np0005464891 NetworkManager[44940]: <info>  [1759337819.8206] manager: (tap7094fed9-93): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/99)
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.828 2 INFO os_vif [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:c3:c3,bridge_name='br-int',has_traffic_filtering=True,id=7094fed9-935c-41be-bfa9-a61118606ba8,network=Network(2345ad6b-d676-4546-a17e-6f7405ff5f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7094fed9-93')#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.902 2 DEBUG nova.virt.libvirt.driver [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.903 2 DEBUG nova.virt.libvirt.driver [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.903 2 DEBUG nova.virt.libvirt.driver [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] No VIF found with MAC fa:16:3e:fd:c3:c3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.904 2 INFO nova.virt.libvirt.driver [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Using config drive#033[00m
Oct  1 12:56:59 np0005464891 nova_compute[259907]: 2025-10-01 16:56:59.934 2 DEBUG nova.storage.rbd_utils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] rbd image 03ad1fe8-a967-4d62-a904-ceda4729227a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:57:00 np0005464891 nova_compute[259907]: 2025-10-01 16:57:00.361 2 INFO nova.virt.libvirt.driver [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Creating config drive at /var/lib/nova/instances/03ad1fe8-a967-4d62-a904-ceda4729227a/disk.config#033[00m
Oct  1 12:57:00 np0005464891 nova_compute[259907]: 2025-10-01 16:57:00.369 2 DEBUG oslo_concurrency.processutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/03ad1fe8-a967-4d62-a904-ceda4729227a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4aod3vxg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:00 np0005464891 nova_compute[259907]: 2025-10-01 16:57:00.408 2 DEBUG nova.network.neutron [req-e2c27cad-90c4-42e2-8ce6-9fd79c2751b4 req-4069fc4d-5fbe-4adc-8146-b2c063a9a188 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Updated VIF entry in instance network info cache for port 7094fed9-935c-41be-bfa9-a61118606ba8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:57:00 np0005464891 nova_compute[259907]: 2025-10-01 16:57:00.409 2 DEBUG nova.network.neutron [req-e2c27cad-90c4-42e2-8ce6-9fd79c2751b4 req-4069fc4d-5fbe-4adc-8146-b2c063a9a188 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Updating instance_info_cache with network_info: [{"id": "7094fed9-935c-41be-bfa9-a61118606ba8", "address": "fa:16:3e:fd:c3:c3", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7094fed9-93", "ovs_interfaceid": "7094fed9-935c-41be-bfa9-a61118606ba8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:57:00 np0005464891 nova_compute[259907]: 2025-10-01 16:57:00.434 2 DEBUG oslo_concurrency.lockutils [req-e2c27cad-90c4-42e2-8ce6-9fd79c2751b4 req-4069fc4d-5fbe-4adc-8146-b2c063a9a188 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-03ad1fe8-a967-4d62-a904-ceda4729227a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:57:00 np0005464891 nova_compute[259907]: 2025-10-01 16:57:00.516 2 DEBUG oslo_concurrency.processutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/03ad1fe8-a967-4d62-a904-ceda4729227a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4aod3vxg" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:57:00 np0005464891 nova_compute[259907]: 2025-10-01 16:57:00.544 2 DEBUG nova.storage.rbd_utils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] rbd image 03ad1fe8-a967-4d62-a904-ceda4729227a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:57:00 np0005464891 nova_compute[259907]: 2025-10-01 16:57:00.548 2 DEBUG oslo_concurrency.processutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/03ad1fe8-a967-4d62-a904-ceda4729227a/disk.config 03ad1fe8-a967-4d62-a904-ceda4729227a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e427 do_prune osdmap full prune enabled
Oct  1 12:57:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e428 e428: 3 total, 3 up, 3 in
Oct  1 12:57:00 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e428: 3 total, 3 up, 3 in
Oct  1 12:57:00 np0005464891 nova_compute[259907]: 2025-10-01 16:57:00.714 2 DEBUG oslo_concurrency.processutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/03ad1fe8-a967-4d62-a904-ceda4729227a/disk.config 03ad1fe8-a967-4d62-a904-ceda4729227a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:57:00 np0005464891 nova_compute[259907]: 2025-10-01 16:57:00.717 2 INFO nova.virt.libvirt.driver [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Deleting local config drive /var/lib/nova/instances/03ad1fe8-a967-4d62-a904-ceda4729227a/disk.config because it was imported into RBD.#033[00m
Oct  1 12:57:00 np0005464891 kernel: tap7094fed9-93: entered promiscuous mode
Oct  1 12:57:00 np0005464891 NetworkManager[44940]: <info>  [1759337820.7734] manager: (tap7094fed9-93): new Tun device (/org/freedesktop/NetworkManager/Devices/100)
Oct  1 12:57:00 np0005464891 nova_compute[259907]: 2025-10-01 16:57:00.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:00 np0005464891 ovn_controller[152409]: 2025-10-01T16:57:00Z|00171|binding|INFO|Claiming lport 7094fed9-935c-41be-bfa9-a61118606ba8 for this chassis.
Oct  1 12:57:00 np0005464891 ovn_controller[152409]: 2025-10-01T16:57:00Z|00172|binding|INFO|7094fed9-935c-41be-bfa9-a61118606ba8: Claiming fa:16:3e:fd:c3:c3 10.100.0.9
Oct  1 12:57:00 np0005464891 nova_compute[259907]: 2025-10-01 16:57:00.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:00 np0005464891 nova_compute[259907]: 2025-10-01 16:57:00.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:00.804 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:c3:c3 10.100.0.9'], port_security=['fa:16:3e:fd:c3:c3 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '03ad1fe8-a967-4d62-a904-ceda4729227a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2345ad6b-d676-4546-a17e-6f7405ff5f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bb5e44f7928546dfb674d53cd3727027', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7748abdc-2492-422e-a502-5b4edc6dc141', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08e741b0-61e8-4126-b98f-610a01494f2d, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=7094fed9-935c-41be-bfa9-a61118606ba8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:57:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:00.805 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 7094fed9-935c-41be-bfa9-a61118606ba8 in datapath 2345ad6b-d676-4546-a17e-6f7405ff5f24 bound to our chassis#033[00m
Oct  1 12:57:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:00.807 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2345ad6b-d676-4546-a17e-6f7405ff5f24#033[00m
Oct  1 12:57:00 np0005464891 systemd-udevd[295191]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:57:00 np0005464891 systemd-machined[214891]: New machine qemu-18-instance-00000012.
Oct  1 12:57:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:00.827 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d749d6-3e43-494a-8c25-88103f104ccb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:00.828 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2345ad6b-d1 in ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 12:57:00 np0005464891 NetworkManager[44940]: <info>  [1759337820.8311] device (tap7094fed9-93): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:57:00 np0005464891 NetworkManager[44940]: <info>  [1759337820.8319] device (tap7094fed9-93): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 12:57:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:00.832 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2345ad6b-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 12:57:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:00.832 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[b84bd6a6-8941-46bc-a6fa-b70d883bd1cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:00.833 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[6244fe05-3f78-4d66-9671-3c2c3074eab4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:00 np0005464891 systemd[1]: Started Virtual Machine qemu-18-instance-00000012.
Oct  1 12:57:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:00.855 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[da3013e2-edb7-49c9-b9c3-aadcff8fa312]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:00 np0005464891 podman[295184]: 2025-10-01 16:57:00.879501954 +0000 UTC m=+0.080462625 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  1 12:57:00 np0005464891 ovn_controller[152409]: 2025-10-01T16:57:00Z|00173|binding|INFO|Setting lport 7094fed9-935c-41be-bfa9-a61118606ba8 ovn-installed in OVS
Oct  1 12:57:00 np0005464891 ovn_controller[152409]: 2025-10-01T16:57:00Z|00174|binding|INFO|Setting lport 7094fed9-935c-41be-bfa9-a61118606ba8 up in Southbound
Oct  1 12:57:00 np0005464891 nova_compute[259907]: 2025-10-01 16:57:00.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:00.915 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[f17c32f9-bbc2-45cf-b9cb-456127f676f2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:00.946 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[82197e90-5dfd-4102-8c94-f713b651f22d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:00 np0005464891 NetworkManager[44940]: <info>  [1759337820.9536] manager: (tap2345ad6b-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/101)
Oct  1 12:57:00 np0005464891 systemd-udevd[295202]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:57:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:00.951 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[d77d1e35-be5d-4706-b7d7-6cd86a2871c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:00.987 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[3def6738-de63-4496-a3e3-d24c99e47152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:00.990 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[3efbbb2e-262a-4c0e-9e09-08d09dbfc885]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:01 np0005464891 NetworkManager[44940]: <info>  [1759337821.0089] device (tap2345ad6b-d0): carrier: link connected
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:01.012 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a077bf-0f71-443f-b66d-75bfa86928a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:01.028 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[d02bcd48-d158-42a1-aed4-28d8cef291ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2345ad6b-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:95:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476958, 'reachable_time': 30600, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295238, 'error': None, 'target': 'ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:01.044 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[952b06db-a270-4279-93a7-3fbd03b6b231]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1e:9597'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476958, 'tstamp': 476958}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295239, 'error': None, 'target': 'ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:01.063 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[aea111b2-f518-4b5e-8c80-6c86bfd95aac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2345ad6b-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:95:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476958, 'reachable_time': 30600, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295240, 'error': None, 'target': 'ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:01.093 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[a37c477c-9210-4486-bd4d-2a7cb828a083]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.141 2 DEBUG nova.compute.manager [req-35828230-e277-4017-bf20-88fc13b03724 req-bec5754a-d473-4db2-bb21-98d3840817cd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Received event network-vif-plugged-7094fed9-935c-41be-bfa9-a61118606ba8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.142 2 DEBUG oslo_concurrency.lockutils [req-35828230-e277-4017-bf20-88fc13b03724 req-bec5754a-d473-4db2-bb21-98d3840817cd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "03ad1fe8-a967-4d62-a904-ceda4729227a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.142 2 DEBUG oslo_concurrency.lockutils [req-35828230-e277-4017-bf20-88fc13b03724 req-bec5754a-d473-4db2-bb21-98d3840817cd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "03ad1fe8-a967-4d62-a904-ceda4729227a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.143 2 DEBUG oslo_concurrency.lockutils [req-35828230-e277-4017-bf20-88fc13b03724 req-bec5754a-d473-4db2-bb21-98d3840817cd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "03ad1fe8-a967-4d62-a904-ceda4729227a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.143 2 DEBUG nova.compute.manager [req-35828230-e277-4017-bf20-88fc13b03724 req-bec5754a-d473-4db2-bb21-98d3840817cd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Processing event network-vif-plugged-7094fed9-935c-41be-bfa9-a61118606ba8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:01.158 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[adc6344a-ae8b-4c35-af3d-c42e5ca0b285]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1703: 321 pgs: 321 active+clean; 180 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 689 KiB/s rd, 5.3 MiB/s wr, 188 op/s
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:01.160 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2345ad6b-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:01.160 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:01.161 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2345ad6b-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:57:01 np0005464891 kernel: tap2345ad6b-d0: entered promiscuous mode
Oct  1 12:57:01 np0005464891 NetworkManager[44940]: <info>  [1759337821.1655] manager: (tap2345ad6b-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:01.172 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2345ad6b-d0, col_values=(('external_ids', {'iface-id': '459f1bd9-9c63-458d-a0ce-6bd274d1ecbb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:57:01 np0005464891 ovn_controller[152409]: 2025-10-01T16:57:01Z|00175|binding|INFO|Releasing lport 459f1bd9-9c63-458d-a0ce-6bd274d1ecbb from this chassis (sb_readonly=0)
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:01.188 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2345ad6b-d676-4546-a17e-6f7405ff5f24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2345ad6b-d676-4546-a17e-6f7405ff5f24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:01.190 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[58a47526-a25d-4b61-a4e8-062a93555621]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:01.191 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-2345ad6b-d676-4546-a17e-6f7405ff5f24
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/2345ad6b-d676-4546-a17e-6f7405ff5f24.pid.haproxy
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID 2345ad6b-d676-4546-a17e-6f7405ff5f24
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 12:57:01 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:01.192 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24', 'env', 'PROCESS_TAG=haproxy-2345ad6b-d676-4546-a17e-6f7405ff5f24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2345ad6b-d676-4546-a17e-6f7405ff5f24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:01 np0005464891 podman[295314]: 2025-10-01 16:57:01.588737725 +0000 UTC m=+0.050694320 container create a064aa1ed4636262502f6f2a7b1dd6de214d26adb56ac5daaac178719e741d3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  1 12:57:01 np0005464891 systemd[1]: Started libpod-conmon-a064aa1ed4636262502f6f2a7b1dd6de214d26adb56ac5daaac178719e741d3b.scope.
Oct  1 12:57:01 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:57:01 np0005464891 podman[295314]: 2025-10-01 16:57:01.560838591 +0000 UTC m=+0.022795196 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 12:57:01 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f54595462b2204a762ad58839b6ac7f6e0faa1abe6b493d9105ae23a3392182/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 12:57:01 np0005464891 podman[295314]: 2025-10-01 16:57:01.677374353 +0000 UTC m=+0.139330968 container init a064aa1ed4636262502f6f2a7b1dd6de214d26adb56ac5daaac178719e741d3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  1 12:57:01 np0005464891 podman[295314]: 2025-10-01 16:57:01.687995694 +0000 UTC m=+0.149952299 container start a064aa1ed4636262502f6f2a7b1dd6de214d26adb56ac5daaac178719e741d3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  1 12:57:01 np0005464891 neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24[295330]: [NOTICE]   (295334) : New worker (295336) forked
Oct  1 12:57:01 np0005464891 neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24[295330]: [NOTICE]   (295334) : Loading success.
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.876 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337821.8758404, 03ad1fe8-a967-4d62-a904-ceda4729227a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.876 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] VM Started (Lifecycle Event)#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.878 2 DEBUG nova.compute.manager [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.883 2 DEBUG nova.virt.libvirt.driver [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.887 2 INFO nova.virt.libvirt.driver [-] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Instance spawned successfully.#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.887 2 DEBUG nova.virt.libvirt.driver [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.895 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.898 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.906 2 DEBUG nova.virt.libvirt.driver [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.907 2 DEBUG nova.virt.libvirt.driver [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.908 2 DEBUG nova.virt.libvirt.driver [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.908 2 DEBUG nova.virt.libvirt.driver [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.908 2 DEBUG nova.virt.libvirt.driver [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.909 2 DEBUG nova.virt.libvirt.driver [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.915 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.915 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337821.8759985, 03ad1fe8-a967-4d62-a904-ceda4729227a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.915 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] VM Paused (Lifecycle Event)#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.946 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.950 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337821.8820124, 03ad1fe8-a967-4d62-a904-ceda4729227a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.950 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] VM Resumed (Lifecycle Event)#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.978 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.981 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.990 2 INFO nova.compute.manager [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Took 7.91 seconds to spawn the instance on the hypervisor.#033[00m
Oct  1 12:57:01 np0005464891 nova_compute[259907]: 2025-10-01 16:57:01.991 2 DEBUG nova.compute.manager [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:57:02 np0005464891 nova_compute[259907]: 2025-10-01 16:57:02.005 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:57:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:57:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e428 do_prune osdmap full prune enabled
Oct  1 12:57:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e429 e429: 3 total, 3 up, 3 in
Oct  1 12:57:02 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e429: 3 total, 3 up, 3 in
Oct  1 12:57:02 np0005464891 nova_compute[259907]: 2025-10-01 16:57:02.058 2 INFO nova.compute.manager [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Took 8.96 seconds to build instance.#033[00m
Oct  1 12:57:02 np0005464891 nova_compute[259907]: 2025-10-01 16:57:02.080 2 DEBUG oslo_concurrency.lockutils [None req-91360355-f35b-4957-8066-1f3135da61f4 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "03ad1fe8-a967-4d62-a904-ceda4729227a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e429 do_prune osdmap full prune enabled
Oct  1 12:57:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e430 e430: 3 total, 3 up, 3 in
Oct  1 12:57:03 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e430: 3 total, 3 up, 3 in
Oct  1 12:57:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1706: 321 pgs: 321 active+clean; 180 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 970 KiB/s rd, 7.0 MiB/s wr, 236 op/s
Oct  1 12:57:03 np0005464891 nova_compute[259907]: 2025-10-01 16:57:03.270 2 DEBUG nova.compute.manager [req-64653394-17df-4931-a69f-e9520a1d2191 req-f4d239f7-00cc-4904-a1d3-6e69243923f3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Received event network-vif-plugged-7094fed9-935c-41be-bfa9-a61118606ba8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:57:03 np0005464891 nova_compute[259907]: 2025-10-01 16:57:03.271 2 DEBUG oslo_concurrency.lockutils [req-64653394-17df-4931-a69f-e9520a1d2191 req-f4d239f7-00cc-4904-a1d3-6e69243923f3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "03ad1fe8-a967-4d62-a904-ceda4729227a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:03 np0005464891 nova_compute[259907]: 2025-10-01 16:57:03.273 2 DEBUG oslo_concurrency.lockutils [req-64653394-17df-4931-a69f-e9520a1d2191 req-f4d239f7-00cc-4904-a1d3-6e69243923f3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "03ad1fe8-a967-4d62-a904-ceda4729227a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:03 np0005464891 nova_compute[259907]: 2025-10-01 16:57:03.274 2 DEBUG oslo_concurrency.lockutils [req-64653394-17df-4931-a69f-e9520a1d2191 req-f4d239f7-00cc-4904-a1d3-6e69243923f3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "03ad1fe8-a967-4d62-a904-ceda4729227a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:03 np0005464891 nova_compute[259907]: 2025-10-01 16:57:03.274 2 DEBUG nova.compute.manager [req-64653394-17df-4931-a69f-e9520a1d2191 req-f4d239f7-00cc-4904-a1d3-6e69243923f3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] No waiting events found dispatching network-vif-plugged-7094fed9-935c-41be-bfa9-a61118606ba8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:57:03 np0005464891 nova_compute[259907]: 2025-10-01 16:57:03.274 2 WARNING nova.compute.manager [req-64653394-17df-4931-a69f-e9520a1d2191 req-f4d239f7-00cc-4904-a1d3-6e69243923f3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Received unexpected event network-vif-plugged-7094fed9-935c-41be-bfa9-a61118606ba8 for instance with vm_state active and task_state None.#033[00m
Oct  1 12:57:03 np0005464891 NetworkManager[44940]: <info>  [1759337823.7861] manager: (patch-br-int-to-provnet-ee30e212-e482-494e-8716-bb3c2ef00bd5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Oct  1 12:57:03 np0005464891 NetworkManager[44940]: <info>  [1759337823.7879] manager: (patch-provnet-ee30e212-e482-494e-8716-bb3c2ef00bd5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Oct  1 12:57:03 np0005464891 nova_compute[259907]: 2025-10-01 16:57:03.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:03 np0005464891 nova_compute[259907]: 2025-10-01 16:57:03.818 2 DEBUG oslo_concurrency.lockutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "47531108-4f20-41bd-8fb8-77fae3a30b85" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:03 np0005464891 nova_compute[259907]: 2025-10-01 16:57:03.819 2 DEBUG oslo_concurrency.lockutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "47531108-4f20-41bd-8fb8-77fae3a30b85" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:03 np0005464891 nova_compute[259907]: 2025-10-01 16:57:03.846 2 DEBUG nova.compute.manager [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 12:57:03 np0005464891 nova_compute[259907]: 2025-10-01 16:57:03.915 2 DEBUG oslo_concurrency.lockutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:03 np0005464891 nova_compute[259907]: 2025-10-01 16:57:03.916 2 DEBUG oslo_concurrency.lockutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:03 np0005464891 nova_compute[259907]: 2025-10-01 16:57:03.926 2 DEBUG nova.virt.hardware [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 12:57:03 np0005464891 nova_compute[259907]: 2025-10-01 16:57:03.927 2 INFO nova.compute.claims [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 12:57:03 np0005464891 nova_compute[259907]: 2025-10-01 16:57:03.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:03 np0005464891 ovn_controller[152409]: 2025-10-01T16:57:03Z|00176|binding|INFO|Releasing lport 459f1bd9-9c63-458d-a0ce-6bd274d1ecbb from this chassis (sb_readonly=0)
Oct  1 12:57:03 np0005464891 nova_compute[259907]: 2025-10-01 16:57:03.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:04 np0005464891 nova_compute[259907]: 2025-10-01 16:57:04.094 2 DEBUG oslo_concurrency.processutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:57:04 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1421245787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:57:04 np0005464891 nova_compute[259907]: 2025-10-01 16:57:04.549 2 DEBUG oslo_concurrency.processutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:57:04 np0005464891 nova_compute[259907]: 2025-10-01 16:57:04.559 2 DEBUG nova.compute.provider_tree [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:57:04 np0005464891 nova_compute[259907]: 2025-10-01 16:57:04.602 2 DEBUG nova.scheduler.client.report [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:57:04 np0005464891 nova_compute[259907]: 2025-10-01 16:57:04.666 2 DEBUG oslo_concurrency.lockutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:04 np0005464891 nova_compute[259907]: 2025-10-01 16:57:04.668 2 DEBUG nova.compute.manager [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 12:57:04 np0005464891 nova_compute[259907]: 2025-10-01 16:57:04.758 2 DEBUG nova.compute.manager [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 12:57:04 np0005464891 nova_compute[259907]: 2025-10-01 16:57:04.759 2 DEBUG nova.network.neutron [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 12:57:04 np0005464891 nova_compute[259907]: 2025-10-01 16:57:04.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:04 np0005464891 nova_compute[259907]: 2025-10-01 16:57:04.876 2 INFO nova.virt.libvirt.driver [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 12:57:04 np0005464891 nova_compute[259907]: 2025-10-01 16:57:04.895 2 DEBUG nova.policy [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1280014cdfb74333ae8d71c78116e646', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8318b65fa88942a99937a0d198a04a9c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 12:57:04 np0005464891 nova_compute[259907]: 2025-10-01 16:57:04.928 2 DEBUG nova.compute.manager [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 12:57:05 np0005464891 nova_compute[259907]: 2025-10-01 16:57:05.037 2 INFO nova.virt.block_device [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Booting with volume e3868bbb-c720-4557-8ae5-297fa9b8743c at /dev/vda#033[00m
Oct  1 12:57:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1707: 321 pgs: 321 active+clean; 181 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.6 MiB/s wr, 210 op/s
Oct  1 12:57:05 np0005464891 nova_compute[259907]: 2025-10-01 16:57:05.179 2 DEBUG os_brick.utils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 12:57:05 np0005464891 nova_compute[259907]: 2025-10-01 16:57:05.181 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:05 np0005464891 nova_compute[259907]: 2025-10-01 16:57:05.201 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:57:05 np0005464891 nova_compute[259907]: 2025-10-01 16:57:05.201 741 DEBUG oslo.privsep.daemon [-] privsep: reply[855dbf7f-87e0-4dee-900d-92d10a0e5bb3]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:05 np0005464891 nova_compute[259907]: 2025-10-01 16:57:05.203 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:05 np0005464891 nova_compute[259907]: 2025-10-01 16:57:05.217 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:57:05 np0005464891 nova_compute[259907]: 2025-10-01 16:57:05.217 741 DEBUG oslo.privsep.daemon [-] privsep: reply[3745d41c-0a3a-4391-85c8-ea487dcabe3a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:05 np0005464891 nova_compute[259907]: 2025-10-01 16:57:05.219 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:05 np0005464891 nova_compute[259907]: 2025-10-01 16:57:05.228 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:57:05 np0005464891 nova_compute[259907]: 2025-10-01 16:57:05.229 741 DEBUG oslo.privsep.daemon [-] privsep: reply[98d95c55-8907-4cc5-9cc2-e217466dd1fb]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:05 np0005464891 nova_compute[259907]: 2025-10-01 16:57:05.230 741 DEBUG oslo.privsep.daemon [-] privsep: reply[eb6d0982-5c20-4eb6-84db-61d48f98b365]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:05 np0005464891 nova_compute[259907]: 2025-10-01 16:57:05.231 2 DEBUG oslo_concurrency.processutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:05 np0005464891 nova_compute[259907]: 2025-10-01 16:57:05.254 2 DEBUG oslo_concurrency.processutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:57:05 np0005464891 nova_compute[259907]: 2025-10-01 16:57:05.257 2 DEBUG os_brick.initiator.connectors.lightos [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 12:57:05 np0005464891 nova_compute[259907]: 2025-10-01 16:57:05.257 2 DEBUG os_brick.initiator.connectors.lightos [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 12:57:05 np0005464891 nova_compute[259907]: 2025-10-01 16:57:05.257 2 DEBUG os_brick.initiator.connectors.lightos [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 12:57:05 np0005464891 nova_compute[259907]: 2025-10-01 16:57:05.258 2 DEBUG os_brick.utils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] <== get_connector_properties: return (77ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 12:57:05 np0005464891 nova_compute[259907]: 2025-10-01 16:57:05.258 2 DEBUG nova.virt.block_device [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Updating existing volume attachment record: d4893310-aba1-4d62-aff7-41edc7e4bbaa _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 12:57:05 np0005464891 nova_compute[259907]: 2025-10-01 16:57:05.476 2 DEBUG nova.compute.manager [req-2a28ce47-f892-4850-8b46-b28e79430a00 req-4396d7ef-5407-4c25-b118-c57cad7f7408 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Received event network-changed-7094fed9-935c-41be-bfa9-a61118606ba8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:57:05 np0005464891 nova_compute[259907]: 2025-10-01 16:57:05.476 2 DEBUG nova.compute.manager [req-2a28ce47-f892-4850-8b46-b28e79430a00 req-4396d7ef-5407-4c25-b118-c57cad7f7408 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Refreshing instance network info cache due to event network-changed-7094fed9-935c-41be-bfa9-a61118606ba8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:57:05 np0005464891 nova_compute[259907]: 2025-10-01 16:57:05.477 2 DEBUG oslo_concurrency.lockutils [req-2a28ce47-f892-4850-8b46-b28e79430a00 req-4396d7ef-5407-4c25-b118-c57cad7f7408 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-03ad1fe8-a967-4d62-a904-ceda4729227a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:57:05 np0005464891 nova_compute[259907]: 2025-10-01 16:57:05.477 2 DEBUG oslo_concurrency.lockutils [req-2a28ce47-f892-4850-8b46-b28e79430a00 req-4396d7ef-5407-4c25-b118-c57cad7f7408 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-03ad1fe8-a967-4d62-a904-ceda4729227a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:57:05 np0005464891 nova_compute[259907]: 2025-10-01 16:57:05.478 2 DEBUG nova.network.neutron [req-2a28ce47-f892-4850-8b46-b28e79430a00 req-4396d7ef-5407-4c25-b118-c57cad7f7408 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Refreshing network info cache for port 7094fed9-935c-41be-bfa9-a61118606ba8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:57:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:57:05 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3988476879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:57:06 np0005464891 nova_compute[259907]: 2025-10-01 16:57:06.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:06 np0005464891 nova_compute[259907]: 2025-10-01 16:57:06.544 2 DEBUG nova.compute.manager [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 12:57:06 np0005464891 nova_compute[259907]: 2025-10-01 16:57:06.546 2 DEBUG nova.virt.libvirt.driver [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 12:57:06 np0005464891 nova_compute[259907]: 2025-10-01 16:57:06.547 2 INFO nova.virt.libvirt.driver [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Creating image(s)#033[00m
Oct  1 12:57:06 np0005464891 nova_compute[259907]: 2025-10-01 16:57:06.547 2 DEBUG nova.virt.libvirt.driver [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  1 12:57:06 np0005464891 nova_compute[259907]: 2025-10-01 16:57:06.547 2 DEBUG nova.virt.libvirt.driver [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Ensure instance console log exists: /var/lib/nova/instances/47531108-4f20-41bd-8fb8-77fae3a30b85/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 12:57:06 np0005464891 nova_compute[259907]: 2025-10-01 16:57:06.548 2 DEBUG oslo_concurrency.lockutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:06 np0005464891 nova_compute[259907]: 2025-10-01 16:57:06.548 2 DEBUG oslo_concurrency.lockutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:06 np0005464891 nova_compute[259907]: 2025-10-01 16:57:06.549 2 DEBUG oslo_concurrency.lockutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:57:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2894561058' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:57:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:57:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2894561058' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:57:06 np0005464891 nova_compute[259907]: 2025-10-01 16:57:06.637 2 DEBUG nova.network.neutron [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Successfully created port: 69588747-06d2-44cb-bcb8-bfa62dd280d3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 12:57:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:57:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1708: 321 pgs: 321 active+clean; 181 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 29 KiB/s wr, 139 op/s
Oct  1 12:57:07 np0005464891 nova_compute[259907]: 2025-10-01 16:57:07.341 2 DEBUG nova.network.neutron [req-2a28ce47-f892-4850-8b46-b28e79430a00 req-4396d7ef-5407-4c25-b118-c57cad7f7408 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Updated VIF entry in instance network info cache for port 7094fed9-935c-41be-bfa9-a61118606ba8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:57:07 np0005464891 nova_compute[259907]: 2025-10-01 16:57:07.342 2 DEBUG nova.network.neutron [req-2a28ce47-f892-4850-8b46-b28e79430a00 req-4396d7ef-5407-4c25-b118-c57cad7f7408 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Updating instance_info_cache with network_info: [{"id": "7094fed9-935c-41be-bfa9-a61118606ba8", "address": "fa:16:3e:fd:c3:c3", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7094fed9-93", "ovs_interfaceid": "7094fed9-935c-41be-bfa9-a61118606ba8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:57:07 np0005464891 nova_compute[259907]: 2025-10-01 16:57:07.580 2 DEBUG oslo_concurrency.lockutils [req-2a28ce47-f892-4850-8b46-b28e79430a00 req-4396d7ef-5407-4c25-b118-c57cad7f7408 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-03ad1fe8-a967-4d62-a904-ceda4729227a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:57:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1709: 321 pgs: 321 active+clean; 181 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 153 op/s
Oct  1 12:57:09 np0005464891 nova_compute[259907]: 2025-10-01 16:57:09.586 2 DEBUG nova.network.neutron [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Successfully updated port: 69588747-06d2-44cb-bcb8-bfa62dd280d3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 12:57:09 np0005464891 nova_compute[259907]: 2025-10-01 16:57:09.600 2 DEBUG oslo_concurrency.lockutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "refresh_cache-47531108-4f20-41bd-8fb8-77fae3a30b85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:57:09 np0005464891 nova_compute[259907]: 2025-10-01 16:57:09.601 2 DEBUG oslo_concurrency.lockutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquired lock "refresh_cache-47531108-4f20-41bd-8fb8-77fae3a30b85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:57:09 np0005464891 nova_compute[259907]: 2025-10-01 16:57:09.601 2 DEBUG nova.network.neutron [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 12:57:09 np0005464891 nova_compute[259907]: 2025-10-01 16:57:09.682 2 DEBUG nova.compute.manager [req-9a99fe4a-ebe3-4ccc-a66d-93b77073ad3f req-abfbb66e-8f9b-4520-8430-0b87ccb6d296 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Received event network-changed-69588747-06d2-44cb-bcb8-bfa62dd280d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:57:09 np0005464891 nova_compute[259907]: 2025-10-01 16:57:09.683 2 DEBUG nova.compute.manager [req-9a99fe4a-ebe3-4ccc-a66d-93b77073ad3f req-abfbb66e-8f9b-4520-8430-0b87ccb6d296 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Refreshing instance network info cache due to event network-changed-69588747-06d2-44cb-bcb8-bfa62dd280d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:57:09 np0005464891 nova_compute[259907]: 2025-10-01 16:57:09.683 2 DEBUG oslo_concurrency.lockutils [req-9a99fe4a-ebe3-4ccc-a66d-93b77073ad3f req-abfbb66e-8f9b-4520-8430-0b87ccb6d296 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-47531108-4f20-41bd-8fb8-77fae3a30b85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:57:09 np0005464891 nova_compute[259907]: 2025-10-01 16:57:09.723 2 DEBUG nova.network.neutron [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 12:57:09 np0005464891 nova_compute[259907]: 2025-10-01 16:57:09.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e430 do_prune osdmap full prune enabled
Oct  1 12:57:10 np0005464891 nova_compute[259907]: 2025-10-01 16:57:10.730 2 DEBUG nova.network.neutron [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Updating instance_info_cache with network_info: [{"id": "69588747-06d2-44cb-bcb8-bfa62dd280d3", "address": "fa:16:3e:e1:1c:22", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69588747-06", "ovs_interfaceid": "69588747-06d2-44cb-bcb8-bfa62dd280d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:57:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1710: 321 pgs: 321 active+clean; 181 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 21 KiB/s wr, 138 op/s
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.620 2 DEBUG oslo_concurrency.lockutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Releasing lock "refresh_cache-47531108-4f20-41bd-8fb8-77fae3a30b85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.621 2 DEBUG nova.compute.manager [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Instance network_info: |[{"id": "69588747-06d2-44cb-bcb8-bfa62dd280d3", "address": "fa:16:3e:e1:1c:22", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69588747-06", "ovs_interfaceid": "69588747-06d2-44cb-bcb8-bfa62dd280d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.622 2 DEBUG oslo_concurrency.lockutils [req-9a99fe4a-ebe3-4ccc-a66d-93b77073ad3f req-abfbb66e-8f9b-4520-8430-0b87ccb6d296 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-47531108-4f20-41bd-8fb8-77fae3a30b85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.623 2 DEBUG nova.network.neutron [req-9a99fe4a-ebe3-4ccc-a66d-93b77073ad3f req-abfbb66e-8f9b-4520-8430-0b87ccb6d296 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Refreshing network info cache for port 69588747-06d2-44cb-bcb8-bfa62dd280d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.629 2 DEBUG nova.virt.libvirt.driver [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Start _get_guest_xml network_info=[{"id": "69588747-06d2-44cb-bcb8-bfa62dd280d3", "address": "fa:16:3e:e1:1c:22", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69588747-06", "ovs_interfaceid": "69588747-06d2-44cb-bcb8-bfa62dd280d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'mount_device': '/dev/vda', 'attachment_id': 'd4893310-aba1-4d62-aff7-41edc7e4bbaa', 'disk_bus': 'virtio', 'delete_on_termination': True, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-e3868bbb-c720-4557-8ae5-297fa9b8743c', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'e3868bbb-c720-4557-8ae5-297fa9b8743c', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '47531108-4f20-41bd-8fb8-77fae3a30b85', 'attached_at': '', 'detached_at': '', 'volume_id': 'e3868bbb-c720-4557-8ae5-297fa9b8743c', 'serial': 'e3868bbb-c720-4557-8ae5-297fa9b8743c'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.636 2 WARNING nova.virt.libvirt.driver [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.652 2 DEBUG nova.virt.libvirt.host [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.653 2 DEBUG nova.virt.libvirt.host [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.682 2 DEBUG nova.virt.libvirt.host [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.684 2 DEBUG nova.virt.libvirt.host [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.685 2 DEBUG nova.virt.libvirt.driver [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.685 2 DEBUG nova.virt.hardware [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.686 2 DEBUG nova.virt.hardware [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.686 2 DEBUG nova.virt.hardware [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.686 2 DEBUG nova.virt.hardware [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.686 2 DEBUG nova.virt.hardware [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.687 2 DEBUG nova.virt.hardware [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.687 2 DEBUG nova.virt.hardware [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.687 2 DEBUG nova.virt.hardware [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.688 2 DEBUG nova.virt.hardware [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.688 2 DEBUG nova.virt.hardware [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.688 2 DEBUG nova.virt.hardware [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.900 2 DEBUG nova.storage.rbd_utils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] rbd image 47531108-4f20-41bd-8fb8-77fae3a30b85_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:57:11 np0005464891 nova_compute[259907]: 2025-10-01 16:57:11.906 2 DEBUG oslo_concurrency.processutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e431 e431: 3 total, 3 up, 3 in
Oct  1 12:57:11 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e431: 3 total, 3 up, 3 in
Oct  1 12:57:11 np0005464891 podman[295385]: 2025-10-01 16:57:11.976805354 +0000 UTC m=+0.090446649 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Oct  1 12:57:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:57:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e431 do_prune osdmap full prune enabled
Oct  1 12:57:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:57:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:57:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:57:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:57:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:57:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:57:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:57:12
Oct  1 12:57:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:57:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:57:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'vms', 'volumes', 'images']
Oct  1 12:57:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:57:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e432 e432: 3 total, 3 up, 3 in
Oct  1 12:57:12 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e432: 3 total, 3 up, 3 in
Oct  1 12:57:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:57:12 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2267721202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:57:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:57:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:57:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:57:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:57:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:57:12 np0005464891 nova_compute[259907]: 2025-10-01 16:57:12.426 2 DEBUG oslo_concurrency.processutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:57:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:12.459 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:12.460 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:12.461 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:57:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:57:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:57:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:57:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:57:12 np0005464891 nova_compute[259907]: 2025-10-01 16:57:12.833 2 DEBUG nova.virt.libvirt.vif [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:57:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-554072980',display_name='tempest-TestVolumeBootPattern-volume-backed-server-554072980',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-554072980',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIA89U+L/HbbAMUwRnUudlnbssd9D8/QXPa6lN4Le8arNbHmKfF3KR4E1oY5xNiJdAE870XWxXZRbQWs2VeTBkEYbdx/bUvxGF6RT6eWXbmql4fDNN9pQLw1Jszf6Z6rkw==',key_name='tempest-keypair-269452850',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8318b65fa88942a99937a0d198a04a9c',ramdisk_id='',reservation_id='r-yq6yu1yk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-582136054',owner_user_name='tempest-TestVolumeBootPattern-582136054-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:57:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1280014cdfb74333ae8d71c78116e646',uuid=47531108-4f20-41bd-8fb8-77fae3a30b85,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "69588747-06d2-44cb-bcb8-bfa62dd280d3", "address": "fa:16:3e:e1:1c:22", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69588747-06", "ovs_interfaceid": "69588747-06d2-44cb-bcb8-bfa62dd280d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 12:57:12 np0005464891 nova_compute[259907]: 2025-10-01 16:57:12.834 2 DEBUG nova.network.os_vif_util [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converting VIF {"id": "69588747-06d2-44cb-bcb8-bfa62dd280d3", "address": "fa:16:3e:e1:1c:22", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69588747-06", "ovs_interfaceid": "69588747-06d2-44cb-bcb8-bfa62dd280d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:57:12 np0005464891 nova_compute[259907]: 2025-10-01 16:57:12.836 2 DEBUG nova.network.os_vif_util [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:1c:22,bridge_name='br-int',has_traffic_filtering=True,id=69588747-06d2-44cb-bcb8-bfa62dd280d3,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69588747-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:57:12 np0005464891 nova_compute[259907]: 2025-10-01 16:57:12.839 2 DEBUG nova.objects.instance [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lazy-loading 'pci_devices' on Instance uuid 47531108-4f20-41bd-8fb8-77fae3a30b85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.039 2 DEBUG nova.virt.libvirt.driver [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] End _get_guest_xml xml=<domain type="kvm">
Oct  1 12:57:13 np0005464891 nova_compute[259907]:  <uuid>47531108-4f20-41bd-8fb8-77fae3a30b85</uuid>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:  <name>instance-00000013</name>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <nova:name>tempest-TestVolumeBootPattern-volume-backed-server-554072980</nova:name>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 16:57:11</nova:creationTime>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 12:57:13 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:        <nova:user uuid="1280014cdfb74333ae8d71c78116e646">tempest-TestVolumeBootPattern-582136054-project-member</nova:user>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:        <nova:project uuid="8318b65fa88942a99937a0d198a04a9c">tempest-TestVolumeBootPattern-582136054</nova:project>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:        <nova:port uuid="69588747-06d2-44cb-bcb8-bfa62dd280d3">
Oct  1 12:57:13 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <system>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <entry name="serial">47531108-4f20-41bd-8fb8-77fae3a30b85</entry>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <entry name="uuid">47531108-4f20-41bd-8fb8-77fae3a30b85</entry>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    </system>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:  <os>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:  </clock>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/47531108-4f20-41bd-8fb8-77fae3a30b85_disk.config">
Oct  1 12:57:13 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:57:13 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="volumes/volume-e3868bbb-c720-4557-8ae5-297fa9b8743c">
Oct  1 12:57:13 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:57:13 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <serial>e3868bbb-c720-4557-8ae5-297fa9b8743c</serial>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:e1:1c:22"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <target dev="tap69588747-06"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/47531108-4f20-41bd-8fb8-77fae3a30b85/console.log" append="off"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    </serial>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <video>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 12:57:13 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 12:57:13 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:57:13 np0005464891 nova_compute[259907]: </domain>
Oct  1 12:57:13 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
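The domain XML that nova logged above is plain libvirt guest XML and can be inspected programmatically. A minimal sketch, using only the standard library and a trimmed copy of the document from the log (element names and values are taken verbatim from the lines above; this is an illustration, not nova code):

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the domain XML emitted at _get_guest_xml above;
# only the elements queried below are kept.
DOMAIN_XML = """\
<domain type="kvm">
  <uuid>47531108-4f20-41bd-8fb8-77fae3a30b85</uuid>
  <name>instance-00000013</name>
  <memory>131072</memory>
  <vcpu>1</vcpu>
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="volumes/volume-e3868bbb-c720-4557-8ae5-297fa9b8743c"/>
      <target dev="vda" bus="virtio"/>
    </disk>
    <interface type="ethernet">
      <mac address="fa:16:3e:e1:1c:22"/>
      <target dev="tap69588747-06"/>
    </interface>
  </devices>
</domain>
"""

root = ET.fromstring(DOMAIN_XML)
# Map each disk's guest target device to its RBD source image.
disks = {d.find("target").get("dev"): d.find("source").get("name")
         for d in root.iter("disk")}
mac = root.find("./devices/interface/mac").get("address")
print(disks)  # {'vda': 'volumes/volume-e3868bbb-c720-4557-8ae5-297fa9b8743c'}
print(mac)    # fa:16:3e:e1:1c:22
```

This kind of one-off parse is handy when correlating a guest's `vda` back to the Cinder volume UUID embedded in the RBD image name.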
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.040 2 DEBUG nova.compute.manager [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Preparing to wait for external event network-vif-plugged-69588747-06d2-44cb-bcb8-bfa62dd280d3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.041 2 DEBUG oslo_concurrency.lockutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "47531108-4f20-41bd-8fb8-77fae3a30b85-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.041 2 DEBUG oslo_concurrency.lockutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "47531108-4f20-41bd-8fb8-77fae3a30b85-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.041 2 DEBUG oslo_concurrency.lockutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "47531108-4f20-41bd-8fb8-77fae3a30b85-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.042 2 DEBUG nova.virt.libvirt.vif [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:57:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-554072980',display_name='tempest-TestVolumeBootPattern-volume-backed-server-554072980',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-554072980',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIA89U+L/HbbAMUwRnUudlnbssd9D8/QXPa6lN4Le8arNbHmKfF3KR4E1oY5xNiJdAE870XWxXZRbQWs2VeTBkEYbdx/bUvxGF6RT6eWXbmql4fDNN9pQLw1Jszf6Z6rkw==',key_name='tempest-keypair-269452850',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8318b65fa88942a99937a0d198a04a9c',ramdisk_id='',reservation_id='r-yq6yu1yk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-582136054',owner_user_name='tempest-TestVolumeBootPattern-582136054-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:57:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1280014cdfb74333ae8d71c78116e646',uuid=47531108-4f20-41bd-8fb8-77fae3a30b85,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "69588747-06d2-44cb-bcb8-bfa62dd280d3", "address": "fa:16:3e:e1:1c:22", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69588747-06", "ovs_interfaceid": "69588747-06d2-44cb-bcb8-bfa62dd280d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.042 2 DEBUG nova.network.os_vif_util [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converting VIF {"id": "69588747-06d2-44cb-bcb8-bfa62dd280d3", "address": "fa:16:3e:e1:1c:22", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69588747-06", "ovs_interfaceid": "69588747-06d2-44cb-bcb8-bfa62dd280d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.043 2 DEBUG nova.network.os_vif_util [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:1c:22,bridge_name='br-int',has_traffic_filtering=True,id=69588747-06d2-44cb-bcb8-bfa62dd280d3,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69588747-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.044 2 DEBUG os_vif [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:1c:22,bridge_name='br-int',has_traffic_filtering=True,id=69588747-06d2-44cb-bcb8-bfa62dd280d3,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69588747-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.045 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.046 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.049 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69588747-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.050 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap69588747-06, col_values=(('external_ids', {'iface-id': '69588747-06d2-44cb-bcb8-bfa62dd280d3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:1c:22', 'vm-uuid': '47531108-4f20-41bd-8fb8-77fae3a30b85'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:13 np0005464891 NetworkManager[44940]: <info>  [1759337833.0530] manager: (tap69588747-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.064 2 INFO os_vif [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:1c:22,bridge_name='br-int',has_traffic_filtering=True,id=69588747-06d2-44cb-bcb8-bfa62dd280d3,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69588747-06')#033[00m
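The AddPortCommand/DbSetCommand transaction above is what lets OVN claim the port a few seconds later: os-vif stamps the tap's OVS Interface record with `external_ids`, and ovn-controller matches `iface-id` against the Neutron port's logical port. A sketch of that mapping, with all values copied from the DbSetCommand in the log (the helper name is illustrative, not an os-vif API):

```python
# Hypothetical helper mirroring the external_ids that os-vif set on the
# tap69588747-06 Interface record in the transaction logged above.
def ovs_external_ids(port_id: str, mac: str, instance_uuid: str) -> dict:
    return {
        "iface-id": port_id,        # Neutron port UUID; ovn-controller keys on this
        "iface-status": "active",
        "attached-mac": mac,
        "vm-uuid": instance_uuid,
    }

ids = ovs_external_ids(
    "69588747-06d2-44cb-bcb8-bfa62dd280d3",
    "fa:16:3e:e1:1c:22",
    "47531108-4f20-41bd-8fb8-77fae3a30b85",
)
# Roughly equivalent CLI (not run here):
#   ovs-vsctl --may-exist add-port br-int tap69588747-06 -- \
#     set Interface tap69588747-06 external_ids:iface-id=<port-uuid> ...
```

If the "Claiming lport" messages from ovn_controller never appear after a plug, a mismatched or missing `iface-id` on the Interface record is the first thing to check.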
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.092 2 DEBUG nova.network.neutron [req-9a99fe4a-ebe3-4ccc-a66d-93b77073ad3f req-abfbb66e-8f9b-4520-8430-0b87ccb6d296 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Updated VIF entry in instance network info cache for port 69588747-06d2-44cb-bcb8-bfa62dd280d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.094 2 DEBUG nova.network.neutron [req-9a99fe4a-ebe3-4ccc-a66d-93b77073ad3f req-abfbb66e-8f9b-4520-8430-0b87ccb6d296 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Updating instance_info_cache with network_info: [{"id": "69588747-06d2-44cb-bcb8-bfa62dd280d3", "address": "fa:16:3e:e1:1c:22", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69588747-06", "ovs_interfaceid": "69588747-06d2-44cb-bcb8-bfa62dd280d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.123 2 DEBUG oslo_concurrency.lockutils [req-9a99fe4a-ebe3-4ccc-a66d-93b77073ad3f req-abfbb66e-8f9b-4520-8430-0b87ccb6d296 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-47531108-4f20-41bd-8fb8-77fae3a30b85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:57:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1713: 321 pgs: 321 active+clean; 181 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 745 KiB/s rd, 1.2 KiB/s wr, 51 op/s
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.228 2 DEBUG nova.virt.libvirt.driver [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.229 2 DEBUG nova.virt.libvirt.driver [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.229 2 DEBUG nova.virt.libvirt.driver [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] No VIF found with MAC fa:16:3e:e1:1c:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.229 2 INFO nova.virt.libvirt.driver [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Using config drive#033[00m
Oct  1 12:57:13 np0005464891 nova_compute[259907]: 2025-10-01 16:57:13.491 2 DEBUG nova.storage.rbd_utils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] rbd image 47531108-4f20-41bd-8fb8-77fae3a30b85_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:57:14 np0005464891 nova_compute[259907]: 2025-10-01 16:57:14.409 2 INFO nova.virt.libvirt.driver [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Creating config drive at /var/lib/nova/instances/47531108-4f20-41bd-8fb8-77fae3a30b85/disk.config#033[00m
Oct  1 12:57:14 np0005464891 nova_compute[259907]: 2025-10-01 16:57:14.415 2 DEBUG oslo_concurrency.processutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/47531108-4f20-41bd-8fb8-77fae3a30b85/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoto1t57v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:14 np0005464891 nova_compute[259907]: 2025-10-01 16:57:14.550 2 DEBUG oslo_concurrency.processutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/47531108-4f20-41bd-8fb8-77fae3a30b85/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoto1t57v" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
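The config-drive ISO is built with a plain mkisofs run. Reconstructing that invocation as an argv list (every argument is copied from the CMD line above; the `/tmp/tmpoto1t57v` staging directory is the temp tree nova populated with the metadata files):

```python
# Reconstruction of the mkisofs command nova ran above; built as a list
# for clarity, not executed here.
instance = "47531108-4f20-41bd-8fb8-77fae3a30b85"
argv = [
    "/usr/bin/mkisofs",
    "-o", f"/var/lib/nova/instances/{instance}/disk.config",
    "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
    "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
    "-quiet", "-J", "-r",
    "-V", "config-2",    # volume label that cloud-init probes for
    "/tmp/tmpoto1t57v",  # temp dir holding the openstack/ metadata tree
]
print(" ".join(argv))
```

The `config-2` label is the significant part: the guest's cloud-init finds the drive by that filesystem label, so a config drive built with any other label is silently ignored.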
Oct  1 12:57:14 np0005464891 nova_compute[259907]: 2025-10-01 16:57:14.734 2 DEBUG nova.storage.rbd_utils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] rbd image 47531108-4f20-41bd-8fb8-77fae3a30b85_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:57:14 np0005464891 nova_compute[259907]: 2025-10-01 16:57:14.738 2 DEBUG oslo_concurrency.processutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/47531108-4f20-41bd-8fb8-77fae3a30b85/disk.config 47531108-4f20-41bd-8fb8-77fae3a30b85_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1714: 321 pgs: 321 active+clean; 181 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 774 KiB/s rd, 308 KiB/s wr, 64 op/s
Oct  1 12:57:16 np0005464891 nova_compute[259907]: 2025-10-01 16:57:16.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:16 np0005464891 nova_compute[259907]: 2025-10-01 16:57:16.836 2 DEBUG oslo_concurrency.processutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/47531108-4f20-41bd-8fb8-77fae3a30b85/disk.config 47531108-4f20-41bd-8fb8-77fae3a30b85_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
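Because this deployment stores ephemeral disks in Ceph, the freshly built ISO is immediately pushed into the `vms` pool and the local copy deleted, which is why the earlier "rbd image ... does not exist" checks appear twice. The import call, rebuilt as an argv list with the exact arguments from the log (not executed here):

```python
# The rbd import from the log above, as a subprocess-style argv list.
instance = "47531108-4f20-41bd-8fb8-77fae3a30b85"
cmd = [
    "rbd", "import",
    "--pool", "vms",
    f"/var/lib/nova/instances/{instance}/disk.config",  # local ISO source
    f"{instance}_disk.config",                          # target RBD image
    "--image-format=2",             # format 2: layering/cloning support
    "--id", "openstack",            # cephx user, matches the <auth> in the XML
    "--conf", "/etc/ceph/ceph.conf",
]
```

The resulting `vms/..._disk.config` image is exactly what the `<disk type="network" device="cdrom">` element in the domain XML attaches as `sda`.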
Oct  1 12:57:16 np0005464891 nova_compute[259907]: 2025-10-01 16:57:16.837 2 INFO nova.virt.libvirt.driver [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Deleting local config drive /var/lib/nova/instances/47531108-4f20-41bd-8fb8-77fae3a30b85/disk.config because it was imported into RBD.#033[00m
Oct  1 12:57:16 np0005464891 kernel: tap69588747-06: entered promiscuous mode
Oct  1 12:57:16 np0005464891 NetworkManager[44940]: <info>  [1759337836.8990] manager: (tap69588747-06): new Tun device (/org/freedesktop/NetworkManager/Devices/106)
Oct  1 12:57:16 np0005464891 nova_compute[259907]: 2025-10-01 16:57:16.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:16 np0005464891 ovn_controller[152409]: 2025-10-01T16:57:16Z|00177|binding|INFO|Claiming lport 69588747-06d2-44cb-bcb8-bfa62dd280d3 for this chassis.
Oct  1 12:57:16 np0005464891 ovn_controller[152409]: 2025-10-01T16:57:16Z|00178|binding|INFO|69588747-06d2-44cb-bcb8-bfa62dd280d3: Claiming fa:16:3e:e1:1c:22 10.100.0.8
Oct  1 12:57:16 np0005464891 systemd-machined[214891]: New machine qemu-19-instance-00000013.
Oct  1 12:57:16 np0005464891 ovn_controller[152409]: 2025-10-01T16:57:16Z|00179|binding|INFO|Setting lport 69588747-06d2-44cb-bcb8-bfa62dd280d3 ovn-installed in OVS
Oct  1 12:57:16 np0005464891 ovn_controller[152409]: 2025-10-01T16:57:16Z|00180|binding|INFO|Setting lport 69588747-06d2-44cb-bcb8-bfa62dd280d3 up in Southbound
Oct  1 12:57:16 np0005464891 nova_compute[259907]: 2025-10-01 16:57:16.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:16.936 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:1c:22 10.100.0.8'], port_security=['fa:16:3e:e1:1c:22 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '47531108-4f20-41bd-8fb8-77fae3a30b85', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce1e1062-6685-441b-8278-667224375e38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8318b65fa88942a99937a0d198a04a9c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '16fffc6f-0dbd-4932-b567-78bcd2e66114', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4cdcf07-f310-4572-944c-43bd8e74f763, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=69588747-06d2-44cb-bcb8-bfa62dd280d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:57:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:16.938 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 69588747-06d2-44cb-bcb8-bfa62dd280d3 in datapath ce1e1062-6685-441b-8278-667224375e38 bound to our chassis#033[00m
Oct  1 12:57:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:16.941 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce1e1062-6685-441b-8278-667224375e38#033[00m
Oct  1 12:57:16 np0005464891 nova_compute[259907]: 2025-10-01 16:57:16.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:16 np0005464891 systemd[1]: Started Virtual Machine qemu-19-instance-00000013.
Oct  1 12:57:16 np0005464891 systemd-udevd[295536]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:57:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:16.966 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[3b84f1d8-2f42-4c5b-8bb9-7afff3e6ae35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:16.968 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce1e1062-61 in ovnmeta-ce1e1062-6685-441b-8278-667224375e38 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 12:57:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:16.969 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce1e1062-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 12:57:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:16.969 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[edd917d4-89e0-4164-96d6-9c1f772213a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:16.970 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[4cf594ca-1a76-4272-b6af-7df5ed3a9779]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:16 np0005464891 podman[295506]: 2025-10-01 16:57:16.976943592 +0000 UTC m=+0.090559033 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  1 12:57:16 np0005464891 NetworkManager[44940]: <info>  [1759337836.9900] device (tap69588747-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:57:16 np0005464891 NetworkManager[44940]: <info>  [1759337836.9912] device (tap69588747-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 12:57:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:16.990 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[1d17191e-81a5-4651-b15b-d425888c9326]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:17.025 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[d85ac441-7bd6-43a7-8fa2-045cc5bf13c1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:17.059 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[79a7b591-a339-4e6f-b1dc-6bf13955970d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:17 np0005464891 NetworkManager[44940]: <info>  [1759337837.0694] manager: (tapce1e1062-60): new Veth device (/org/freedesktop/NetworkManager/Devices/107)
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:17.068 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[beea5e7b-5bc8-41f7-a96a-69987c252f7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:57:17 np0005464891 systemd-udevd[295543]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:57:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e432 do_prune osdmap full prune enabled
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:17.115 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[f3d7146a-3880-4877-a34c-f713017f751a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:17.120 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[279be18f-8850-4046-8a7f-4cd8957613fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:17 np0005464891 NetworkManager[44940]: <info>  [1759337837.1472] device (tapce1e1062-60): carrier: link connected
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:17.153 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[48d4ce99-48cf-49f3-9df4-871f561e6add]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1715: 321 pgs: 321 active+clean; 181 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 308 KiB/s wr, 23 op/s
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:17.173 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[83d8d907-4c79-4f21-b79b-04142b8b4e37]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce1e1062-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:87:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478572, 'reachable_time': 34368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295569, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:17.196 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[67f912a5-92e4-46a8-a919-3eb8ae87b3c1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec5:872c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 478572, 'tstamp': 478572}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295570, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:17.215 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[bc58922f-81e7-4b2c-a83e-b317984f7874]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce1e1062-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:87:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478572, 'reachable_time': 34368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295571, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:17.248 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[fded24c6-bf8a-4ce8-9ff9-20babf7e9951]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:17.310 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[f81afbb3-6106-47c4-b4e2-815ce014b00a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:17.312 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce1e1062-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:17.313 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:17.314 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce1e1062-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:57:17 np0005464891 NetworkManager[44940]: <info>  [1759337837.3180] manager: (tapce1e1062-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Oct  1 12:57:17 np0005464891 kernel: tapce1e1062-60: entered promiscuous mode
Oct  1 12:57:17 np0005464891 nova_compute[259907]: 2025-10-01 16:57:17.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:17.322 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce1e1062-60, col_values=(('external_ids', {'iface-id': 'd971881d-8d8b-44dc-b0b0-4cd0065c0105'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:57:17 np0005464891 ovn_controller[152409]: 2025-10-01T16:57:17Z|00181|binding|INFO|Releasing lport d971881d-8d8b-44dc-b0b0-4cd0065c0105 from this chassis (sb_readonly=0)
Oct  1 12:57:17 np0005464891 nova_compute[259907]: 2025-10-01 16:57:17.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:17 np0005464891 nova_compute[259907]: 2025-10-01 16:57:17.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:17.347 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce1e1062-6685-441b-8278-667224375e38.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce1e1062-6685-441b-8278-667224375e38.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:17.348 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[fe6d257f-0bcb-4903-b29c-b44a3eedb740]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:17.349 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-ce1e1062-6685-441b-8278-667224375e38
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/ce1e1062-6685-441b-8278-667224375e38.pid.haproxy
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID ce1e1062-6685-441b-8278-667224375e38
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 12:57:17 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:17.351 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'env', 'PROCESS_TAG=haproxy-ce1e1062-6685-441b-8278-667224375e38', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce1e1062-6685-441b-8278-667224375e38.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 12:57:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e433 e433: 3 total, 3 up, 3 in
Oct  1 12:57:17 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e433: 3 total, 3 up, 3 in
Oct  1 12:57:17 np0005464891 nova_compute[259907]: 2025-10-01 16:57:17.623 2 DEBUG nova.compute.manager [req-396e746d-7de3-42dc-8a43-c7fe89ed0a04 req-e65145cd-d203-4da1-8208-a6b1b7976ed8 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Received event network-vif-plugged-69588747-06d2-44cb-bcb8-bfa62dd280d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:57:17 np0005464891 nova_compute[259907]: 2025-10-01 16:57:17.625 2 DEBUG oslo_concurrency.lockutils [req-396e746d-7de3-42dc-8a43-c7fe89ed0a04 req-e65145cd-d203-4da1-8208-a6b1b7976ed8 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "47531108-4f20-41bd-8fb8-77fae3a30b85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:17 np0005464891 nova_compute[259907]: 2025-10-01 16:57:17.625 2 DEBUG oslo_concurrency.lockutils [req-396e746d-7de3-42dc-8a43-c7fe89ed0a04 req-e65145cd-d203-4da1-8208-a6b1b7976ed8 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "47531108-4f20-41bd-8fb8-77fae3a30b85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:17 np0005464891 nova_compute[259907]: 2025-10-01 16:57:17.626 2 DEBUG oslo_concurrency.lockutils [req-396e746d-7de3-42dc-8a43-c7fe89ed0a04 req-e65145cd-d203-4da1-8208-a6b1b7976ed8 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "47531108-4f20-41bd-8fb8-77fae3a30b85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:17 np0005464891 nova_compute[259907]: 2025-10-01 16:57:17.626 2 DEBUG nova.compute.manager [req-396e746d-7de3-42dc-8a43-c7fe89ed0a04 req-e65145cd-d203-4da1-8208-a6b1b7976ed8 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Processing event network-vif-plugged-69588747-06d2-44cb-bcb8-bfa62dd280d3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 12:57:17 np0005464891 podman[295603]: 2025-10-01 16:57:17.719439863 +0000 UTC m=+0.021029827 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 12:57:18 np0005464891 nova_compute[259907]: 2025-10-01 16:57:18.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:18 np0005464891 podman[295603]: 2025-10-01 16:57:18.323324677 +0000 UTC m=+0.624914641 container create ea01c5c6717eed6fad481ef0de762b447e32e377b94c77121c5a06e1d6b864b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:57:18 np0005464891 systemd[1]: Started libpod-conmon-ea01c5c6717eed6fad481ef0de762b447e32e377b94c77121c5a06e1d6b864b4.scope.
Oct  1 12:57:18 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:57:18 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5f02c5070bfcddb114aae2c461dfa813863231e340f805a871bb92c26b84339/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 12:57:18 np0005464891 podman[295603]: 2025-10-01 16:57:18.657610136 +0000 UTC m=+0.959200130 container init ea01c5c6717eed6fad481ef0de762b447e32e377b94c77121c5a06e1d6b864b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  1 12:57:18 np0005464891 podman[295603]: 2025-10-01 16:57:18.665812511 +0000 UTC m=+0.967402455 container start ea01c5c6717eed6fad481ef0de762b447e32e377b94c77121c5a06e1d6b864b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:57:18 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[295654]: [NOTICE]   (295659) : New worker (295665) forked
Oct  1 12:57:18 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[295654]: [NOTICE]   (295659) : Loading success.
Oct  1 12:57:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1717: 321 pgs: 321 active+clean; 186 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 1.0 MiB/s wr, 51 op/s
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.238 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337839.2374792, 47531108-4f20-41bd-8fb8-77fae3a30b85 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.239 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] VM Started (Lifecycle Event)#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.241 2 DEBUG nova.compute.manager [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.246 2 DEBUG nova.virt.libvirt.driver [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.249 2 INFO nova.virt.libvirt.driver [-] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Instance spawned successfully.#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.249 2 DEBUG nova.virt.libvirt.driver [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.439 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.443 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.512 2 DEBUG nova.virt.libvirt.driver [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.513 2 DEBUG nova.virt.libvirt.driver [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.514 2 DEBUG nova.virt.libvirt.driver [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.514 2 DEBUG nova.virt.libvirt.driver [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.514 2 DEBUG nova.virt.libvirt.driver [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.515 2 DEBUG nova.virt.libvirt.driver [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.573 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.574 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337839.2376974, 47531108-4f20-41bd-8fb8-77fae3a30b85 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.574 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] VM Paused (Lifecycle Event)#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.690 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.696 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337839.244842, 47531108-4f20-41bd-8fb8-77fae3a30b85 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.697 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] VM Resumed (Lifecycle Event)#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.729 2 INFO nova.compute.manager [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Took 13.18 seconds to spawn the instance on the hypervisor.#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.730 2 DEBUG nova.compute.manager [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.732 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.746 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.775 2 DEBUG nova.compute.manager [req-aaf15eab-79fb-4f94-84c6-80b848992d1b req-b34f8728-2fa9-4984-8405-796b88bf370c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Received event network-vif-plugged-69588747-06d2-44cb-bcb8-bfa62dd280d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.777 2 DEBUG oslo_concurrency.lockutils [req-aaf15eab-79fb-4f94-84c6-80b848992d1b req-b34f8728-2fa9-4984-8405-796b88bf370c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "47531108-4f20-41bd-8fb8-77fae3a30b85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.777 2 DEBUG oslo_concurrency.lockutils [req-aaf15eab-79fb-4f94-84c6-80b848992d1b req-b34f8728-2fa9-4984-8405-796b88bf370c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "47531108-4f20-41bd-8fb8-77fae3a30b85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.778 2 DEBUG oslo_concurrency.lockutils [req-aaf15eab-79fb-4f94-84c6-80b848992d1b req-b34f8728-2fa9-4984-8405-796b88bf370c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "47531108-4f20-41bd-8fb8-77fae3a30b85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.779 2 DEBUG nova.compute.manager [req-aaf15eab-79fb-4f94-84c6-80b848992d1b req-b34f8728-2fa9-4984-8405-796b88bf370c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] No waiting events found dispatching network-vif-plugged-69588747-06d2-44cb-bcb8-bfa62dd280d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.780 2 WARNING nova.compute.manager [req-aaf15eab-79fb-4f94-84c6-80b848992d1b req-b34f8728-2fa9-4984-8405-796b88bf370c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Received unexpected event network-vif-plugged-69588747-06d2-44cb-bcb8-bfa62dd280d3 for instance with vm_state building and task_state spawning.#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.803 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:57:19 np0005464891 nova_compute[259907]: 2025-10-01 16:57:19.889 2 INFO nova.compute.manager [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Took 16.00 seconds to build instance.#033[00m
Oct  1 12:57:20 np0005464891 nova_compute[259907]: 2025-10-01 16:57:20.080 2 DEBUG oslo_concurrency.lockutils [None req-3839958a-bfec-455b-90af-7393dd1d420e 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "47531108-4f20-41bd-8fb8-77fae3a30b85" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.261s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:20 np0005464891 ovn_controller[152409]: 2025-10-01T16:57:20Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fd:c3:c3 10.100.0.9
Oct  1 12:57:20 np0005464891 ovn_controller[152409]: 2025-10-01T16:57:20Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fd:c3:c3 10.100.0.9
Oct  1 12:57:20 np0005464891 podman[295675]: 2025-10-01 16:57:20.944380945 +0000 UTC m=+0.060775766 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  1 12:57:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1718: 321 pgs: 321 active+clean; 206 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 696 KiB/s rd, 2.8 MiB/s wr, 112 op/s
Oct  1 12:57:21 np0005464891 nova_compute[259907]: 2025-10-01 16:57:21.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:57:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:57:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:57:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:57:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:57:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007492893102493977 of space, bias 1.0, pg target 0.2247867930748193 quantized to 32 (current 32)
Oct  1 12:57:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:57:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0006976536554991209 of space, bias 1.0, pg target 0.20929609664973628 quantized to 32 (current 32)
Oct  1 12:57:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:57:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 12:57:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:57:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct  1 12:57:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:57:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:57:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:57:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:57:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:57:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:57:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:57:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:57:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:57:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:57:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:57:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:57:22 np0005464891 nova_compute[259907]: 2025-10-01 16:57:22.792 2 DEBUG nova.compute.manager [req-413137bf-5324-480d-a2fb-406876a5a73e req-0ef95c73-aae8-4885-be9b-d4004f8bb719 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Received event network-changed-69588747-06d2-44cb-bcb8-bfa62dd280d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:57:22 np0005464891 nova_compute[259907]: 2025-10-01 16:57:22.793 2 DEBUG nova.compute.manager [req-413137bf-5324-480d-a2fb-406876a5a73e req-0ef95c73-aae8-4885-be9b-d4004f8bb719 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Refreshing instance network info cache due to event network-changed-69588747-06d2-44cb-bcb8-bfa62dd280d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:57:22 np0005464891 nova_compute[259907]: 2025-10-01 16:57:22.794 2 DEBUG oslo_concurrency.lockutils [req-413137bf-5324-480d-a2fb-406876a5a73e req-0ef95c73-aae8-4885-be9b-d4004f8bb719 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-47531108-4f20-41bd-8fb8-77fae3a30b85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:57:22 np0005464891 nova_compute[259907]: 2025-10-01 16:57:22.795 2 DEBUG oslo_concurrency.lockutils [req-413137bf-5324-480d-a2fb-406876a5a73e req-0ef95c73-aae8-4885-be9b-d4004f8bb719 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-47531108-4f20-41bd-8fb8-77fae3a30b85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:57:22 np0005464891 nova_compute[259907]: 2025-10-01 16:57:22.795 2 DEBUG nova.network.neutron [req-413137bf-5324-480d-a2fb-406876a5a73e req-0ef95c73-aae8-4885-be9b-d4004f8bb719 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Refreshing network info cache for port 69588747-06d2-44cb-bcb8-bfa62dd280d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:57:23 np0005464891 nova_compute[259907]: 2025-10-01 16:57:23.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1719: 321 pgs: 321 active+clean; 208 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.5 MiB/s wr, 134 op/s
Oct  1 12:57:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:57:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4074009071' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:57:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:57:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4074009071' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:57:24 np0005464891 nova_compute[259907]: 2025-10-01 16:57:24.845 2 DEBUG nova.network.neutron [req-413137bf-5324-480d-a2fb-406876a5a73e req-0ef95c73-aae8-4885-be9b-d4004f8bb719 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Updated VIF entry in instance network info cache for port 69588747-06d2-44cb-bcb8-bfa62dd280d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:57:24 np0005464891 nova_compute[259907]: 2025-10-01 16:57:24.846 2 DEBUG nova.network.neutron [req-413137bf-5324-480d-a2fb-406876a5a73e req-0ef95c73-aae8-4885-be9b-d4004f8bb719 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Updating instance_info_cache with network_info: [{"id": "69588747-06d2-44cb-bcb8-bfa62dd280d3", "address": "fa:16:3e:e1:1c:22", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69588747-06", "ovs_interfaceid": "69588747-06d2-44cb-bcb8-bfa62dd280d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:57:24 np0005464891 nova_compute[259907]: 2025-10-01 16:57:24.874 2 DEBUG oslo_concurrency.lockutils [req-413137bf-5324-480d-a2fb-406876a5a73e req-0ef95c73-aae8-4885-be9b-d4004f8bb719 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-47531108-4f20-41bd-8fb8-77fae3a30b85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:57:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1720: 321 pgs: 321 active+clean; 214 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.3 MiB/s wr, 192 op/s
Oct  1 12:57:26 np0005464891 nova_compute[259907]: 2025-10-01 16:57:26.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:57:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e433 do_prune osdmap full prune enabled
Oct  1 12:57:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e434 e434: 3 total, 3 up, 3 in
Oct  1 12:57:27 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e434: 3 total, 3 up, 3 in
Oct  1 12:57:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1722: 321 pgs: 321 active+clean; 214 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.4 MiB/s wr, 201 op/s
Oct  1 12:57:28 np0005464891 nova_compute[259907]: 2025-10-01 16:57:28.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1723: 321 pgs: 321 active+clean; 214 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 170 op/s
Oct  1 12:57:30 np0005464891 nova_compute[259907]: 2025-10-01 16:57:30.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:57:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1724: 321 pgs: 321 active+clean; 214 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 128 KiB/s wr, 109 op/s
Oct  1 12:57:31 np0005464891 nova_compute[259907]: 2025-10-01 16:57:31.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:31 np0005464891 nova_compute[259907]: 2025-10-01 16:57:31.803 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:57:31 np0005464891 nova_compute[259907]: 2025-10-01 16:57:31.804 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 12:57:31 np0005464891 nova_compute[259907]: 2025-10-01 16:57:31.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:57:31 np0005464891 nova_compute[259907]: 2025-10-01 16:57:31.845 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:31 np0005464891 nova_compute[259907]: 2025-10-01 16:57:31.846 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:31 np0005464891 nova_compute[259907]: 2025-10-01 16:57:31.846 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:31 np0005464891 nova_compute[259907]: 2025-10-01 16:57:31.847 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 12:57:31 np0005464891 nova_compute[259907]: 2025-10-01 16:57:31.848 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:31 np0005464891 podman[295699]: 2025-10-01 16:57:31.970431263 +0000 UTC m=+0.074424591 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:57:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:57:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:57:32 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3925270644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:57:32 np0005464891 nova_compute[259907]: 2025-10-01 16:57:32.405 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:57:32 np0005464891 nova_compute[259907]: 2025-10-01 16:57:32.488 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:57:32 np0005464891 nova_compute[259907]: 2025-10-01 16:57:32.489 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:57:32 np0005464891 ovn_controller[152409]: 2025-10-01T16:57:32Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e1:1c:22 10.100.0.8
Oct  1 12:57:32 np0005464891 ovn_controller[152409]: 2025-10-01T16:57:32Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e1:1c:22 10.100.0.8
Oct  1 12:57:32 np0005464891 nova_compute[259907]: 2025-10-01 16:57:32.495 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:57:32 np0005464891 nova_compute[259907]: 2025-10-01 16:57:32.496 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:57:32 np0005464891 nova_compute[259907]: 2025-10-01 16:57:32.666 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:57:32 np0005464891 nova_compute[259907]: 2025-10-01 16:57:32.667 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4047MB free_disk=59.94255065917969GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 12:57:32 np0005464891 nova_compute[259907]: 2025-10-01 16:57:32.667 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:32 np0005464891 nova_compute[259907]: 2025-10-01 16:57:32.668 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:32 np0005464891 nova_compute[259907]: 2025-10-01 16:57:32.894 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance 03ad1fe8-a967-4d62-a904-ceda4729227a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 12:57:32 np0005464891 nova_compute[259907]: 2025-10-01 16:57:32.895 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance 47531108-4f20-41bd-8fb8-77fae3a30b85 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 12:57:32 np0005464891 nova_compute[259907]: 2025-10-01 16:57:32.895 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 12:57:32 np0005464891 nova_compute[259907]: 2025-10-01 16:57:32.895 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 12:57:32 np0005464891 nova_compute[259907]: 2025-10-01 16:57:32.909 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Refreshing inventories for resource provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  1 12:57:32 np0005464891 nova_compute[259907]: 2025-10-01 16:57:32.926 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Updating ProviderTree inventory for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  1 12:57:32 np0005464891 nova_compute[259907]: 2025-10-01 16:57:32.927 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Updating inventory in ProviderTree for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  1 12:57:32 np0005464891 nova_compute[259907]: 2025-10-01 16:57:32.942 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Refreshing aggregate associations for resource provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  1 12:57:32 np0005464891 nova_compute[259907]: 2025-10-01 16:57:32.958 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Refreshing trait associations for resource provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8, traits: HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSSE3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_AVX2,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  1 12:57:33 np0005464891 nova_compute[259907]: 2025-10-01 16:57:33.011 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:33 np0005464891 nova_compute[259907]: 2025-10-01 16:57:33.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1725: 321 pgs: 321 active+clean; 230 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.5 MiB/s wr, 97 op/s
Oct  1 12:57:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:57:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2936038942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:57:33 np0005464891 nova_compute[259907]: 2025-10-01 16:57:33.430 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:57:33 np0005464891 nova_compute[259907]: 2025-10-01 16:57:33.438 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:57:33 np0005464891 nova_compute[259907]: 2025-10-01 16:57:33.457 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:57:33 np0005464891 nova_compute[259907]: 2025-10-01 16:57:33.486 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 12:57:33 np0005464891 nova_compute[259907]: 2025-10-01 16:57:33.487 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:34 np0005464891 nova_compute[259907]: 2025-10-01 16:57:34.484 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:57:34 np0005464891 nova_compute[259907]: 2025-10-01 16:57:34.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:57:34 np0005464891 nova_compute[259907]: 2025-10-01 16:57:34.805 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 12:57:34 np0005464891 nova_compute[259907]: 2025-10-01 16:57:34.805 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 12:57:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1726: 321 pgs: 321 active+clean; 247 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 279 KiB/s rd, 2.6 MiB/s wr, 78 op/s
Oct  1 12:57:35 np0005464891 nova_compute[259907]: 2025-10-01 16:57:35.228 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "refresh_cache-03ad1fe8-a967-4d62-a904-ceda4729227a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:57:35 np0005464891 nova_compute[259907]: 2025-10-01 16:57:35.228 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquired lock "refresh_cache-03ad1fe8-a967-4d62-a904-ceda4729227a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:57:35 np0005464891 nova_compute[259907]: 2025-10-01 16:57:35.229 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  1 12:57:35 np0005464891 nova_compute[259907]: 2025-10-01 16:57:35.229 2 DEBUG nova.objects.instance [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 03ad1fe8-a967-4d62-a904-ceda4729227a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:57:36 np0005464891 nova_compute[259907]: 2025-10-01 16:57:36.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.051 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Updating instance_info_cache with network_info: [{"id": "7094fed9-935c-41be-bfa9-a61118606ba8", "address": "fa:16:3e:fd:c3:c3", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7094fed9-93", "ovs_interfaceid": "7094fed9-935c-41be-bfa9-a61118606ba8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.068 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Releasing lock "refresh_cache-03ad1fe8-a967-4d62-a904-ceda4729227a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.068 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.069 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:57:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.124 2 DEBUG oslo_concurrency.lockutils [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "03ad1fe8-a967-4d62-a904-ceda4729227a" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.124 2 DEBUG oslo_concurrency.lockutils [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "03ad1fe8-a967-4d62-a904-ceda4729227a" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.139 2 DEBUG nova.objects.instance [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lazy-loading 'flavor' on Instance uuid 03ad1fe8-a967-4d62-a904-ceda4729227a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.172 2 DEBUG oslo_concurrency.lockutils [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "03ad1fe8-a967-4d62-a904-ceda4729227a" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1727: 321 pgs: 321 active+clean; 247 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 278 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.345 2 DEBUG oslo_concurrency.lockutils [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "03ad1fe8-a967-4d62-a904-ceda4729227a" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.346 2 DEBUG oslo_concurrency.lockutils [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "03ad1fe8-a967-4d62-a904-ceda4729227a" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.346 2 INFO nova.compute.manager [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Attaching volume 89c7762a-83c1-46dd-9f1e-14bd62fd31cc to /dev/vdb#033[00m
Oct  1 12:57:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:57:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2572173149' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:57:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:57:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2572173149' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.469 2 DEBUG os_brick.utils [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.470 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.479 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.480 741 DEBUG oslo.privsep.daemon [-] privsep: reply[5dbf78db-6b55-4ee8-84e7-f4ac6ea3cb85]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.481 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.488 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.489 741 DEBUG oslo.privsep.daemon [-] privsep: reply[44ebb6e7-6548-4673-b9c3-802ac81d1c1d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.490 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.500 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.500 741 DEBUG oslo.privsep.daemon [-] privsep: reply[9f824e00-a1e1-41e1-8cd5-4c9eca517a0e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.502 741 DEBUG oslo.privsep.daemon [-] privsep: reply[852cc2ab-4e29-47ff-8902-7e36ecd825da]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.502 2 DEBUG oslo_concurrency.processutils [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.526 2 DEBUG oslo_concurrency.processutils [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.528 2 DEBUG os_brick.initiator.connectors.lightos [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.528 2 DEBUG os_brick.initiator.connectors.lightos [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.529 2 DEBUG os_brick.initiator.connectors.lightos [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.529 2 DEBUG os_brick.utils [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] <== get_connector_properties: return (59ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.529 2 DEBUG nova.virt.block_device [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Updating existing volume attachment record: 244787e9-4647-49bc-a33d-d9304aa7c527 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:57:37 np0005464891 nova_compute[259907]: 2025-10-01 16:57:37.828 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:57:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/90225523' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.455 2 DEBUG os_brick.encryptors [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Using volume encryption metadata '{'encryption_key_id': '870e981d-0466-4924-bb2f-615552662bac', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-89c7762a-83c1-46dd-9f1e-14bd62fd31cc', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '89c7762a-83c1-46dd-9f1e-14bd62fd31cc', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '03ad1fe8-a967-4d62-a904-ceda4729227a', 'attached_at': '', 'detached_at': '', 'volume_id': '89c7762a-83c1-46dd-9f1e-14bd62fd31cc', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.462 2 DEBUG barbicanclient.client [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.476 2 DEBUG barbicanclient.v1.secrets [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/870e981d-0466-4924-bb2f-615552662bac get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.477 2 INFO barbicanclient.base [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/870e981d-0466-4924-bb2f-615552662bac#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.517 2 DEBUG barbicanclient.client [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.518 2 INFO barbicanclient.base [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/870e981d-0466-4924-bb2f-615552662bac#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.541 2 DEBUG barbicanclient.client [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.542 2 INFO barbicanclient.base [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/870e981d-0466-4924-bb2f-615552662bac#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.574 2 DEBUG barbicanclient.client [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.575 2 INFO barbicanclient.base [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/870e981d-0466-4924-bb2f-615552662bac#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.611 2 DEBUG barbicanclient.client [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.612 2 INFO barbicanclient.base [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/870e981d-0466-4924-bb2f-615552662bac#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.633 2 DEBUG barbicanclient.client [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.634 2 INFO barbicanclient.base [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/870e981d-0466-4924-bb2f-615552662bac#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.657 2 DEBUG barbicanclient.client [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.657 2 INFO barbicanclient.base [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/870e981d-0466-4924-bb2f-615552662bac#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.686 2 DEBUG barbicanclient.client [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.686 2 INFO barbicanclient.base [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/870e981d-0466-4924-bb2f-615552662bac#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.713 2 DEBUG barbicanclient.client [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.714 2 INFO barbicanclient.base [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/870e981d-0466-4924-bb2f-615552662bac#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.749 2 DEBUG barbicanclient.client [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.750 2 INFO barbicanclient.base [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/870e981d-0466-4924-bb2f-615552662bac#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.791 2 DEBUG barbicanclient.client [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.792 2 INFO barbicanclient.base [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/870e981d-0466-4924-bb2f-615552662bac#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.815 2 DEBUG barbicanclient.client [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.815 2 INFO barbicanclient.base [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/870e981d-0466-4924-bb2f-615552662bac#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.868 2 DEBUG barbicanclient.client [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.869 2 INFO barbicanclient.base [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/870e981d-0466-4924-bb2f-615552662bac#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.893 2 DEBUG barbicanclient.client [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.893 2 INFO barbicanclient.base [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/870e981d-0466-4924-bb2f-615552662bac#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.926 2 DEBUG barbicanclient.client [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.927 2 INFO barbicanclient.base [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/870e981d-0466-4924-bb2f-615552662bac#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.949 2 DEBUG barbicanclient.client [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.950 2 DEBUG nova.virt.libvirt.host [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct  1 12:57:38 np0005464891 nova_compute[259907]:  <usage type="volume">
Oct  1 12:57:38 np0005464891 nova_compute[259907]:    <volume>89c7762a-83c1-46dd-9f1e-14bd62fd31cc</volume>
Oct  1 12:57:38 np0005464891 nova_compute[259907]:  </usage>
Oct  1 12:57:38 np0005464891 nova_compute[259907]: </secret>
Oct  1 12:57:38 np0005464891 nova_compute[259907]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.962 2 DEBUG nova.objects.instance [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lazy-loading 'flavor' on Instance uuid 03ad1fe8-a967-4d62-a904-ceda4729227a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.981 2 DEBUG nova.virt.libvirt.driver [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Attempting to attach volume 89c7762a-83c1-46dd-9f1e-14bd62fd31cc with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  1 12:57:38 np0005464891 nova_compute[259907]: 2025-10-01 16:57:38.984 2 DEBUG nova.virt.libvirt.guest [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] attach device xml: <disk type="network" device="disk">
Oct  1 12:57:38 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:57:38 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-89c7762a-83c1-46dd-9f1e-14bd62fd31cc">
Oct  1 12:57:38 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:57:38 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:57:38 np0005464891 nova_compute[259907]:  <auth username="openstack">
Oct  1 12:57:38 np0005464891 nova_compute[259907]:    <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:57:38 np0005464891 nova_compute[259907]:  </auth>
Oct  1 12:57:38 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:57:38 np0005464891 nova_compute[259907]:  <serial>89c7762a-83c1-46dd-9f1e-14bd62fd31cc</serial>
Oct  1 12:57:38 np0005464891 nova_compute[259907]:  <encryption format="luks">
Oct  1 12:57:38 np0005464891 nova_compute[259907]:    <secret type="passphrase" uuid="1eb12601-ea22-4824-b132-1b6e82459983"/>
Oct  1 12:57:38 np0005464891 nova_compute[259907]:  </encryption>
Oct  1 12:57:38 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:57:38 np0005464891 nova_compute[259907]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  1 12:57:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1728: 321 pgs: 321 active+clean; 247 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 233 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct  1 12:57:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e434 do_prune osdmap full prune enabled
Oct  1 12:57:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e435 e435: 3 total, 3 up, 3 in
Oct  1 12:57:40 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e435: 3 total, 3 up, 3 in
Oct  1 12:57:40 np0005464891 nova_compute[259907]: 2025-10-01 16:57:40.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:57:41 np0005464891 nova_compute[259907]: 2025-10-01 16:57:41.120 2 DEBUG nova.virt.libvirt.driver [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:57:41 np0005464891 nova_compute[259907]: 2025-10-01 16:57:41.121 2 DEBUG nova.virt.libvirt.driver [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:57:41 np0005464891 nova_compute[259907]: 2025-10-01 16:57:41.121 2 DEBUG nova.virt.libvirt.driver [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:57:41 np0005464891 nova_compute[259907]: 2025-10-01 16:57:41.121 2 DEBUG nova.virt.libvirt.driver [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] No VIF found with MAC fa:16:3e:fd:c3:c3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:57:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1730: 321 pgs: 321 active+clean; 247 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 283 KiB/s rd, 2.6 MiB/s wr, 83 op/s
Oct  1 12:57:41 np0005464891 nova_compute[259907]: 2025-10-01 16:57:41.306 2 DEBUG oslo_concurrency.lockutils [None req-91f9e6fc-d628-4d1d-a75c-0e45104f5d8a 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "03ad1fe8-a967-4d62-a904-ceda4729227a" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:41 np0005464891 nova_compute[259907]: 2025-10-01 16:57:41.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:57:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:57:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:57:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:57:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:57:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:57:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:57:42 np0005464891 nova_compute[259907]: 2025-10-01 16:57:42.399 2 DEBUG oslo_concurrency.lockutils [None req-b10df50b-0fde-43ee-98ee-645b04275b1f 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "03ad1fe8-a967-4d62-a904-ceda4729227a" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:42 np0005464891 nova_compute[259907]: 2025-10-01 16:57:42.399 2 DEBUG oslo_concurrency.lockutils [None req-b10df50b-0fde-43ee-98ee-645b04275b1f 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "03ad1fe8-a967-4d62-a904-ceda4729227a" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:42 np0005464891 nova_compute[259907]: 2025-10-01 16:57:42.415 2 INFO nova.compute.manager [None req-b10df50b-0fde-43ee-98ee-645b04275b1f 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Detaching volume 89c7762a-83c1-46dd-9f1e-14bd62fd31cc#033[00m
Oct  1 12:57:42 np0005464891 nova_compute[259907]: 2025-10-01 16:57:42.555 2 INFO nova.virt.block_device [None req-b10df50b-0fde-43ee-98ee-645b04275b1f 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Attempting to driver detach volume 89c7762a-83c1-46dd-9f1e-14bd62fd31cc from mountpoint /dev/vdb#033[00m
Oct  1 12:57:42 np0005464891 nova_compute[259907]: 2025-10-01 16:57:42.663 2 DEBUG os_brick.encryptors [None req-b10df50b-0fde-43ee-98ee-645b04275b1f 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Using volume encryption metadata '{'encryption_key_id': '870e981d-0466-4924-bb2f-615552662bac', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-89c7762a-83c1-46dd-9f1e-14bd62fd31cc', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '89c7762a-83c1-46dd-9f1e-14bd62fd31cc', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '03ad1fe8-a967-4d62-a904-ceda4729227a', 'attached_at': '', 'detached_at': '', 'volume_id': '89c7762a-83c1-46dd-9f1e-14bd62fd31cc', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct  1 12:57:42 np0005464891 nova_compute[259907]: 2025-10-01 16:57:42.670 2 DEBUG nova.virt.libvirt.driver [None req-b10df50b-0fde-43ee-98ee-645b04275b1f 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Attempting to detach device vdb from instance 03ad1fe8-a967-4d62-a904-ceda4729227a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  1 12:57:42 np0005464891 nova_compute[259907]: 2025-10-01 16:57:42.671 2 DEBUG nova.virt.libvirt.guest [None req-b10df50b-0fde-43ee-98ee-645b04275b1f 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:57:42 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:57:42 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-89c7762a-83c1-46dd-9f1e-14bd62fd31cc">
Oct  1 12:57:42 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:57:42 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:57:42 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:57:42 np0005464891 nova_compute[259907]:  <serial>89c7762a-83c1-46dd-9f1e-14bd62fd31cc</serial>
Oct  1 12:57:42 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:57:42 np0005464891 nova_compute[259907]:  <encryption format="luks">
Oct  1 12:57:42 np0005464891 nova_compute[259907]:    <secret type="passphrase" uuid="1eb12601-ea22-4824-b132-1b6e82459983"/>
Oct  1 12:57:42 np0005464891 nova_compute[259907]:  </encryption>
Oct  1 12:57:42 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:57:42 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:57:42 np0005464891 nova_compute[259907]: 2025-10-01 16:57:42.678 2 INFO nova.virt.libvirt.driver [None req-b10df50b-0fde-43ee-98ee-645b04275b1f 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Successfully detached device vdb from instance 03ad1fe8-a967-4d62-a904-ceda4729227a from the persistent domain config.#033[00m
Oct  1 12:57:42 np0005464891 nova_compute[259907]: 2025-10-01 16:57:42.678 2 DEBUG nova.virt.libvirt.driver [None req-b10df50b-0fde-43ee-98ee-645b04275b1f 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 03ad1fe8-a967-4d62-a904-ceda4729227a from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  1 12:57:42 np0005464891 nova_compute[259907]: 2025-10-01 16:57:42.678 2 DEBUG nova.virt.libvirt.guest [None req-b10df50b-0fde-43ee-98ee-645b04275b1f 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] detach device xml: <disk type="network" device="disk">
Oct  1 12:57:42 np0005464891 nova_compute[259907]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:57:42 np0005464891 nova_compute[259907]:  <source protocol="rbd" name="volumes/volume-89c7762a-83c1-46dd-9f1e-14bd62fd31cc">
Oct  1 12:57:42 np0005464891 nova_compute[259907]:    <host name="192.168.122.100" port="6789"/>
Oct  1 12:57:42 np0005464891 nova_compute[259907]:  </source>
Oct  1 12:57:42 np0005464891 nova_compute[259907]:  <target dev="vdb" bus="virtio"/>
Oct  1 12:57:42 np0005464891 nova_compute[259907]:  <serial>89c7762a-83c1-46dd-9f1e-14bd62fd31cc</serial>
Oct  1 12:57:42 np0005464891 nova_compute[259907]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  1 12:57:42 np0005464891 nova_compute[259907]:  <encryption format="luks">
Oct  1 12:57:42 np0005464891 nova_compute[259907]:    <secret type="passphrase" uuid="1eb12601-ea22-4824-b132-1b6e82459983"/>
Oct  1 12:57:42 np0005464891 nova_compute[259907]:  </encryption>
Oct  1 12:57:42 np0005464891 nova_compute[259907]: </disk>
Oct  1 12:57:42 np0005464891 nova_compute[259907]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  1 12:57:42 np0005464891 nova_compute[259907]: 2025-10-01 16:57:42.785 2 DEBUG nova.virt.libvirt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Received event <DeviceRemovedEvent: 1759337862.7848387, 03ad1fe8-a967-4d62-a904-ceda4729227a => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  1 12:57:42 np0005464891 nova_compute[259907]: 2025-10-01 16:57:42.787 2 DEBUG nova.virt.libvirt.driver [None req-b10df50b-0fde-43ee-98ee-645b04275b1f 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 03ad1fe8-a967-4d62-a904-ceda4729227a _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  1 12:57:42 np0005464891 nova_compute[259907]: 2025-10-01 16:57:42.789 2 INFO nova.virt.libvirt.driver [None req-b10df50b-0fde-43ee-98ee-645b04275b1f 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Successfully detached device vdb from instance 03ad1fe8-a967-4d62-a904-ceda4729227a from the live domain config.#033[00m
Oct  1 12:57:42 np0005464891 podman[295793]: 2025-10-01 16:57:42.996051549 +0000 UTC m=+0.100768962 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  1 12:57:43 np0005464891 nova_compute[259907]: 2025-10-01 16:57:43.045 2 DEBUG nova.objects.instance [None req-b10df50b-0fde-43ee-98ee-645b04275b1f 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lazy-loading 'flavor' on Instance uuid 03ad1fe8-a967-4d62-a904-ceda4729227a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:57:43 np0005464891 nova_compute[259907]: 2025-10-01 16:57:43.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:43 np0005464891 nova_compute[259907]: 2025-10-01 16:57:43.075 2 DEBUG oslo_concurrency.lockutils [None req-b10df50b-0fde-43ee-98ee-645b04275b1f 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "03ad1fe8-a967-4d62-a904-ceda4729227a" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1731: 321 pgs: 321 active+clean; 247 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 204 KiB/s rd, 1.2 MiB/s wr, 61 op/s
Oct  1 12:57:43 np0005464891 nova_compute[259907]: 2025-10-01 16:57:43.984 2 DEBUG oslo_concurrency.lockutils [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "03ad1fe8-a967-4d62-a904-ceda4729227a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:43 np0005464891 nova_compute[259907]: 2025-10-01 16:57:43.985 2 DEBUG oslo_concurrency.lockutils [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "03ad1fe8-a967-4d62-a904-ceda4729227a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:43 np0005464891 nova_compute[259907]: 2025-10-01 16:57:43.985 2 DEBUG oslo_concurrency.lockutils [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "03ad1fe8-a967-4d62-a904-ceda4729227a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:43 np0005464891 nova_compute[259907]: 2025-10-01 16:57:43.985 2 DEBUG oslo_concurrency.lockutils [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "03ad1fe8-a967-4d62-a904-ceda4729227a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:43 np0005464891 nova_compute[259907]: 2025-10-01 16:57:43.986 2 DEBUG oslo_concurrency.lockutils [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "03ad1fe8-a967-4d62-a904-ceda4729227a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:43 np0005464891 nova_compute[259907]: 2025-10-01 16:57:43.987 2 INFO nova.compute.manager [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Terminating instance#033[00m
Oct  1 12:57:43 np0005464891 nova_compute[259907]: 2025-10-01 16:57:43.988 2 DEBUG nova.compute.manager [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 12:57:44 np0005464891 kernel: tap7094fed9-93 (unregistering): left promiscuous mode
Oct  1 12:57:44 np0005464891 NetworkManager[44940]: <info>  [1759337864.0481] device (tap7094fed9-93): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 12:57:44 np0005464891 ovn_controller[152409]: 2025-10-01T16:57:44Z|00182|binding|INFO|Releasing lport 7094fed9-935c-41be-bfa9-a61118606ba8 from this chassis (sb_readonly=0)
Oct  1 12:57:44 np0005464891 ovn_controller[152409]: 2025-10-01T16:57:44Z|00183|binding|INFO|Setting lport 7094fed9-935c-41be-bfa9-a61118606ba8 down in Southbound
Oct  1 12:57:44 np0005464891 nova_compute[259907]: 2025-10-01 16:57:44.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:44 np0005464891 ovn_controller[152409]: 2025-10-01T16:57:44Z|00184|binding|INFO|Removing iface tap7094fed9-93 ovn-installed in OVS
Oct  1 12:57:44 np0005464891 nova_compute[259907]: 2025-10-01 16:57:44.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:44.063 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:c3:c3 10.100.0.9'], port_security=['fa:16:3e:fd:c3:c3 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '03ad1fe8-a967-4d62-a904-ceda4729227a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2345ad6b-d676-4546-a17e-6f7405ff5f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bb5e44f7928546dfb674d53cd3727027', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7748abdc-2492-422e-a502-5b4edc6dc141', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.182'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08e741b0-61e8-4126-b98f-610a01494f2d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=7094fed9-935c-41be-bfa9-a61118606ba8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:57:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:44.066 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 7094fed9-935c-41be-bfa9-a61118606ba8 in datapath 2345ad6b-d676-4546-a17e-6f7405ff5f24 unbound from our chassis#033[00m
Oct  1 12:57:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:44.069 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2345ad6b-d676-4546-a17e-6f7405ff5f24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 12:57:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:44.070 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[dc64f8e0-49c5-4313-b831-7a27d4d9d5eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:44.071 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24 namespace which is not needed anymore#033[00m
Oct  1 12:57:44 np0005464891 nova_compute[259907]: 2025-10-01 16:57:44.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:44 np0005464891 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Deactivated successfully.
Oct  1 12:57:44 np0005464891 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Consumed 16.628s CPU time.
Oct  1 12:57:44 np0005464891 systemd-machined[214891]: Machine qemu-18-instance-00000012 terminated.
Oct  1 12:57:44 np0005464891 neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24[295330]: [NOTICE]   (295334) : haproxy version is 2.8.14-c23fe91
Oct  1 12:57:44 np0005464891 neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24[295330]: [NOTICE]   (295334) : path to executable is /usr/sbin/haproxy
Oct  1 12:57:44 np0005464891 neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24[295330]: [WARNING]  (295334) : Exiting Master process...
Oct  1 12:57:44 np0005464891 neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24[295330]: [ALERT]    (295334) : Current worker (295336) exited with code 143 (Terminated)
Oct  1 12:57:44 np0005464891 neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24[295330]: [WARNING]  (295334) : All workers exited. Exiting... (0)
Oct  1 12:57:44 np0005464891 systemd[1]: libpod-a064aa1ed4636262502f6f2a7b1dd6de214d26adb56ac5daaac178719e741d3b.scope: Deactivated successfully.
Oct  1 12:57:44 np0005464891 podman[295843]: 2025-10-01 16:57:44.208462914 +0000 UTC m=+0.047674727 container died a064aa1ed4636262502f6f2a7b1dd6de214d26adb56ac5daaac178719e741d3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  1 12:57:44 np0005464891 nova_compute[259907]: 2025-10-01 16:57:44.225 2 INFO nova.virt.libvirt.driver [-] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Instance destroyed successfully.#033[00m
Oct  1 12:57:44 np0005464891 nova_compute[259907]: 2025-10-01 16:57:44.226 2 DEBUG nova.objects.instance [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lazy-loading 'resources' on Instance uuid 03ad1fe8-a967-4d62-a904-ceda4729227a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:57:44 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a064aa1ed4636262502f6f2a7b1dd6de214d26adb56ac5daaac178719e741d3b-userdata-shm.mount: Deactivated successfully.
Oct  1 12:57:44 np0005464891 systemd[1]: var-lib-containers-storage-overlay-7f54595462b2204a762ad58839b6ac7f6e0faa1abe6b493d9105ae23a3392182-merged.mount: Deactivated successfully.
Oct  1 12:57:44 np0005464891 nova_compute[259907]: 2025-10-01 16:57:44.244 2 DEBUG nova.virt.libvirt.vif [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T16:56:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-829299339',display_name='tempest-TestEncryptedCinderVolumes-server-829299339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-829299339',id=18,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIl8GG+Hu3ZeIB1jTbep6CoWVksHXyZXyjvntmOv7OGRe4G98GRtUibF6/2O1ilX4yVyQx2ndKQDONwIhDbTq9iQHoxJ5BxTIpatSro6LGX2MFYFIPrpekYlMom8yztJVQ==',key_name='tempest-keypair-341137682',keypairs=<?>,launch_index=0,launched_at=2025-10-01T16:57:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bb5e44f7928546dfb674d53cd3727027',ramdisk_id='',reservation_id='r-qpd0blyf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-803701988',owner_user_name='tempest-TestEncryptedCinderVolumes-803701988-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T16:57:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='906d3d29e27b49c1860f5397c6028d96',uuid=03ad1fe8-a967-4d62-a904-ceda4729227a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7094fed9-935c-41be-bfa9-a61118606ba8", "address": "fa:16:3e:fd:c3:c3", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7094fed9-93", "ovs_interfaceid": "7094fed9-935c-41be-bfa9-a61118606ba8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 12:57:44 np0005464891 nova_compute[259907]: 2025-10-01 16:57:44.245 2 DEBUG nova.network.os_vif_util [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Converting VIF {"id": "7094fed9-935c-41be-bfa9-a61118606ba8", "address": "fa:16:3e:fd:c3:c3", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7094fed9-93", "ovs_interfaceid": "7094fed9-935c-41be-bfa9-a61118606ba8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:57:44 np0005464891 nova_compute[259907]: 2025-10-01 16:57:44.246 2 DEBUG nova.network.os_vif_util [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fd:c3:c3,bridge_name='br-int',has_traffic_filtering=True,id=7094fed9-935c-41be-bfa9-a61118606ba8,network=Network(2345ad6b-d676-4546-a17e-6f7405ff5f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7094fed9-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:57:44 np0005464891 nova_compute[259907]: 2025-10-01 16:57:44.246 2 DEBUG os_vif [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fd:c3:c3,bridge_name='br-int',has_traffic_filtering=True,id=7094fed9-935c-41be-bfa9-a61118606ba8,network=Network(2345ad6b-d676-4546-a17e-6f7405ff5f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7094fed9-93') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 12:57:44 np0005464891 nova_compute[259907]: 2025-10-01 16:57:44.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:44 np0005464891 nova_compute[259907]: 2025-10-01 16:57:44.248 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7094fed9-93, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:57:44 np0005464891 nova_compute[259907]: 2025-10-01 16:57:44.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:44 np0005464891 nova_compute[259907]: 2025-10-01 16:57:44.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:44 np0005464891 nova_compute[259907]: 2025-10-01 16:57:44.255 2 INFO os_vif [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fd:c3:c3,bridge_name='br-int',has_traffic_filtering=True,id=7094fed9-935c-41be-bfa9-a61118606ba8,network=Network(2345ad6b-d676-4546-a17e-6f7405ff5f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7094fed9-93')#033[00m
Oct  1 12:57:44 np0005464891 podman[295843]: 2025-10-01 16:57:44.256028907 +0000 UTC m=+0.095240690 container cleanup a064aa1ed4636262502f6f2a7b1dd6de214d26adb56ac5daaac178719e741d3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:57:44 np0005464891 systemd[1]: libpod-conmon-a064aa1ed4636262502f6f2a7b1dd6de214d26adb56ac5daaac178719e741d3b.scope: Deactivated successfully.
Oct  1 12:57:44 np0005464891 podman[295891]: 2025-10-01 16:57:44.343731491 +0000 UTC m=+0.058847414 container remove a064aa1ed4636262502f6f2a7b1dd6de214d26adb56ac5daaac178719e741d3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  1 12:57:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:44.353 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[827e3142-02f4-4a35-b2e9-c5d5844615d9]: (4, ('Wed Oct  1 04:57:44 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24 (a064aa1ed4636262502f6f2a7b1dd6de214d26adb56ac5daaac178719e741d3b)\na064aa1ed4636262502f6f2a7b1dd6de214d26adb56ac5daaac178719e741d3b\nWed Oct  1 04:57:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24 (a064aa1ed4636262502f6f2a7b1dd6de214d26adb56ac5daaac178719e741d3b)\na064aa1ed4636262502f6f2a7b1dd6de214d26adb56ac5daaac178719e741d3b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:44.355 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[59d288ca-5a52-440d-898a-29292b1bb5cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:44.356 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2345ad6b-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:57:44 np0005464891 nova_compute[259907]: 2025-10-01 16:57:44.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:44 np0005464891 kernel: tap2345ad6b-d0: left promiscuous mode
Oct  1 12:57:44 np0005464891 nova_compute[259907]: 2025-10-01 16:57:44.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:44.365 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[e391fd95-0556-46d8-916b-0538c184cdb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:44 np0005464891 nova_compute[259907]: 2025-10-01 16:57:44.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:44.403 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[37dc7c13-c0a3-41d9-8b27-aa2ef5911062]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:44.404 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[1c7b77a7-ebdc-44e3-875f-780bd1996cba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:44.419 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[4a561c13-2cca-426d-9658-4530c97cc7a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476951, 'reachable_time': 30814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295913, 'error': None, 'target': 'ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:44 np0005464891 systemd[1]: run-netns-ovnmeta\x2d2345ad6b\x2dd676\x2d4546\x2da17e\x2d6f7405ff5f24.mount: Deactivated successfully.
Oct  1 12:57:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:44.426 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 12:57:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:44.427 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[814d9a51-3505-489f-8a34-d7fc9deb97e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:44 np0005464891 nova_compute[259907]: 2025-10-01 16:57:44.681 2 INFO nova.virt.libvirt.driver [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Deleting instance files /var/lib/nova/instances/03ad1fe8-a967-4d62-a904-ceda4729227a_del#033[00m
Oct  1 12:57:44 np0005464891 nova_compute[259907]: 2025-10-01 16:57:44.681 2 INFO nova.virt.libvirt.driver [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Deletion of /var/lib/nova/instances/03ad1fe8-a967-4d62-a904-ceda4729227a_del complete#033[00m
Oct  1 12:57:44 np0005464891 nova_compute[259907]: 2025-10-01 16:57:44.759 2 INFO nova.compute.manager [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 12:57:44 np0005464891 nova_compute[259907]: 2025-10-01 16:57:44.760 2 DEBUG oslo.service.loopingcall [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 12:57:44 np0005464891 nova_compute[259907]: 2025-10-01 16:57:44.760 2 DEBUG nova.compute.manager [-] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 12:57:44 np0005464891 nova_compute[259907]: 2025-10-01 16:57:44.760 2 DEBUG nova.network.neutron [-] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 12:57:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1732: 321 pgs: 321 active+clean; 187 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 21 KiB/s wr, 44 op/s
Oct  1 12:57:45 np0005464891 nova_compute[259907]: 2025-10-01 16:57:45.326 2 DEBUG nova.compute.manager [req-19ac234f-ce3a-4b5b-80b7-32a12e183fb4 req-07cf371d-394b-4cfe-8272-2e019dc76d21 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Received event network-vif-unplugged-7094fed9-935c-41be-bfa9-a61118606ba8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:57:45 np0005464891 nova_compute[259907]: 2025-10-01 16:57:45.327 2 DEBUG oslo_concurrency.lockutils [req-19ac234f-ce3a-4b5b-80b7-32a12e183fb4 req-07cf371d-394b-4cfe-8272-2e019dc76d21 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "03ad1fe8-a967-4d62-a904-ceda4729227a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:45 np0005464891 nova_compute[259907]: 2025-10-01 16:57:45.327 2 DEBUG oslo_concurrency.lockutils [req-19ac234f-ce3a-4b5b-80b7-32a12e183fb4 req-07cf371d-394b-4cfe-8272-2e019dc76d21 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "03ad1fe8-a967-4d62-a904-ceda4729227a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:45 np0005464891 nova_compute[259907]: 2025-10-01 16:57:45.327 2 DEBUG oslo_concurrency.lockutils [req-19ac234f-ce3a-4b5b-80b7-32a12e183fb4 req-07cf371d-394b-4cfe-8272-2e019dc76d21 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "03ad1fe8-a967-4d62-a904-ceda4729227a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:45 np0005464891 nova_compute[259907]: 2025-10-01 16:57:45.327 2 DEBUG nova.compute.manager [req-19ac234f-ce3a-4b5b-80b7-32a12e183fb4 req-07cf371d-394b-4cfe-8272-2e019dc76d21 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] No waiting events found dispatching network-vif-unplugged-7094fed9-935c-41be-bfa9-a61118606ba8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:57:45 np0005464891 nova_compute[259907]: 2025-10-01 16:57:45.328 2 DEBUG nova.compute.manager [req-19ac234f-ce3a-4b5b-80b7-32a12e183fb4 req-07cf371d-394b-4cfe-8272-2e019dc76d21 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Received event network-vif-unplugged-7094fed9-935c-41be-bfa9-a61118606ba8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 12:57:46 np0005464891 nova_compute[259907]: 2025-10-01 16:57:46.214 2 DEBUG nova.network.neutron [-] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:57:46 np0005464891 nova_compute[259907]: 2025-10-01 16:57:46.233 2 INFO nova.compute.manager [-] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Took 1.47 seconds to deallocate network for instance.#033[00m
Oct  1 12:57:46 np0005464891 nova_compute[259907]: 2025-10-01 16:57:46.286 2 DEBUG oslo_concurrency.lockutils [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:46 np0005464891 nova_compute[259907]: 2025-10-01 16:57:46.287 2 DEBUG oslo_concurrency.lockutils [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:46 np0005464891 nova_compute[259907]: 2025-10-01 16:57:46.352 2 DEBUG nova.compute.manager [req-7db2c1fd-ef1d-4e0a-93a7-c52b62f10c7a req-e6dbdd3d-c374-4eb8-acf1-2820dc8b4af9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Received event network-vif-deleted-7094fed9-935c-41be-bfa9-a61118606ba8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:57:46 np0005464891 nova_compute[259907]: 2025-10-01 16:57:46.355 2 DEBUG oslo_concurrency.processutils [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:46 np0005464891 nova_compute[259907]: 2025-10-01 16:57:46.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:46 np0005464891 nova_compute[259907]: 2025-10-01 16:57:46.629 2 DEBUG oslo_concurrency.lockutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "f5c6a668-6fa1-4a25-974c-0395fc52bf1b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:46 np0005464891 nova_compute[259907]: 2025-10-01 16:57:46.629 2 DEBUG oslo_concurrency.lockutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "f5c6a668-6fa1-4a25-974c-0395fc52bf1b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:46 np0005464891 nova_compute[259907]: 2025-10-01 16:57:46.647 2 DEBUG nova.compute.manager [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 12:57:46 np0005464891 nova_compute[259907]: 2025-10-01 16:57:46.716 2 DEBUG oslo_concurrency.lockutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:57:46 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1775436777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:57:46 np0005464891 nova_compute[259907]: 2025-10-01 16:57:46.846 2 DEBUG oslo_concurrency.processutils [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:57:46 np0005464891 nova_compute[259907]: 2025-10-01 16:57:46.852 2 DEBUG nova.compute.provider_tree [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:57:46 np0005464891 nova_compute[259907]: 2025-10-01 16:57:46.872 2 DEBUG nova.scheduler.client.report [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:57:46 np0005464891 nova_compute[259907]: 2025-10-01 16:57:46.906 2 DEBUG oslo_concurrency.lockutils [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:46 np0005464891 nova_compute[259907]: 2025-10-01 16:57:46.908 2 DEBUG oslo_concurrency.lockutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:46 np0005464891 nova_compute[259907]: 2025-10-01 16:57:46.915 2 DEBUG nova.virt.hardware [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 12:57:46 np0005464891 nova_compute[259907]: 2025-10-01 16:57:46.915 2 INFO nova.compute.claims [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 12:57:46 np0005464891 nova_compute[259907]: 2025-10-01 16:57:46.939 2 INFO nova.scheduler.client.report [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Deleted allocations for instance 03ad1fe8-a967-4d62-a904-ceda4729227a#033[00m
Oct  1 12:57:47 np0005464891 nova_compute[259907]: 2025-10-01 16:57:47.028 2 DEBUG oslo_concurrency.lockutils [None req-de36d601-3291-4686-8cde-c0dedcfefd32 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "03ad1fe8-a967-4d62-a904-ceda4729227a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:47 np0005464891 nova_compute[259907]: 2025-10-01 16:57:47.083 2 DEBUG oslo_concurrency.processutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:57:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1733: 321 pgs: 321 active+clean; 187 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 21 KiB/s wr, 44 op/s
Oct  1 12:57:47 np0005464891 nova_compute[259907]: 2025-10-01 16:57:47.392 2 DEBUG nova.compute.manager [req-3ff3643e-5b15-4cbc-aa18-e0224c420949 req-176d04f7-987a-4f8c-8156-63c68bdfd004 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Received event network-vif-plugged-7094fed9-935c-41be-bfa9-a61118606ba8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:57:47 np0005464891 nova_compute[259907]: 2025-10-01 16:57:47.393 2 DEBUG oslo_concurrency.lockutils [req-3ff3643e-5b15-4cbc-aa18-e0224c420949 req-176d04f7-987a-4f8c-8156-63c68bdfd004 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "03ad1fe8-a967-4d62-a904-ceda4729227a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:47 np0005464891 nova_compute[259907]: 2025-10-01 16:57:47.394 2 DEBUG oslo_concurrency.lockutils [req-3ff3643e-5b15-4cbc-aa18-e0224c420949 req-176d04f7-987a-4f8c-8156-63c68bdfd004 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "03ad1fe8-a967-4d62-a904-ceda4729227a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:47 np0005464891 nova_compute[259907]: 2025-10-01 16:57:47.394 2 DEBUG oslo_concurrency.lockutils [req-3ff3643e-5b15-4cbc-aa18-e0224c420949 req-176d04f7-987a-4f8c-8156-63c68bdfd004 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "03ad1fe8-a967-4d62-a904-ceda4729227a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:47 np0005464891 nova_compute[259907]: 2025-10-01 16:57:47.395 2 DEBUG nova.compute.manager [req-3ff3643e-5b15-4cbc-aa18-e0224c420949 req-176d04f7-987a-4f8c-8156-63c68bdfd004 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] No waiting events found dispatching network-vif-plugged-7094fed9-935c-41be-bfa9-a61118606ba8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:57:47 np0005464891 nova_compute[259907]: 2025-10-01 16:57:47.395 2 WARNING nova.compute.manager [req-3ff3643e-5b15-4cbc-aa18-e0224c420949 req-176d04f7-987a-4f8c-8156-63c68bdfd004 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Received unexpected event network-vif-plugged-7094fed9-935c-41be-bfa9-a61118606ba8 for instance with vm_state deleted and task_state None.#033[00m
Oct  1 12:57:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:57:47 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/538501509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:57:47 np0005464891 nova_compute[259907]: 2025-10-01 16:57:47.537 2 DEBUG oslo_concurrency.processutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:57:47 np0005464891 nova_compute[259907]: 2025-10-01 16:57:47.544 2 DEBUG nova.compute.provider_tree [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:57:47 np0005464891 nova_compute[259907]: 2025-10-01 16:57:47.565 2 DEBUG nova.scheduler.client.report [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:57:47 np0005464891 nova_compute[259907]: 2025-10-01 16:57:47.596 2 DEBUG oslo_concurrency.lockutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:47 np0005464891 nova_compute[259907]: 2025-10-01 16:57:47.598 2 DEBUG nova.compute.manager [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 12:57:47 np0005464891 nova_compute[259907]: 2025-10-01 16:57:47.648 2 INFO nova.virt.libvirt.driver [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 12:57:47 np0005464891 nova_compute[259907]: 2025-10-01 16:57:47.651 2 DEBUG nova.compute.manager [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 12:57:47 np0005464891 nova_compute[259907]: 2025-10-01 16:57:47.651 2 DEBUG nova.network.neutron [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 12:57:47 np0005464891 nova_compute[259907]: 2025-10-01 16:57:47.672 2 DEBUG nova.compute.manager [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 12:57:47 np0005464891 nova_compute[259907]: 2025-10-01 16:57:47.725 2 INFO nova.virt.block_device [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Booting with volume snapshot 0eef489b-fa19-434e-aa40-d8fabfd6bcfd at /dev/vda#033[00m
Oct  1 12:57:47 np0005464891 nova_compute[259907]: 2025-10-01 16:57:47.835 2 DEBUG nova.policy [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1280014cdfb74333ae8d71c78116e646', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8318b65fa88942a99937a0d198a04a9c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 12:57:47 np0005464891 podman[295959]: 2025-10-01 16:57:47.943538964 +0000 UTC m=+0.054537956 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.license=GPLv2)
Oct  1 12:57:48 np0005464891 nova_compute[259907]: 2025-10-01 16:57:48.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:48 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:48.623 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:57:48 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:48.625 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 12:57:48 np0005464891 nova_compute[259907]: 2025-10-01 16:57:48.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1734: 321 pgs: 321 active+clean; 167 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 22 KiB/s wr, 61 op/s
Oct  1 12:57:49 np0005464891 nova_compute[259907]: 2025-10-01 16:57:49.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:49 np0005464891 nova_compute[259907]: 2025-10-01 16:57:49.378 2 DEBUG nova.network.neutron [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Successfully created port: f38329c7-0a79-480d-86b8-0cdc29deda98 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 12:57:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:57:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3631074885' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:57:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:57:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3631074885' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:57:50 np0005464891 nova_compute[259907]: 2025-10-01 16:57:50.514 2 DEBUG nova.network.neutron [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Successfully updated port: f38329c7-0a79-480d-86b8-0cdc29deda98 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 12:57:50 np0005464891 nova_compute[259907]: 2025-10-01 16:57:50.533 2 DEBUG oslo_concurrency.lockutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "refresh_cache-f5c6a668-6fa1-4a25-974c-0395fc52bf1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:57:50 np0005464891 nova_compute[259907]: 2025-10-01 16:57:50.533 2 DEBUG oslo_concurrency.lockutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquired lock "refresh_cache-f5c6a668-6fa1-4a25-974c-0395fc52bf1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:57:50 np0005464891 nova_compute[259907]: 2025-10-01 16:57:50.534 2 DEBUG nova.network.neutron [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 12:57:50 np0005464891 nova_compute[259907]: 2025-10-01 16:57:50.630 2 DEBUG nova.compute.manager [req-eae00f80-94ac-4bb5-984e-ca278f9db60b req-6613c78b-c08f-47e2-a58f-b265276401e5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Received event network-changed-f38329c7-0a79-480d-86b8-0cdc29deda98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:57:50 np0005464891 nova_compute[259907]: 2025-10-01 16:57:50.630 2 DEBUG nova.compute.manager [req-eae00f80-94ac-4bb5-984e-ca278f9db60b req-6613c78b-c08f-47e2-a58f-b265276401e5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Refreshing instance network info cache due to event network-changed-f38329c7-0a79-480d-86b8-0cdc29deda98. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:57:50 np0005464891 nova_compute[259907]: 2025-10-01 16:57:50.630 2 DEBUG oslo_concurrency.lockutils [req-eae00f80-94ac-4bb5-984e-ca278f9db60b req-6613c78b-c08f-47e2-a58f-b265276401e5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-f5c6a668-6fa1-4a25-974c-0395fc52bf1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:57:50 np0005464891 nova_compute[259907]: 2025-10-01 16:57:50.741 2 DEBUG nova.network.neutron [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 12:57:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  1 12:57:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 12:57:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:57:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:57:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:57:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:57:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:57:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:57:50 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev dfb0dcca-7f35-4fc5-831a-8ee021e47243 does not exist
Oct  1 12:57:50 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev b86e929f-c0bd-4293-a799-89048117140c does not exist
Oct  1 12:57:50 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev a0a95604-245c-45a9-8f71-85613d738871 does not exist
Oct  1 12:57:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:57:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:57:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:57:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:57:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:57:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:57:51 np0005464891 podman[296137]: 2025-10-01 16:57:51.134326971 +0000 UTC m=+0.090586824 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3)
Oct  1 12:57:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:57:51 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1364778670' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:57:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:57:51 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1364778670' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:57:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1735: 321 pgs: 321 active+clean; 167 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 5.8 KiB/s wr, 70 op/s
Oct  1 12:57:51 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 12:57:51 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:57:51 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:57:51 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:57:51 np0005464891 podman[296274]: 2025-10-01 16:57:51.572216697 +0000 UTC m=+0.047734809 container create 55c5a2f52774c070a662082005d22d85de3933bbe830fbadf8edf39951f995be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:57:51 np0005464891 systemd[1]: Started libpod-conmon-55c5a2f52774c070a662082005d22d85de3933bbe830fbadf8edf39951f995be.scope.
Oct  1 12:57:51 np0005464891 nova_compute[259907]: 2025-10-01 16:57:51.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:51 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:57:51 np0005464891 podman[296274]: 2025-10-01 16:57:51.55405965 +0000 UTC m=+0.029577782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:57:51 np0005464891 podman[296274]: 2025-10-01 16:57:51.670008606 +0000 UTC m=+0.145526738 container init 55c5a2f52774c070a662082005d22d85de3933bbe830fbadf8edf39951f995be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:57:51 np0005464891 podman[296274]: 2025-10-01 16:57:51.681737908 +0000 UTC m=+0.157256020 container start 55c5a2f52774c070a662082005d22d85de3933bbe830fbadf8edf39951f995be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 12:57:51 np0005464891 podman[296274]: 2025-10-01 16:57:51.686489298 +0000 UTC m=+0.162007440 container attach 55c5a2f52774c070a662082005d22d85de3933bbe830fbadf8edf39951f995be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:57:51 np0005464891 stoic_margulis[296291]: 167 167
Oct  1 12:57:51 np0005464891 systemd[1]: libpod-55c5a2f52774c070a662082005d22d85de3933bbe830fbadf8edf39951f995be.scope: Deactivated successfully.
Oct  1 12:57:51 np0005464891 conmon[296291]: conmon 55c5a2f52774c070a662 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-55c5a2f52774c070a662082005d22d85de3933bbe830fbadf8edf39951f995be.scope/container/memory.events
Oct  1 12:57:51 np0005464891 podman[296274]: 2025-10-01 16:57:51.692562924 +0000 UTC m=+0.168081036 container died 55c5a2f52774c070a662082005d22d85de3933bbe830fbadf8edf39951f995be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:57:51 np0005464891 systemd[1]: var-lib-containers-storage-overlay-95d9360505913bae9fefd90092bcdb8ce3c1ef20dacff6ec6ae1b6ed8e3b0c57-merged.mount: Deactivated successfully.
Oct  1 12:57:51 np0005464891 nova_compute[259907]: 2025-10-01 16:57:51.733 2 DEBUG nova.network.neutron [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Updating instance_info_cache with network_info: [{"id": "f38329c7-0a79-480d-86b8-0cdc29deda98", "address": "fa:16:3e:5d:5d:50", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf38329c7-0a", "ovs_interfaceid": "f38329c7-0a79-480d-86b8-0cdc29deda98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:57:51 np0005464891 podman[296274]: 2025-10-01 16:57:51.772947757 +0000 UTC m=+0.248465869 container remove 55c5a2f52774c070a662082005d22d85de3933bbe830fbadf8edf39951f995be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:57:51 np0005464891 nova_compute[259907]: 2025-10-01 16:57:51.779 2 DEBUG oslo_concurrency.lockutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Releasing lock "refresh_cache-f5c6a668-6fa1-4a25-974c-0395fc52bf1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:57:51 np0005464891 nova_compute[259907]: 2025-10-01 16:57:51.780 2 DEBUG nova.compute.manager [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Instance network_info: |[{"id": "f38329c7-0a79-480d-86b8-0cdc29deda98", "address": "fa:16:3e:5d:5d:50", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf38329c7-0a", "ovs_interfaceid": "f38329c7-0a79-480d-86b8-0cdc29deda98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 12:57:51 np0005464891 nova_compute[259907]: 2025-10-01 16:57:51.782 2 DEBUG oslo_concurrency.lockutils [req-eae00f80-94ac-4bb5-984e-ca278f9db60b req-6613c78b-c08f-47e2-a58f-b265276401e5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-f5c6a668-6fa1-4a25-974c-0395fc52bf1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:57:51 np0005464891 nova_compute[259907]: 2025-10-01 16:57:51.782 2 DEBUG nova.network.neutron [req-eae00f80-94ac-4bb5-984e-ca278f9db60b req-6613c78b-c08f-47e2-a58f-b265276401e5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Refreshing network info cache for port f38329c7-0a79-480d-86b8-0cdc29deda98 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:57:51 np0005464891 systemd[1]: libpod-conmon-55c5a2f52774c070a662082005d22d85de3933bbe830fbadf8edf39951f995be.scope: Deactivated successfully.
Oct  1 12:57:51 np0005464891 podman[296315]: 2025-10-01 16:57:51.987004251 +0000 UTC m=+0.063291726 container create 6ea0549873de97faa8382a08ea88d6d46f85ac62fc23658d5b5b0fcd4e082e40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_feistel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:57:52 np0005464891 systemd[1]: Started libpod-conmon-6ea0549873de97faa8382a08ea88d6d46f85ac62fc23658d5b5b0fcd4e082e40.scope.
Oct  1 12:57:52 np0005464891 podman[296315]: 2025-10-01 16:57:51.95120618 +0000 UTC m=+0.027493675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:57:52 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:57:52 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a99b23b469cea3ee14a06ff97d1d1d6f7a0445f088fab1173b699e2eb3271a9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:57:52 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a99b23b469cea3ee14a06ff97d1d1d6f7a0445f088fab1173b699e2eb3271a9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:57:52 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a99b23b469cea3ee14a06ff97d1d1d6f7a0445f088fab1173b699e2eb3271a9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:57:52 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a99b23b469cea3ee14a06ff97d1d1d6f7a0445f088fab1173b699e2eb3271a9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:57:52 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a99b23b469cea3ee14a06ff97d1d1d6f7a0445f088fab1173b699e2eb3271a9a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:57:52 np0005464891 nova_compute[259907]: 2025-10-01 16:57:52.085 2 DEBUG os_brick.utils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 12:57:52 np0005464891 nova_compute[259907]: 2025-10-01 16:57:52.088 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:52 np0005464891 podman[296315]: 2025-10-01 16:57:52.093965591 +0000 UTC m=+0.170253066 container init 6ea0549873de97faa8382a08ea88d6d46f85ac62fc23658d5b5b0fcd4e082e40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_feistel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 12:57:52 np0005464891 nova_compute[259907]: 2025-10-01 16:57:52.101 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:57:52 np0005464891 nova_compute[259907]: 2025-10-01 16:57:52.101 741 DEBUG oslo.privsep.daemon [-] privsep: reply[262d74f0-2cd3-4003-8cbf-da65ccdd9bee]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:52 np0005464891 nova_compute[259907]: 2025-10-01 16:57:52.102 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:52 np0005464891 podman[296315]: 2025-10-01 16:57:52.103183894 +0000 UTC m=+0.179471369 container start 6ea0549873de97faa8382a08ea88d6d46f85ac62fc23658d5b5b0fcd4e082e40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:57:52 np0005464891 podman[296315]: 2025-10-01 16:57:52.106827664 +0000 UTC m=+0.183115139 container attach 6ea0549873de97faa8382a08ea88d6d46f85ac62fc23658d5b5b0fcd4e082e40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_feistel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 12:57:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:57:52 np0005464891 nova_compute[259907]: 2025-10-01 16:57:52.112 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:57:52 np0005464891 nova_compute[259907]: 2025-10-01 16:57:52.113 741 DEBUG oslo.privsep.daemon [-] privsep: reply[42dca785-00c4-4785-a0e9-5e78f2d7cf08]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:52 np0005464891 nova_compute[259907]: 2025-10-01 16:57:52.114 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:52 np0005464891 nova_compute[259907]: 2025-10-01 16:57:52.126 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:57:52 np0005464891 nova_compute[259907]: 2025-10-01 16:57:52.127 741 DEBUG oslo.privsep.daemon [-] privsep: reply[681fecae-ba02-4bf0-b93c-92f3b37a0480]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:52 np0005464891 nova_compute[259907]: 2025-10-01 16:57:52.128 741 DEBUG oslo.privsep.daemon [-] privsep: reply[309c2af9-dc8d-4565-b7df-28897df0d100]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:52 np0005464891 nova_compute[259907]: 2025-10-01 16:57:52.129 2 DEBUG oslo_concurrency.processutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:52 np0005464891 nova_compute[259907]: 2025-10-01 16:57:52.159 2 DEBUG oslo_concurrency.processutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:57:52 np0005464891 nova_compute[259907]: 2025-10-01 16:57:52.162 2 DEBUG os_brick.initiator.connectors.lightos [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 12:57:52 np0005464891 nova_compute[259907]: 2025-10-01 16:57:52.162 2 DEBUG os_brick.initiator.connectors.lightos [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 12:57:52 np0005464891 nova_compute[259907]: 2025-10-01 16:57:52.162 2 DEBUG os_brick.initiator.connectors.lightos [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 12:57:52 np0005464891 nova_compute[259907]: 2025-10-01 16:57:52.163 2 DEBUG os_brick.utils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] <== get_connector_properties: return (75ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 12:57:52 np0005464891 nova_compute[259907]: 2025-10-01 16:57:52.163 2 DEBUG nova.virt.block_device [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Updating existing volume attachment record: b86f4be3-6459-4b95-8133-20356d733700 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 12:57:52 np0005464891 nova_compute[259907]: 2025-10-01 16:57:52.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:57:52 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2920548111' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:57:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1736: 321 pgs: 321 active+clean; 167 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 5.9 KiB/s wr, 80 op/s
Oct  1 12:57:53 np0005464891 serene_feistel[296332]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:57:53 np0005464891 serene_feistel[296332]: --> relative data size: 1.0
Oct  1 12:57:53 np0005464891 serene_feistel[296332]: --> All data devices are unavailable
Oct  1 12:57:53 np0005464891 systemd[1]: libpod-6ea0549873de97faa8382a08ea88d6d46f85ac62fc23658d5b5b0fcd4e082e40.scope: Deactivated successfully.
Oct  1 12:57:53 np0005464891 systemd[1]: libpod-6ea0549873de97faa8382a08ea88d6d46f85ac62fc23658d5b5b0fcd4e082e40.scope: Consumed 1.086s CPU time.
Oct  1 12:57:53 np0005464891 podman[296315]: 2025-10-01 16:57:53.248007328 +0000 UTC m=+1.324294803 container died 6ea0549873de97faa8382a08ea88d6d46f85ac62fc23658d5b5b0fcd4e082e40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:57:53 np0005464891 systemd[1]: var-lib-containers-storage-overlay-a99b23b469cea3ee14a06ff97d1d1d6f7a0445f088fab1173b699e2eb3271a9a-merged.mount: Deactivated successfully.
Oct  1 12:57:53 np0005464891 podman[296315]: 2025-10-01 16:57:53.382640006 +0000 UTC m=+1.458927521 container remove 6ea0549873de97faa8382a08ea88d6d46f85ac62fc23658d5b5b0fcd4e082e40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct  1 12:57:53 np0005464891 systemd[1]: libpod-conmon-6ea0549873de97faa8382a08ea88d6d46f85ac62fc23658d5b5b0fcd4e082e40.scope: Deactivated successfully.
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.531 2 DEBUG nova.network.neutron [req-eae00f80-94ac-4bb5-984e-ca278f9db60b req-6613c78b-c08f-47e2-a58f-b265276401e5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Updated VIF entry in instance network info cache for port f38329c7-0a79-480d-86b8-0cdc29deda98. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.533 2 DEBUG nova.network.neutron [req-eae00f80-94ac-4bb5-984e-ca278f9db60b req-6613c78b-c08f-47e2-a58f-b265276401e5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Updating instance_info_cache with network_info: [{"id": "f38329c7-0a79-480d-86b8-0cdc29deda98", "address": "fa:16:3e:5d:5d:50", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf38329c7-0a", "ovs_interfaceid": "f38329c7-0a79-480d-86b8-0cdc29deda98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.550 2 DEBUG nova.compute.manager [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.554 2 DEBUG nova.virt.libvirt.driver [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.555 2 INFO nova.virt.libvirt.driver [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Creating image(s)#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.555 2 DEBUG nova.virt.libvirt.driver [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.556 2 DEBUG nova.virt.libvirt.driver [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Ensure instance console log exists: /var/lib/nova/instances/f5c6a668-6fa1-4a25-974c-0395fc52bf1b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.556 2 DEBUG oslo_concurrency.lockutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.557 2 DEBUG oslo_concurrency.lockutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.557 2 DEBUG oslo_concurrency.lockutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.561 2 DEBUG nova.virt.libvirt.driver [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Start _get_guest_xml network_info=[{"id": "f38329c7-0a79-480d-86b8-0cdc29deda98", "address": "fa:16:3e:5d:5d:50", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf38329c7-0a", "ovs_interfaceid": "f38329c7-0a79-480d-86b8-0cdc29deda98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-10-01T16:57:39Z,direct_url=<?>,disk_format='qcow2',id=581008f4-25c1-47a9-a575-c3b8fd62331a,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-2012122874',owner='8318b65fa88942a99937a0d198a04a9c',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-10-01T16:57:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'mount_device': '/dev/vda', 'attachment_id': 'b86f4be3-6459-4b95-8133-20356d733700', 'disk_bus': 'virtio', 'delete_on_termination': True, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-39ef8826-321b-487e-8079-cce20f84e21a', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '39ef8826-321b-487e-8079-cce20f84e21a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'f5c6a668-6fa1-4a25-974c-0395fc52bf1b', 'attached_at': '', 'detached_at': '', 'volume_id': '39ef8826-321b-487e-8079-cce20f84e21a', 'serial': '39ef8826-321b-487e-8079-cce20f84e21a'}, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.569 2 DEBUG oslo_concurrency.lockutils [req-eae00f80-94ac-4bb5-984e-ca278f9db60b req-6613c78b-c08f-47e2-a58f-b265276401e5 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-f5c6a668-6fa1-4a25-974c-0395fc52bf1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.571 2 WARNING nova.virt.libvirt.driver [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.577 2 DEBUG nova.virt.libvirt.host [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.579 2 DEBUG nova.virt.libvirt.host [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.585 2 DEBUG nova.virt.libvirt.host [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.586 2 DEBUG nova.virt.libvirt.host [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.587 2 DEBUG nova.virt.libvirt.driver [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.587 2 DEBUG nova.virt.hardware [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-10-01T16:57:39Z,direct_url=<?>,disk_format='qcow2',id=581008f4-25c1-47a9-a575-c3b8fd62331a,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-2012122874',owner='8318b65fa88942a99937a0d198a04a9c',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-10-01T16:57:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.587 2 DEBUG nova.virt.hardware [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.588 2 DEBUG nova.virt.hardware [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.588 2 DEBUG nova.virt.hardware [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.588 2 DEBUG nova.virt.hardware [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.588 2 DEBUG nova.virt.hardware [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.588 2 DEBUG nova.virt.hardware [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.589 2 DEBUG nova.virt.hardware [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.589 2 DEBUG nova.virt.hardware [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.589 2 DEBUG nova.virt.hardware [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.589 2 DEBUG nova.virt.hardware [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.625 2 DEBUG nova.storage.rbd_utils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] rbd image f5c6a668-6fa1-4a25-974c-0395fc52bf1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:57:53 np0005464891 nova_compute[259907]: 2025-10-01 16:57:53.629 2 DEBUG oslo_concurrency.processutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:57:54 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2473849097' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:57:54 np0005464891 podman[296558]: 2025-10-01 16:57:54.092285049 +0000 UTC m=+0.050709840 container create 9530c8b660b11f00cda684e2a8a8e464ee06de8a16b72371b25c8bb84a0afde0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kowalevski, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.109 2 DEBUG oslo_concurrency.processutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:57:54 np0005464891 systemd[1]: Started libpod-conmon-9530c8b660b11f00cda684e2a8a8e464ee06de8a16b72371b25c8bb84a0afde0.scope.
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.140 2 DEBUG nova.virt.libvirt.vif [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:57:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-1759564802',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-1759564802',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-1759564802',id=20,image_ref='581008f4-25c1-47a9-a575-c3b8fd62331a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK2Aa4XN53urF/mn2LfmOREsLi3DcmZBjmrB/iRLAeVu3VEZvhSi7RK4LeMBlzZ8HZFvWV0aV3VGttYChu1q08d3Ir/3+FVcTBmbKIN0Dtco4ir+PNGiDzfRAUuEZ2kizQ==',key_name='tempest-keypair-1828679444',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8318b65fa88942a99937a0d198a04a9c',ramdisk_id='',reservation_id='r-g9okt0pe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-582136054',image_owner_user_name='tempest-TestVolumeBootPattern-582136054-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-582136054',owner_user_name='tempest-TestVolumeBootPattern-582136054-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:57:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1280014cdfb74333ae8d71c78116e646',uuid=f5c6a668-6fa1-4a25-974c-0395fc52bf1b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') 
vif={"id": "f38329c7-0a79-480d-86b8-0cdc29deda98", "address": "fa:16:3e:5d:5d:50", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf38329c7-0a", "ovs_interfaceid": "f38329c7-0a79-480d-86b8-0cdc29deda98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.140 2 DEBUG nova.network.os_vif_util [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converting VIF {"id": "f38329c7-0a79-480d-86b8-0cdc29deda98", "address": "fa:16:3e:5d:5d:50", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf38329c7-0a", "ovs_interfaceid": "f38329c7-0a79-480d-86b8-0cdc29deda98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.142 2 DEBUG nova.network.os_vif_util [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:5d:50,bridge_name='br-int',has_traffic_filtering=True,id=f38329c7-0a79-480d-86b8-0cdc29deda98,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf38329c7-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.143 2 DEBUG nova.objects.instance [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lazy-loading 'pci_devices' on Instance uuid f5c6a668-6fa1-4a25-974c-0395fc52bf1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.159 2 DEBUG nova.virt.libvirt.driver [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] End _get_guest_xml xml=<domain type="kvm">
Oct  1 12:57:54 np0005464891 nova_compute[259907]:  <uuid>f5c6a668-6fa1-4a25-974c-0395fc52bf1b</uuid>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:  <name>instance-00000014</name>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <nova:name>tempest-TestVolumeBootPattern-image-snapshot-server-1759564802</nova:name>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 16:57:53</nova:creationTime>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 12:57:54 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:        <nova:user uuid="1280014cdfb74333ae8d71c78116e646">tempest-TestVolumeBootPattern-582136054-project-member</nova:user>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:        <nova:project uuid="8318b65fa88942a99937a0d198a04a9c">tempest-TestVolumeBootPattern-582136054</nova:project>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <nova:root type="image" uuid="581008f4-25c1-47a9-a575-c3b8fd62331a"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:        <nova:port uuid="f38329c7-0a79-480d-86b8-0cdc29deda98">
Oct  1 12:57:54 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <system>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <entry name="serial">f5c6a668-6fa1-4a25-974c-0395fc52bf1b</entry>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <entry name="uuid">f5c6a668-6fa1-4a25-974c-0395fc52bf1b</entry>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    </system>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:  <os>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:  </clock>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/f5c6a668-6fa1-4a25-974c-0395fc52bf1b_disk.config">
Oct  1 12:57:54 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:57:54 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="volumes/volume-39ef8826-321b-487e-8079-cce20f84e21a">
Oct  1 12:57:54 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:57:54 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <serial>39ef8826-321b-487e-8079-cce20f84e21a</serial>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:5d:5d:50"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <target dev="tapf38329c7-0a"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/f5c6a668-6fa1-4a25-974c-0395fc52bf1b/console.log" append="off"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    </serial>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <video>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <input type="keyboard" bus="usb"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 12:57:54 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 12:57:54 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:57:54 np0005464891 nova_compute[259907]: </domain>
Oct  1 12:57:54 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.159 2 DEBUG nova.compute.manager [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Preparing to wait for external event network-vif-plugged-f38329c7-0a79-480d-86b8-0cdc29deda98 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.160 2 DEBUG oslo_concurrency.lockutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "f5c6a668-6fa1-4a25-974c-0395fc52bf1b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.160 2 DEBUG oslo_concurrency.lockutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "f5c6a668-6fa1-4a25-974c-0395fc52bf1b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.160 2 DEBUG oslo_concurrency.lockutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "f5c6a668-6fa1-4a25-974c-0395fc52bf1b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.161 2 DEBUG nova.virt.libvirt.vif [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:57:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-1759564802',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-1759564802',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-1759564802',id=20,image_ref='581008f4-25c1-47a9-a575-c3b8fd62331a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK2Aa4XN53urF/mn2LfmOREsLi3DcmZBjmrB/iRLAeVu3VEZvhSi7RK4LeMBlzZ8HZFvWV0aV3VGttYChu1q08d3Ir/3+FVcTBmbKIN0Dtco4ir+PNGiDzfRAUuEZ2kizQ==',key_name='tempest-keypair-1828679444',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8318b65fa88942a99937a0d198a04a9c',ramdisk_id='',reservation_id='r-g9okt0pe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-582136054',image_owner_user_name='tempest-TestVolumeBootPattern-582136054-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-582136054',owner_user_name='tempest-TestVolumeBootPattern-582136054-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:57:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1280014cdfb74333ae8d71c78116e646',uuid=f5c6a668-6fa1-4a25-974c-0395fc52bf1b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='bui
lding') vif={"id": "f38329c7-0a79-480d-86b8-0cdc29deda98", "address": "fa:16:3e:5d:5d:50", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf38329c7-0a", "ovs_interfaceid": "f38329c7-0a79-480d-86b8-0cdc29deda98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.161 2 DEBUG nova.network.os_vif_util [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converting VIF {"id": "f38329c7-0a79-480d-86b8-0cdc29deda98", "address": "fa:16:3e:5d:5d:50", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf38329c7-0a", "ovs_interfaceid": "f38329c7-0a79-480d-86b8-0cdc29deda98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.162 2 DEBUG nova.network.os_vif_util [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:5d:50,bridge_name='br-int',has_traffic_filtering=True,id=f38329c7-0a79-480d-86b8-0cdc29deda98,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf38329c7-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:57:54 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.162 2 DEBUG os_vif [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:5d:50,bridge_name='br-int',has_traffic_filtering=True,id=f38329c7-0a79-480d-86b8-0cdc29deda98,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf38329c7-0a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.164 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.164 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:57:54 np0005464891 podman[296558]: 2025-10-01 16:57:54.074746758 +0000 UTC m=+0.033171579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.168 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf38329c7-0a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.169 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf38329c7-0a, col_values=(('external_ids', {'iface-id': 'f38329c7-0a79-480d-86b8-0cdc29deda98', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:5d:50', 'vm-uuid': 'f5c6a668-6fa1-4a25-974c-0395fc52bf1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:54 np0005464891 NetworkManager[44940]: <info>  [1759337874.1719] manager: (tapf38329c7-0a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.179 2 INFO os_vif [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:5d:50,bridge_name='br-int',has_traffic_filtering=True,id=f38329c7-0a79-480d-86b8-0cdc29deda98,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf38329c7-0a')#033[00m
Oct  1 12:57:54 np0005464891 podman[296558]: 2025-10-01 16:57:54.183290712 +0000 UTC m=+0.141715533 container init 9530c8b660b11f00cda684e2a8a8e464ee06de8a16b72371b25c8bb84a0afde0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 12:57:54 np0005464891 podman[296558]: 2025-10-01 16:57:54.19049387 +0000 UTC m=+0.148918661 container start 9530c8b660b11f00cda684e2a8a8e464ee06de8a16b72371b25c8bb84a0afde0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kowalevski, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  1 12:57:54 np0005464891 podman[296558]: 2025-10-01 16:57:54.193885542 +0000 UTC m=+0.152310363 container attach 9530c8b660b11f00cda684e2a8a8e464ee06de8a16b72371b25c8bb84a0afde0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kowalevski, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 12:57:54 np0005464891 musing_kowalevski[296577]: 167 167
Oct  1 12:57:54 np0005464891 systemd[1]: libpod-9530c8b660b11f00cda684e2a8a8e464ee06de8a16b72371b25c8bb84a0afde0.scope: Deactivated successfully.
Oct  1 12:57:54 np0005464891 podman[296558]: 2025-10-01 16:57:54.199833155 +0000 UTC m=+0.158257946 container died 9530c8b660b11f00cda684e2a8a8e464ee06de8a16b72371b25c8bb84a0afde0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kowalevski, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:57:54 np0005464891 systemd[1]: var-lib-containers-storage-overlay-560f7f223998904b8fd87ed4d84c3547759c6f743e66b4e0f1df47b8af99929d-merged.mount: Deactivated successfully.
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.232 2 DEBUG nova.virt.libvirt.driver [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.233 2 DEBUG nova.virt.libvirt.driver [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.233 2 DEBUG nova.virt.libvirt.driver [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] No VIF found with MAC fa:16:3e:5d:5d:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.234 2 INFO nova.virt.libvirt.driver [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Using config drive#033[00m
Oct  1 12:57:54 np0005464891 podman[296558]: 2025-10-01 16:57:54.23613561 +0000 UTC m=+0.194560401 container remove 9530c8b660b11f00cda684e2a8a8e464ee06de8a16b72371b25c8bb84a0afde0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 12:57:54 np0005464891 systemd[1]: libpod-conmon-9530c8b660b11f00cda684e2a8a8e464ee06de8a16b72371b25c8bb84a0afde0.scope: Deactivated successfully.
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.258 2 DEBUG nova.storage.rbd_utils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] rbd image f5c6a668-6fa1-4a25-974c-0395fc52bf1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:57:54 np0005464891 podman[296622]: 2025-10-01 16:57:54.474685475 +0000 UTC m=+0.065567917 container create 01b10990cf6309b7426b5941683231319e2e26df01bb7e0f7079ba1efc5d394c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:57:54 np0005464891 systemd[1]: Started libpod-conmon-01b10990cf6309b7426b5941683231319e2e26df01bb7e0f7079ba1efc5d394c.scope.
Oct  1 12:57:54 np0005464891 podman[296622]: 2025-10-01 16:57:54.450822532 +0000 UTC m=+0.041705064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:57:54 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:57:54 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3b4ce71e25d14d45fa64659e12cfe80a36331b27fae5b30264a43181eb2e87f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:57:54 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3b4ce71e25d14d45fa64659e12cfe80a36331b27fae5b30264a43181eb2e87f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:57:54 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3b4ce71e25d14d45fa64659e12cfe80a36331b27fae5b30264a43181eb2e87f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:57:54 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3b4ce71e25d14d45fa64659e12cfe80a36331b27fae5b30264a43181eb2e87f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:57:54 np0005464891 podman[296622]: 2025-10-01 16:57:54.574086839 +0000 UTC m=+0.164969331 container init 01b10990cf6309b7426b5941683231319e2e26df01bb7e0f7079ba1efc5d394c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tharp, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 12:57:54 np0005464891 podman[296622]: 2025-10-01 16:57:54.582866529 +0000 UTC m=+0.173748971 container start 01b10990cf6309b7426b5941683231319e2e26df01bb7e0f7079ba1efc5d394c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Oct  1 12:57:54 np0005464891 podman[296622]: 2025-10-01 16:57:54.58836074 +0000 UTC m=+0.179243192 container attach 01b10990cf6309b7426b5941683231319e2e26df01bb7e0f7079ba1efc5d394c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tharp, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.618 2 INFO nova.virt.libvirt.driver [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Creating config drive at /var/lib/nova/instances/f5c6a668-6fa1-4a25-974c-0395fc52bf1b/disk.config#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.630 2 DEBUG oslo_concurrency.processutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f5c6a668-6fa1-4a25-974c-0395fc52bf1b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyjywpqrp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.762 2 DEBUG oslo_concurrency.processutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f5c6a668-6fa1-4a25-974c-0395fc52bf1b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyjywpqrp" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.799 2 DEBUG nova.storage.rbd_utils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] rbd image f5c6a668-6fa1-4a25-974c-0395fc52bf1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.803 2 DEBUG oslo_concurrency.processutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f5c6a668-6fa1-4a25-974c-0395fc52bf1b/disk.config f5c6a668-6fa1-4a25-974c-0395fc52bf1b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.969 2 DEBUG oslo_concurrency.processutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f5c6a668-6fa1-4a25-974c-0395fc52bf1b/disk.config f5c6a668-6fa1-4a25-974c-0395fc52bf1b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:57:54 np0005464891 nova_compute[259907]: 2025-10-01 16:57:54.971 2 INFO nova.virt.libvirt.driver [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Deleting local config drive /var/lib/nova/instances/f5c6a668-6fa1-4a25-974c-0395fc52bf1b/disk.config because it was imported into RBD.#033[00m
Oct  1 12:57:55 np0005464891 kernel: tapf38329c7-0a: entered promiscuous mode
Oct  1 12:57:55 np0005464891 NetworkManager[44940]: <info>  [1759337875.0331] manager: (tapf38329c7-0a): new Tun device (/org/freedesktop/NetworkManager/Devices/110)
Oct  1 12:57:55 np0005464891 ovn_controller[152409]: 2025-10-01T16:57:55Z|00185|binding|INFO|Claiming lport f38329c7-0a79-480d-86b8-0cdc29deda98 for this chassis.
Oct  1 12:57:55 np0005464891 ovn_controller[152409]: 2025-10-01T16:57:55Z|00186|binding|INFO|f38329c7-0a79-480d-86b8-0cdc29deda98: Claiming fa:16:3e:5d:5d:50 10.100.0.9
Oct  1 12:57:55 np0005464891 nova_compute[259907]: 2025-10-01 16:57:55.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:55.045 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:5d:50 10.100.0.9'], port_security=['fa:16:3e:5d:5d:50 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'f5c6a668-6fa1-4a25-974c-0395fc52bf1b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce1e1062-6685-441b-8278-667224375e38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8318b65fa88942a99937a0d198a04a9c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4798687d-057e-464a-b213-d922e99d4dec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4cdcf07-f310-4572-944c-43bd8e74f763, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=f38329c7-0a79-480d-86b8-0cdc29deda98) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:57:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:55.047 162546 INFO neutron.agent.ovn.metadata.agent [-] Port f38329c7-0a79-480d-86b8-0cdc29deda98 in datapath ce1e1062-6685-441b-8278-667224375e38 bound to our chassis#033[00m
Oct  1 12:57:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:55.052 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce1e1062-6685-441b-8278-667224375e38#033[00m
Oct  1 12:57:55 np0005464891 nova_compute[259907]: 2025-10-01 16:57:55.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:55 np0005464891 ovn_controller[152409]: 2025-10-01T16:57:55Z|00187|binding|INFO|Setting lport f38329c7-0a79-480d-86b8-0cdc29deda98 ovn-installed in OVS
Oct  1 12:57:55 np0005464891 ovn_controller[152409]: 2025-10-01T16:57:55Z|00188|binding|INFO|Setting lport f38329c7-0a79-480d-86b8-0cdc29deda98 up in Southbound
Oct  1 12:57:55 np0005464891 nova_compute[259907]: 2025-10-01 16:57:55.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:55 np0005464891 nova_compute[259907]: 2025-10-01 16:57:55.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:55.069 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[b0cd68bf-4810-4b69-945c-67a1d117ca59]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:55 np0005464891 systemd-udevd[296696]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:57:55 np0005464891 systemd-machined[214891]: New machine qemu-20-instance-00000014.
Oct  1 12:57:55 np0005464891 NetworkManager[44940]: <info>  [1759337875.0923] device (tapf38329c7-0a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:57:55 np0005464891 NetworkManager[44940]: <info>  [1759337875.0936] device (tapf38329c7-0a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 12:57:55 np0005464891 systemd[1]: Started Virtual Machine qemu-20-instance-00000014.
Oct  1 12:57:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:55.111 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[2609b14c-c00c-486a-bf74-17ad6f2a27e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:55.116 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec277d9-3b5d-4a85-b651-d90f7d78be4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:55.147 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[bae2d29f-3a38-4a6a-a521-fa75e32f9acb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:55.165 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[5badd4e7-1a7b-4dc4-ab09-b2753177bd72]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce1e1062-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:87:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478572, 'reachable_time': 34368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296709, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:55.182 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[f9d2c57c-46eb-4e68-869a-a208d2c98238]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce1e1062-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 478584, 'tstamp': 478584}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296711, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce1e1062-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 478587, 'tstamp': 478587}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296711, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:57:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1737: 321 pgs: 321 active+clean; 167 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 20 KiB/s wr, 79 op/s
Oct  1 12:57:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:55.184 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce1e1062-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:57:55 np0005464891 nova_compute[259907]: 2025-10-01 16:57:55.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:55 np0005464891 nova_compute[259907]: 2025-10-01 16:57:55.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:55.187 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce1e1062-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:57:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:55.188 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:57:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:55.188 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce1e1062-60, col_values=(('external_ids', {'iface-id': 'd971881d-8d8b-44dc-b0b0-4cd0065c0105'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:57:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:55.188 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:57:55 np0005464891 determined_tharp[296639]: {
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:    "0": [
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:        {
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "devices": [
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "/dev/loop3"
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            ],
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "lv_name": "ceph_lv0",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "lv_size": "21470642176",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "name": "ceph_lv0",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "tags": {
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.cluster_name": "ceph",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.crush_device_class": "",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.encrypted": "0",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.osd_id": "0",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.type": "block",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.vdo": "0"
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            },
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "type": "block",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "vg_name": "ceph_vg0"
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:        }
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:    ],
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:    "1": [
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:        {
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "devices": [
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "/dev/loop4"
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            ],
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "lv_name": "ceph_lv1",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "lv_size": "21470642176",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "name": "ceph_lv1",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "tags": {
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.cluster_name": "ceph",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.crush_device_class": "",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.encrypted": "0",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.osd_id": "1",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.type": "block",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.vdo": "0"
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            },
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "type": "block",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "vg_name": "ceph_vg1"
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:        }
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:    ],
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:    "2": [
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:        {
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "devices": [
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "/dev/loop5"
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            ],
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "lv_name": "ceph_lv2",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "lv_size": "21470642176",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "name": "ceph_lv2",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "tags": {
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.cluster_name": "ceph",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.crush_device_class": "",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.encrypted": "0",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.osd_id": "2",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.type": "block",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:                "ceph.vdo": "0"
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            },
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "type": "block",
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:            "vg_name": "ceph_vg2"
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:        }
Oct  1 12:57:55 np0005464891 determined_tharp[296639]:    ]
Oct  1 12:57:55 np0005464891 determined_tharp[296639]: }
Oct  1 12:57:55 np0005464891 systemd[1]: libpod-01b10990cf6309b7426b5941683231319e2e26df01bb7e0f7079ba1efc5d394c.scope: Deactivated successfully.
Oct  1 12:57:55 np0005464891 podman[296752]: 2025-10-01 16:57:55.424656741 +0000 UTC m=+0.034084035 container died 01b10990cf6309b7426b5941683231319e2e26df01bb7e0f7079ba1efc5d394c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tharp, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:57:55 np0005464891 systemd[1]: var-lib-containers-storage-overlay-e3b4ce71e25d14d45fa64659e12cfe80a36331b27fae5b30264a43181eb2e87f-merged.mount: Deactivated successfully.
Oct  1 12:57:55 np0005464891 podman[296752]: 2025-10-01 16:57:55.487524214 +0000 UTC m=+0.096951488 container remove 01b10990cf6309b7426b5941683231319e2e26df01bb7e0f7079ba1efc5d394c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tharp, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 12:57:55 np0005464891 systemd[1]: libpod-conmon-01b10990cf6309b7426b5941683231319e2e26df01bb7e0f7079ba1efc5d394c.scope: Deactivated successfully.
Oct  1 12:57:55 np0005464891 nova_compute[259907]: 2025-10-01 16:57:55.887 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337875.8871498, f5c6a668-6fa1-4a25-974c-0395fc52bf1b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:57:55 np0005464891 nova_compute[259907]: 2025-10-01 16:57:55.889 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] VM Started (Lifecycle Event)#033[00m
Oct  1 12:57:55 np0005464891 nova_compute[259907]: 2025-10-01 16:57:55.910 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:57:55 np0005464891 nova_compute[259907]: 2025-10-01 16:57:55.915 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337875.8873327, f5c6a668-6fa1-4a25-974c-0395fc52bf1b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:57:55 np0005464891 nova_compute[259907]: 2025-10-01 16:57:55.915 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] VM Paused (Lifecycle Event)#033[00m
Oct  1 12:57:55 np0005464891 nova_compute[259907]: 2025-10-01 16:57:55.936 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:57:55 np0005464891 nova_compute[259907]: 2025-10-01 16:57:55.940 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:57:55 np0005464891 nova_compute[259907]: 2025-10-01 16:57:55.958 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:57:56 np0005464891 podman[296913]: 2025-10-01 16:57:56.114236503 +0000 UTC m=+0.043929764 container create f5bb98390110f08e7b6d1b8ea3400f2cce4c16dfdc5f2c325ee1ae4bd0583475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 12:57:56 np0005464891 systemd[1]: Started libpod-conmon-f5bb98390110f08e7b6d1b8ea3400f2cce4c16dfdc5f2c325ee1ae4bd0583475.scope.
Oct  1 12:57:56 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:57:56 np0005464891 podman[296913]: 2025-10-01 16:57:56.098819181 +0000 UTC m=+0.028512462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:57:56 np0005464891 podman[296913]: 2025-10-01 16:57:56.213063751 +0000 UTC m=+0.142757062 container init f5bb98390110f08e7b6d1b8ea3400f2cce4c16dfdc5f2c325ee1ae4bd0583475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_grothendieck, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 12:57:56 np0005464891 podman[296913]: 2025-10-01 16:57:56.226011746 +0000 UTC m=+0.155705017 container start f5bb98390110f08e7b6d1b8ea3400f2cce4c16dfdc5f2c325ee1ae4bd0583475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_grothendieck, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:57:56 np0005464891 podman[296913]: 2025-10-01 16:57:56.229952974 +0000 UTC m=+0.159646235 container attach f5bb98390110f08e7b6d1b8ea3400f2cce4c16dfdc5f2c325ee1ae4bd0583475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  1 12:57:56 np0005464891 systemd[1]: libpod-f5bb98390110f08e7b6d1b8ea3400f2cce4c16dfdc5f2c325ee1ae4bd0583475.scope: Deactivated successfully.
Oct  1 12:57:56 np0005464891 sleepy_grothendieck[296929]: 167 167
Oct  1 12:57:56 np0005464891 podman[296913]: 2025-10-01 16:57:56.233604024 +0000 UTC m=+0.163297355 container died f5bb98390110f08e7b6d1b8ea3400f2cce4c16dfdc5f2c325ee1ae4bd0583475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_grothendieck, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:57:56 np0005464891 systemd[1]: var-lib-containers-storage-overlay-cdaef2fdb6e9a35c8e7ab500c62db1b7319ae56ea436a4e8ef025eef2032af2e-merged.mount: Deactivated successfully.
Oct  1 12:57:56 np0005464891 podman[296913]: 2025-10-01 16:57:56.292786075 +0000 UTC m=+0.222479366 container remove f5bb98390110f08e7b6d1b8ea3400f2cce4c16dfdc5f2c325ee1ae4bd0583475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_grothendieck, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:57:56 np0005464891 systemd[1]: libpod-conmon-f5bb98390110f08e7b6d1b8ea3400f2cce4c16dfdc5f2c325ee1ae4bd0583475.scope: Deactivated successfully.
Oct  1 12:57:56 np0005464891 nova_compute[259907]: 2025-10-01 16:57:56.316 2 DEBUG nova.compute.manager [req-5f8c89a4-8ebb-43ec-bc94-7fa9d160e2c6 req-9a53bc01-3400-4a87-be21-08d1227eaafb af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Received event network-vif-plugged-f38329c7-0a79-480d-86b8-0cdc29deda98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:57:56 np0005464891 nova_compute[259907]: 2025-10-01 16:57:56.317 2 DEBUG oslo_concurrency.lockutils [req-5f8c89a4-8ebb-43ec-bc94-7fa9d160e2c6 req-9a53bc01-3400-4a87-be21-08d1227eaafb af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "f5c6a668-6fa1-4a25-974c-0395fc52bf1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:56 np0005464891 nova_compute[259907]: 2025-10-01 16:57:56.317 2 DEBUG oslo_concurrency.lockutils [req-5f8c89a4-8ebb-43ec-bc94-7fa9d160e2c6 req-9a53bc01-3400-4a87-be21-08d1227eaafb af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "f5c6a668-6fa1-4a25-974c-0395fc52bf1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:56 np0005464891 nova_compute[259907]: 2025-10-01 16:57:56.317 2 DEBUG oslo_concurrency.lockutils [req-5f8c89a4-8ebb-43ec-bc94-7fa9d160e2c6 req-9a53bc01-3400-4a87-be21-08d1227eaafb af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "f5c6a668-6fa1-4a25-974c-0395fc52bf1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:56 np0005464891 nova_compute[259907]: 2025-10-01 16:57:56.318 2 DEBUG nova.compute.manager [req-5f8c89a4-8ebb-43ec-bc94-7fa9d160e2c6 req-9a53bc01-3400-4a87-be21-08d1227eaafb af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Processing event network-vif-plugged-f38329c7-0a79-480d-86b8-0cdc29deda98 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 12:57:56 np0005464891 nova_compute[259907]: 2025-10-01 16:57:56.319 2 DEBUG nova.compute.manager [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 12:57:56 np0005464891 nova_compute[259907]: 2025-10-01 16:57:56.323 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337876.323574, f5c6a668-6fa1-4a25-974c-0395fc52bf1b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:57:56 np0005464891 nova_compute[259907]: 2025-10-01 16:57:56.324 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] VM Resumed (Lifecycle Event)#033[00m
Oct  1 12:57:56 np0005464891 nova_compute[259907]: 2025-10-01 16:57:56.326 2 DEBUG nova.virt.libvirt.driver [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 12:57:56 np0005464891 nova_compute[259907]: 2025-10-01 16:57:56.330 2 INFO nova.virt.libvirt.driver [-] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Instance spawned successfully.#033[00m
Oct  1 12:57:56 np0005464891 nova_compute[259907]: 2025-10-01 16:57:56.330 2 INFO nova.compute.manager [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Took 2.78 seconds to spawn the instance on the hypervisor.#033[00m
Oct  1 12:57:56 np0005464891 nova_compute[259907]: 2025-10-01 16:57:56.331 2 DEBUG nova.compute.manager [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:57:56 np0005464891 nova_compute[259907]: 2025-10-01 16:57:56.345 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:57:56 np0005464891 nova_compute[259907]: 2025-10-01 16:57:56.347 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:57:56 np0005464891 nova_compute[259907]: 2025-10-01 16:57:56.366 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:57:56 np0005464891 nova_compute[259907]: 2025-10-01 16:57:56.402 2 INFO nova.compute.manager [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Took 9.70 seconds to build instance.#033[00m
Oct  1 12:57:56 np0005464891 nova_compute[259907]: 2025-10-01 16:57:56.422 2 DEBUG oslo_concurrency.lockutils [None req-7190c186-d639-44fa-adb8-cd6ef4673a30 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "f5c6a668-6fa1-4a25-974c-0395fc52bf1b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:56 np0005464891 podman[296953]: 2025-10-01 16:57:56.50758058 +0000 UTC m=+0.060373145 container create fca5e2b0f8fbb9c06dee65aae3be1572959775342f3bae4d38eb1358aeb0c52d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_poitras, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 12:57:56 np0005464891 systemd[1]: Started libpod-conmon-fca5e2b0f8fbb9c06dee65aae3be1572959775342f3bae4d38eb1358aeb0c52d.scope.
Oct  1 12:57:56 np0005464891 podman[296953]: 2025-10-01 16:57:56.486391939 +0000 UTC m=+0.039184534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:57:56 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:57:56 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0904bdd6f76515b450f3f961a439b9687dc4a46424269abcb8d26b824587c8f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:57:56 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0904bdd6f76515b450f3f961a439b9687dc4a46424269abcb8d26b824587c8f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:57:56 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0904bdd6f76515b450f3f961a439b9687dc4a46424269abcb8d26b824587c8f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:57:56 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0904bdd6f76515b450f3f961a439b9687dc4a46424269abcb8d26b824587c8f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:57:56 np0005464891 podman[296953]: 2025-10-01 16:57:56.598602983 +0000 UTC m=+0.151395568 container init fca5e2b0f8fbb9c06dee65aae3be1572959775342f3bae4d38eb1358aeb0c52d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 12:57:56 np0005464891 podman[296953]: 2025-10-01 16:57:56.60431657 +0000 UTC m=+0.157109135 container start fca5e2b0f8fbb9c06dee65aae3be1572959775342f3bae4d38eb1358aeb0c52d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 12:57:56 np0005464891 podman[296953]: 2025-10-01 16:57:56.607583689 +0000 UTC m=+0.160376274 container attach fca5e2b0f8fbb9c06dee65aae3be1572959775342f3bae4d38eb1358aeb0c52d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:57:56 np0005464891 nova_compute[259907]: 2025-10-01 16:57:56.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:57:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1738: 321 pgs: 321 active+clean; 167 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 18 KiB/s wr, 52 op/s
Oct  1 12:57:57 np0005464891 affectionate_poitras[296969]: {
Oct  1 12:57:57 np0005464891 affectionate_poitras[296969]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:57:57 np0005464891 affectionate_poitras[296969]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:57:57 np0005464891 affectionate_poitras[296969]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:57:57 np0005464891 affectionate_poitras[296969]:        "osd_id": 2,
Oct  1 12:57:57 np0005464891 affectionate_poitras[296969]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:57:57 np0005464891 affectionate_poitras[296969]:        "type": "bluestore"
Oct  1 12:57:57 np0005464891 affectionate_poitras[296969]:    },
Oct  1 12:57:57 np0005464891 affectionate_poitras[296969]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:57:57 np0005464891 affectionate_poitras[296969]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:57:57 np0005464891 affectionate_poitras[296969]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:57:57 np0005464891 affectionate_poitras[296969]:        "osd_id": 0,
Oct  1 12:57:57 np0005464891 affectionate_poitras[296969]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:57:57 np0005464891 affectionate_poitras[296969]:        "type": "bluestore"
Oct  1 12:57:57 np0005464891 affectionate_poitras[296969]:    },
Oct  1 12:57:57 np0005464891 affectionate_poitras[296969]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:57:57 np0005464891 affectionate_poitras[296969]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:57:57 np0005464891 affectionate_poitras[296969]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:57:57 np0005464891 affectionate_poitras[296969]:        "osd_id": 1,
Oct  1 12:57:57 np0005464891 affectionate_poitras[296969]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:57:57 np0005464891 affectionate_poitras[296969]:        "type": "bluestore"
Oct  1 12:57:57 np0005464891 affectionate_poitras[296969]:    }
Oct  1 12:57:57 np0005464891 affectionate_poitras[296969]: }
Oct  1 12:57:57 np0005464891 systemd[1]: libpod-fca5e2b0f8fbb9c06dee65aae3be1572959775342f3bae4d38eb1358aeb0c52d.scope: Deactivated successfully.
Oct  1 12:57:57 np0005464891 podman[296953]: 2025-10-01 16:57:57.59020864 +0000 UTC m=+1.143001215 container died fca5e2b0f8fbb9c06dee65aae3be1572959775342f3bae4d38eb1358aeb0c52d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_poitras, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:57:57 np0005464891 systemd[1]: var-lib-containers-storage-overlay-0904bdd6f76515b450f3f961a439b9687dc4a46424269abcb8d26b824587c8f3-merged.mount: Deactivated successfully.
Oct  1 12:57:57 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:57:57.630 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:57:57 np0005464891 podman[296953]: 2025-10-01 16:57:57.655992623 +0000 UTC m=+1.208785188 container remove fca5e2b0f8fbb9c06dee65aae3be1572959775342f3bae4d38eb1358aeb0c52d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_poitras, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:57:57 np0005464891 systemd[1]: libpod-conmon-fca5e2b0f8fbb9c06dee65aae3be1572959775342f3bae4d38eb1358aeb0c52d.scope: Deactivated successfully.
Oct  1 12:57:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:57:57 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:57:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:57:57 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:57:57 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 7baf588c-8c13-4b32-9ec5-846c801289c3 does not exist
Oct  1 12:57:57 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev e02119a7-a593-4a9b-9897-2f8d6327064e does not exist
Oct  1 12:57:57 np0005464891 nova_compute[259907]: 2025-10-01 16:57:57.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:58 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:57:58 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:57:58 np0005464891 nova_compute[259907]: 2025-10-01 16:57:58.608 2 DEBUG nova.compute.manager [req-4da9f2fa-d88c-4a22-8b36-e5c7dc522d4d req-4cfb56a6-f69d-4468-83ca-bf5f31822d55 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Received event network-vif-plugged-f38329c7-0a79-480d-86b8-0cdc29deda98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:57:58 np0005464891 nova_compute[259907]: 2025-10-01 16:57:58.610 2 DEBUG oslo_concurrency.lockutils [req-4da9f2fa-d88c-4a22-8b36-e5c7dc522d4d req-4cfb56a6-f69d-4468-83ca-bf5f31822d55 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "f5c6a668-6fa1-4a25-974c-0395fc52bf1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:57:58 np0005464891 nova_compute[259907]: 2025-10-01 16:57:58.610 2 DEBUG oslo_concurrency.lockutils [req-4da9f2fa-d88c-4a22-8b36-e5c7dc522d4d req-4cfb56a6-f69d-4468-83ca-bf5f31822d55 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "f5c6a668-6fa1-4a25-974c-0395fc52bf1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:57:58 np0005464891 nova_compute[259907]: 2025-10-01 16:57:58.610 2 DEBUG oslo_concurrency.lockutils [req-4da9f2fa-d88c-4a22-8b36-e5c7dc522d4d req-4cfb56a6-f69d-4468-83ca-bf5f31822d55 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "f5c6a668-6fa1-4a25-974c-0395fc52bf1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:57:58 np0005464891 nova_compute[259907]: 2025-10-01 16:57:58.611 2 DEBUG nova.compute.manager [req-4da9f2fa-d88c-4a22-8b36-e5c7dc522d4d req-4cfb56a6-f69d-4468-83ca-bf5f31822d55 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] No waiting events found dispatching network-vif-plugged-f38329c7-0a79-480d-86b8-0cdc29deda98 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:57:58 np0005464891 nova_compute[259907]: 2025-10-01 16:57:58.611 2 WARNING nova.compute.manager [req-4da9f2fa-d88c-4a22-8b36-e5c7dc522d4d req-4cfb56a6-f69d-4468-83ca-bf5f31822d55 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Received unexpected event network-vif-plugged-f38329c7-0a79-480d-86b8-0cdc29deda98 for instance with vm_state active and task_state None.#033[00m
Oct  1 12:57:58 np0005464891 nova_compute[259907]: 2025-10-01 16:57:58.618 2 DEBUG nova.compute.manager [req-067e61f9-a9b8-461e-b1be-f4ab160ebfba req-493af5c8-205d-48b0-88c2-d724404e6ff2 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Received event network-changed-f38329c7-0a79-480d-86b8-0cdc29deda98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:57:58 np0005464891 nova_compute[259907]: 2025-10-01 16:57:58.618 2 DEBUG nova.compute.manager [req-067e61f9-a9b8-461e-b1be-f4ab160ebfba req-493af5c8-205d-48b0-88c2-d724404e6ff2 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Refreshing instance network info cache due to event network-changed-f38329c7-0a79-480d-86b8-0cdc29deda98. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:57:58 np0005464891 nova_compute[259907]: 2025-10-01 16:57:58.618 2 DEBUG oslo_concurrency.lockutils [req-067e61f9-a9b8-461e-b1be-f4ab160ebfba req-493af5c8-205d-48b0-88c2-d724404e6ff2 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-f5c6a668-6fa1-4a25-974c-0395fc52bf1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:57:58 np0005464891 nova_compute[259907]: 2025-10-01 16:57:58.619 2 DEBUG oslo_concurrency.lockutils [req-067e61f9-a9b8-461e-b1be-f4ab160ebfba req-493af5c8-205d-48b0-88c2-d724404e6ff2 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-f5c6a668-6fa1-4a25-974c-0395fc52bf1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:57:58 np0005464891 nova_compute[259907]: 2025-10-01 16:57:58.619 2 DEBUG nova.network.neutron [req-067e61f9-a9b8-461e-b1be-f4ab160ebfba req-493af5c8-205d-48b0-88c2-d724404e6ff2 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Refreshing network info cache for port f38329c7-0a79-480d-86b8-0cdc29deda98 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:57:59 np0005464891 nova_compute[259907]: 2025-10-01 16:57:59.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:57:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1739: 321 pgs: 321 active+clean; 167 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 335 KiB/s rd, 18 KiB/s wr, 65 op/s
Oct  1 12:57:59 np0005464891 nova_compute[259907]: 2025-10-01 16:57:59.224 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759337864.2227879, 03ad1fe8-a967-4d62-a904-ceda4729227a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:57:59 np0005464891 nova_compute[259907]: 2025-10-01 16:57:59.224 2 INFO nova.compute.manager [-] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] VM Stopped (Lifecycle Event)#033[00m
Oct  1 12:57:59 np0005464891 nova_compute[259907]: 2025-10-01 16:57:59.251 2 DEBUG nova.compute.manager [None req-109f9508-a95e-48d5-80d3-ad301ef91733 - - - - - -] [instance: 03ad1fe8-a967-4d62-a904-ceda4729227a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:58:00 np0005464891 nova_compute[259907]: 2025-10-01 16:58:00.694 2 DEBUG nova.network.neutron [req-067e61f9-a9b8-461e-b1be-f4ab160ebfba req-493af5c8-205d-48b0-88c2-d724404e6ff2 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Updated VIF entry in instance network info cache for port f38329c7-0a79-480d-86b8-0cdc29deda98. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:58:00 np0005464891 nova_compute[259907]: 2025-10-01 16:58:00.695 2 DEBUG nova.network.neutron [req-067e61f9-a9b8-461e-b1be-f4ab160ebfba req-493af5c8-205d-48b0-88c2-d724404e6ff2 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Updating instance_info_cache with network_info: [{"id": "f38329c7-0a79-480d-86b8-0cdc29deda98", "address": "fa:16:3e:5d:5d:50", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf38329c7-0a", "ovs_interfaceid": "f38329c7-0a79-480d-86b8-0cdc29deda98", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:58:00 np0005464891 nova_compute[259907]: 2025-10-01 16:58:00.721 2 DEBUG oslo_concurrency.lockutils [req-067e61f9-a9b8-461e-b1be-f4ab160ebfba req-493af5c8-205d-48b0-88c2-d724404e6ff2 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-f5c6a668-6fa1-4a25-974c-0395fc52bf1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:58:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1740: 321 pgs: 321 active+clean; 167 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 17 KiB/s wr, 119 op/s
Oct  1 12:58:01 np0005464891 nova_compute[259907]: 2025-10-01 16:58:01.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:58:02 np0005464891 nova_compute[259907]: 2025-10-01 16:58:02.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:03 np0005464891 podman[297064]: 2025-10-01 16:58:03.000123334 +0000 UTC m=+0.097031099 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 12:58:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1741: 321 pgs: 321 active+clean; 167 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 16 KiB/s wr, 102 op/s
Oct  1 12:58:04 np0005464891 nova_compute[259907]: 2025-10-01 16:58:04.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1742: 321 pgs: 321 active+clean; 168 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 18 KiB/s wr, 86 op/s
Oct  1 12:58:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e435 do_prune osdmap full prune enabled
Oct  1 12:58:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e436 e436: 3 total, 3 up, 3 in
Oct  1 12:58:05 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e436: 3 total, 3 up, 3 in
Oct  1 12:58:06 np0005464891 nova_compute[259907]: 2025-10-01 16:58:06.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:58:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1744: 321 pgs: 321 active+clean; 168 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.6 KiB/s wr, 97 op/s
Oct  1 12:58:09 np0005464891 nova_compute[259907]: 2025-10-01 16:58:09.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1745: 321 pgs: 321 active+clean; 173 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 208 KiB/s wr, 114 op/s
Oct  1 12:58:09 np0005464891 ovn_controller[152409]: 2025-10-01T16:58:09Z|00034|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.8 does not match offer 10.100.0.9
Oct  1 12:58:09 np0005464891 ovn_controller[152409]: 2025-10-01T16:58:09Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:5d:5d:50 10.100.0.9
Oct  1 12:58:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e436 do_prune osdmap full prune enabled
Oct  1 12:58:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e437 e437: 3 total, 3 up, 3 in
Oct  1 12:58:10 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e437: 3 total, 3 up, 3 in
Oct  1 12:58:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:58:10 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1958994745' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:58:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:58:10 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1958994745' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:58:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1747: 321 pgs: 321 active+clean; 182 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 779 KiB/s wr, 113 op/s
Oct  1 12:58:11 np0005464891 nova_compute[259907]: 2025-10-01 16:58:11.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:58:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:58:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:58:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:58:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:58:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:58:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:58:12
Oct  1 12:58:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:58:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:58:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['images', 'vms', '.rgw.root', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'backups']
Oct  1 12:58:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:58:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:58:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:58:12 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3335612020' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:58:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:58:12 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3335612020' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:58:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:12.460 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:58:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:12.461 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:58:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:12.461 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:58:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:58:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:58:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:58:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:58:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:58:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:58:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:58:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:58:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:58:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:58:12 np0005464891 ovn_controller[152409]: 2025-10-01T16:58:12Z|00036|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.8 does not match offer 10.100.0.9
Oct  1 12:58:12 np0005464891 ovn_controller[152409]: 2025-10-01T16:58:12Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:5d:5d:50 10.100.0.9
Oct  1 12:58:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1748: 321 pgs: 321 active+clean; 182 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 777 KiB/s wr, 196 op/s
Oct  1 12:58:13 np0005464891 podman[297083]: 2025-10-01 16:58:13.993155796 +0000 UTC m=+0.102191391 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  1 12:58:14 np0005464891 nova_compute[259907]: 2025-10-01 16:58:14.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:14 np0005464891 ovn_controller[152409]: 2025-10-01T16:58:14Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5d:5d:50 10.100.0.9
Oct  1 12:58:14 np0005464891 ovn_controller[152409]: 2025-10-01T16:58:14Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:5d:50 10.100.0.9
Oct  1 12:58:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1749: 321 pgs: 321 active+clean; 185 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 684 KiB/s wr, 166 op/s
Oct  1 12:58:16 np0005464891 nova_compute[259907]: 2025-10-01 16:58:16.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:58:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e437 do_prune osdmap full prune enabled
Oct  1 12:58:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e438 e438: 3 total, 3 up, 3 in
Oct  1 12:58:17 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e438: 3 total, 3 up, 3 in
Oct  1 12:58:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1751: 321 pgs: 321 active+clean; 185 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 591 KiB/s wr, 167 op/s
Oct  1 12:58:18 np0005464891 ovn_controller[152409]: 2025-10-01T16:58:18Z|00189|binding|INFO|Releasing lport d971881d-8d8b-44dc-b0b0-4cd0065c0105 from this chassis (sb_readonly=0)
Oct  1 12:58:18 np0005464891 podman[297109]: 2025-10-01 16:58:18.962651182 +0000 UTC m=+0.071358806 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  1 12:58:19 np0005464891 nova_compute[259907]: 2025-10-01 16:58:19.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:19 np0005464891 nova_compute[259907]: 2025-10-01 16:58:19.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1752: 321 pgs: 321 active+clean; 185 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 196 KiB/s rd, 65 KiB/s wr, 84 op/s
Oct  1 12:58:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1753: 321 pgs: 321 active+clean; 185 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 173 KiB/s rd, 72 KiB/s wr, 76 op/s
Oct  1 12:58:21 np0005464891 nova_compute[259907]: 2025-10-01 16:58:21.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:21 np0005464891 podman[297130]: 2025-10-01 16:58:21.967804454 +0000 UTC m=+0.069882876 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  1 12:58:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:58:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:58:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:58:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:58:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:58:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 5.65957299602787e-06 of space, bias 1.0, pg target 0.001697871898808361 quantized to 32 (current 32)
Oct  1 12:58:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:58:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.001215854333528684 of space, bias 1.0, pg target 0.3647563000586052 quantized to 32 (current 32)
Oct  1 12:58:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:58:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 12:58:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:58:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663670272514163 of space, bias 1.0, pg target 0.19991010817542487 quantized to 32 (current 32)
Oct  1 12:58:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:58:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:58:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:58:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:58:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:58:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:58:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:58:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:58:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:58:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:58:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:58:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:58:22 np0005464891 ovn_controller[152409]: 2025-10-01T16:58:22Z|00190|binding|INFO|Releasing lport d971881d-8d8b-44dc-b0b0-4cd0065c0105 from this chassis (sb_readonly=0)
Oct  1 12:58:23 np0005464891 nova_compute[259907]: 2025-10-01 16:58:23.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1754: 321 pgs: 321 active+clean; 219 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 3.5 MiB/s wr, 16 op/s
Oct  1 12:58:24 np0005464891 nova_compute[259907]: 2025-10-01 16:58:24.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1755: 321 pgs: 321 active+clean; 303 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 257 KiB/s rd, 11 MiB/s wr, 61 op/s
Oct  1 12:58:25 np0005464891 nova_compute[259907]: 2025-10-01 16:58:25.566 2 DEBUG oslo_concurrency.lockutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "dd2acd48-65e4-48e1-80ae-b7404cb6fc4e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:58:25 np0005464891 nova_compute[259907]: 2025-10-01 16:58:25.567 2 DEBUG oslo_concurrency.lockutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "dd2acd48-65e4-48e1-80ae-b7404cb6fc4e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:58:25 np0005464891 nova_compute[259907]: 2025-10-01 16:58:25.585 2 DEBUG nova.compute.manager [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 12:58:25 np0005464891 nova_compute[259907]: 2025-10-01 16:58:25.699 2 DEBUG oslo_concurrency.lockutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:58:25 np0005464891 nova_compute[259907]: 2025-10-01 16:58:25.700 2 DEBUG oslo_concurrency.lockutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:58:25 np0005464891 nova_compute[259907]: 2025-10-01 16:58:25.708 2 DEBUG nova.virt.hardware [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 12:58:25 np0005464891 nova_compute[259907]: 2025-10-01 16:58:25.708 2 INFO nova.compute.claims [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 12:58:25 np0005464891 nova_compute[259907]: 2025-10-01 16:58:25.850 2 DEBUG oslo_concurrency.processutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:58:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:58:26 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/712849975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.358 2 DEBUG oslo_concurrency.processutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.366 2 DEBUG nova.compute.provider_tree [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.401 2 DEBUG nova.scheduler.client.report [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.438 2 DEBUG oslo_concurrency.lockutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.439 2 DEBUG nova.compute.manager [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.495 2 DEBUG nova.compute.manager [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.495 2 DEBUG nova.network.neutron [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.512 2 INFO nova.virt.libvirt.driver [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.530 2 DEBUG nova.compute.manager [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.579 2 INFO nova.virt.block_device [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Booting with volume 1580863a-8a45-49dc-baa1-1fe7c2e3a74d at /dev/vda#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.777 2 DEBUG os_brick.utils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.778 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.791 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.792 741 DEBUG oslo.privsep.daemon [-] privsep: reply[df5c3cc5-d02c-4e79-a441-852353a59af9]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.794 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.804 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.805 741 DEBUG oslo.privsep.daemon [-] privsep: reply[1c3798a2-c0ca-4b3f-a5ad-f4346f0accfa]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.807 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.819 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.820 741 DEBUG oslo.privsep.daemon [-] privsep: reply[3d385ee0-c204-4fb5-a9f8-284e40ef6dc5]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.822 741 DEBUG oslo.privsep.daemon [-] privsep: reply[4d398927-7453-40fa-8fb7-af6bebecdb29]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.822 2 DEBUG oslo_concurrency.processutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.852 2 DEBUG oslo_concurrency.processutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.854 2 DEBUG os_brick.initiator.connectors.lightos [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.854 2 DEBUG os_brick.initiator.connectors.lightos [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.854 2 DEBUG os_brick.initiator.connectors.lightos [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.855 2 DEBUG os_brick.utils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] <== get_connector_properties: return (77ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.855 2 DEBUG nova.virt.block_device [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Updating existing volume attachment record: 9d05eb2a-7dac-4cef-bd61-d9ba682802e2 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 12:58:26 np0005464891 nova_compute[259907]: 2025-10-01 16:58:26.860 2 DEBUG nova.policy [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '906d3d29e27b49c1860f5397c6028d96', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bb5e44f7928546dfb674d53cd3727027', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 12:58:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:58:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1756: 321 pgs: 321 active+clean; 303 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 256 KiB/s rd, 11 MiB/s wr, 61 op/s
Oct  1 12:58:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:58:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4272104107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:58:28 np0005464891 nova_compute[259907]: 2025-10-01 16:58:28.582 2 DEBUG nova.network.neutron [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Successfully created port: df49da0f-d552-4921-b312-c9644f9430de _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 12:58:28 np0005464891 nova_compute[259907]: 2025-10-01 16:58:28.615 2 DEBUG nova.compute.manager [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 12:58:28 np0005464891 nova_compute[259907]: 2025-10-01 16:58:28.617 2 DEBUG nova.virt.libvirt.driver [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 12:58:28 np0005464891 nova_compute[259907]: 2025-10-01 16:58:28.618 2 INFO nova.virt.libvirt.driver [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Creating image(s)#033[00m
Oct  1 12:58:28 np0005464891 nova_compute[259907]: 2025-10-01 16:58:28.618 2 DEBUG nova.virt.libvirt.driver [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  1 12:58:28 np0005464891 nova_compute[259907]: 2025-10-01 16:58:28.619 2 DEBUG nova.virt.libvirt.driver [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Ensure instance console log exists: /var/lib/nova/instances/dd2acd48-65e4-48e1-80ae-b7404cb6fc4e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 12:58:28 np0005464891 nova_compute[259907]: 2025-10-01 16:58:28.619 2 DEBUG oslo_concurrency.lockutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:58:28 np0005464891 nova_compute[259907]: 2025-10-01 16:58:28.619 2 DEBUG oslo_concurrency.lockutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:58:28 np0005464891 nova_compute[259907]: 2025-10-01 16:58:28.620 2 DEBUG oslo_concurrency.lockutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:58:29 np0005464891 nova_compute[259907]: 2025-10-01 16:58:29.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1757: 321 pgs: 321 active+clean; 303 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 214 KiB/s rd, 9.6 MiB/s wr, 51 op/s
Oct  1 12:58:30 np0005464891 nova_compute[259907]: 2025-10-01 16:58:30.607 2 DEBUG nova.network.neutron [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Successfully updated port: df49da0f-d552-4921-b312-c9644f9430de _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 12:58:30 np0005464891 nova_compute[259907]: 2025-10-01 16:58:30.909 2 DEBUG oslo_concurrency.lockutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "refresh_cache-dd2acd48-65e4-48e1-80ae-b7404cb6fc4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:58:30 np0005464891 nova_compute[259907]: 2025-10-01 16:58:30.910 2 DEBUG oslo_concurrency.lockutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquired lock "refresh_cache-dd2acd48-65e4-48e1-80ae-b7404cb6fc4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:58:30 np0005464891 nova_compute[259907]: 2025-10-01 16:58:30.910 2 DEBUG nova.network.neutron [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 12:58:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1758: 321 pgs: 321 active+clean; 303 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 214 KiB/s rd, 9.6 MiB/s wr, 51 op/s
Oct  1 12:58:31 np0005464891 nova_compute[259907]: 2025-10-01 16:58:31.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:31 np0005464891 nova_compute[259907]: 2025-10-01 16:58:31.803 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:58:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:58:32 np0005464891 nova_compute[259907]: 2025-10-01 16:58:32.313 2 DEBUG nova.compute.manager [req-564626b4-d673-4ae7-ac50-c55ee2e2d670 req-4958ae69-bedb-4c5f-b1a7-59bf9ea8dfdd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Received event network-changed-df49da0f-d552-4921-b312-c9644f9430de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:58:32 np0005464891 nova_compute[259907]: 2025-10-01 16:58:32.313 2 DEBUG nova.compute.manager [req-564626b4-d673-4ae7-ac50-c55ee2e2d670 req-4958ae69-bedb-4c5f-b1a7-59bf9ea8dfdd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Refreshing instance network info cache due to event network-changed-df49da0f-d552-4921-b312-c9644f9430de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:58:32 np0005464891 nova_compute[259907]: 2025-10-01 16:58:32.314 2 DEBUG oslo_concurrency.lockutils [req-564626b4-d673-4ae7-ac50-c55ee2e2d670 req-4958ae69-bedb-4c5f-b1a7-59bf9ea8dfdd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-dd2acd48-65e4-48e1-80ae-b7404cb6fc4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:58:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1759: 321 pgs: 321 active+clean; 303 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 214 KiB/s rd, 9.5 MiB/s wr, 50 op/s
Oct  1 12:58:33 np0005464891 nova_compute[259907]: 2025-10-01 16:58:33.271 2 DEBUG nova.network.neutron [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 12:58:33 np0005464891 nova_compute[259907]: 2025-10-01 16:58:33.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:58:33 np0005464891 nova_compute[259907]: 2025-10-01 16:58:33.804 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 12:58:33 np0005464891 nova_compute[259907]: 2025-10-01 16:58:33.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:58:33 np0005464891 podman[297181]: 2025-10-01 16:58:33.939423867 +0000 UTC m=+0.056950992 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  1 12:58:33 np0005464891 nova_compute[259907]: 2025-10-01 16:58:33.932 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:58:33 np0005464891 nova_compute[259907]: 2025-10-01 16:58:33.933 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:58:33 np0005464891 nova_compute[259907]: 2025-10-01 16:58:33.933 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:58:33 np0005464891 nova_compute[259907]: 2025-10-01 16:58:33.934 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 12:58:33 np0005464891 nova_compute[259907]: 2025-10-01 16:58:33.934 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:58:34 np0005464891 nova_compute[259907]: 2025-10-01 16:58:34.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:58:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1123952880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:58:34 np0005464891 nova_compute[259907]: 2025-10-01 16:58:34.472 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:58:34 np0005464891 nova_compute[259907]: 2025-10-01 16:58:34.772 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:58:34 np0005464891 nova_compute[259907]: 2025-10-01 16:58:34.773 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:58:34 np0005464891 nova_compute[259907]: 2025-10-01 16:58:34.777 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:58:34 np0005464891 nova_compute[259907]: 2025-10-01 16:58:34.778 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:58:34 np0005464891 nova_compute[259907]: 2025-10-01 16:58:34.939 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:58:34 np0005464891 nova_compute[259907]: 2025-10-01 16:58:34.940 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4033MB free_disk=59.98794174194336GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 12:58:34 np0005464891 nova_compute[259907]: 2025-10-01 16:58:34.940 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:58:34 np0005464891 nova_compute[259907]: 2025-10-01 16:58:34.941 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.017 2 DEBUG oslo_concurrency.lockutils [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "f5c6a668-6fa1-4a25-974c-0395fc52bf1b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.018 2 DEBUG oslo_concurrency.lockutils [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "f5c6a668-6fa1-4a25-974c-0395fc52bf1b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.018 2 DEBUG oslo_concurrency.lockutils [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "f5c6a668-6fa1-4a25-974c-0395fc52bf1b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.018 2 DEBUG oslo_concurrency.lockutils [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "f5c6a668-6fa1-4a25-974c-0395fc52bf1b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.018 2 DEBUG oslo_concurrency.lockutils [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "f5c6a668-6fa1-4a25-974c-0395fc52bf1b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.019 2 INFO nova.compute.manager [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Terminating instance#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.020 2 DEBUG nova.compute.manager [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.124 2 DEBUG nova.network.neutron [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Updating instance_info_cache with network_info: [{"id": "df49da0f-d552-4921-b312-c9644f9430de", "address": "fa:16:3e:88:2a:ee", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf49da0f-d5", "ovs_interfaceid": "df49da0f-d552-4921-b312-c9644f9430de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:58:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1760: 321 pgs: 321 active+clean; 303 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 212 KiB/s rd, 6.7 MiB/s wr, 44 op/s
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.226 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance 47531108-4f20-41bd-8fb8-77fae3a30b85 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.226 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance f5c6a668-6fa1-4a25-974c-0395fc52bf1b actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.226 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance dd2acd48-65e4-48e1-80ae-b7404cb6fc4e actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.227 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.227 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.284 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.316 2 DEBUG oslo_concurrency.lockutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Releasing lock "refresh_cache-dd2acd48-65e4-48e1-80ae-b7404cb6fc4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.317 2 DEBUG nova.compute.manager [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Instance network_info: |[{"id": "df49da0f-d552-4921-b312-c9644f9430de", "address": "fa:16:3e:88:2a:ee", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf49da0f-d5", "ovs_interfaceid": "df49da0f-d552-4921-b312-c9644f9430de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.318 2 DEBUG oslo_concurrency.lockutils [req-564626b4-d673-4ae7-ac50-c55ee2e2d670 req-4958ae69-bedb-4c5f-b1a7-59bf9ea8dfdd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-dd2acd48-65e4-48e1-80ae-b7404cb6fc4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.319 2 DEBUG nova.network.neutron [req-564626b4-d673-4ae7-ac50-c55ee2e2d670 req-4958ae69-bedb-4c5f-b1a7-59bf9ea8dfdd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Refreshing network info cache for port df49da0f-d552-4921-b312-c9644f9430de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.324 2 DEBUG nova.virt.libvirt.driver [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Start _get_guest_xml network_info=[{"id": "df49da0f-d552-4921-b312-c9644f9430de", "address": "fa:16:3e:88:2a:ee", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf49da0f-d5", "ovs_interfaceid": "df49da0f-d552-4921-b312-c9644f9430de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'mount_device': '/dev/vda', 'attachment_id': '9d05eb2a-7dac-4cef-bd61-d9ba682802e2', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1580863a-8a45-49dc-baa1-1fe7c2e3a74d', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1580863a-8a45-49dc-baa1-1fe7c2e3a74d', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'dd2acd48-65e4-48e1-80ae-b7404cb6fc4e', 'attached_at': '', 'detached_at': '', 'volume_id': '1580863a-8a45-49dc-baa1-1fe7c2e3a74d', 'serial': '1580863a-8a45-49dc-baa1-1fe7c2e3a74d'}, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.329 2 WARNING nova.virt.libvirt.driver [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.335 2 DEBUG nova.virt.libvirt.host [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.335 2 DEBUG nova.virt.libvirt.host [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.337 2 DEBUG nova.virt.libvirt.host [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.338 2 DEBUG nova.virt.libvirt.host [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.338 2 DEBUG nova.virt.libvirt.driver [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.339 2 DEBUG nova.virt.hardware [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.339 2 DEBUG nova.virt.hardware [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.339 2 DEBUG nova.virt.hardware [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.340 2 DEBUG nova.virt.hardware [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.340 2 DEBUG nova.virt.hardware [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.340 2 DEBUG nova.virt.hardware [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.340 2 DEBUG nova.virt.hardware [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.341 2 DEBUG nova.virt.hardware [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.341 2 DEBUG nova.virt.hardware [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.341 2 DEBUG nova.virt.hardware [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.342 2 DEBUG nova.virt.hardware [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.470 2 DEBUG nova.storage.rbd_utils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] rbd image dd2acd48-65e4-48e1-80ae-b7404cb6fc4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.477 2 DEBUG oslo_concurrency.processutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:58:35 np0005464891 kernel: tapf38329c7-0a (unregistering): left promiscuous mode
Oct  1 12:58:35 np0005464891 NetworkManager[44940]: <info>  [1759337915.5488] device (tapf38329c7-0a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 12:58:35 np0005464891 ovn_controller[152409]: 2025-10-01T16:58:35Z|00191|binding|INFO|Releasing lport f38329c7-0a79-480d-86b8-0cdc29deda98 from this chassis (sb_readonly=0)
Oct  1 12:58:35 np0005464891 ovn_controller[152409]: 2025-10-01T16:58:35Z|00192|binding|INFO|Setting lport f38329c7-0a79-480d-86b8-0cdc29deda98 down in Southbound
Oct  1 12:58:35 np0005464891 ovn_controller[152409]: 2025-10-01T16:58:35Z|00193|binding|INFO|Removing iface tapf38329c7-0a ovn-installed in OVS
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:35.578 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:5d:50 10.100.0.9'], port_security=['fa:16:3e:5d:5d:50 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'f5c6a668-6fa1-4a25-974c-0395fc52bf1b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce1e1062-6685-441b-8278-667224375e38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8318b65fa88942a99937a0d198a04a9c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4798687d-057e-464a-b213-d922e99d4dec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.214'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4cdcf07-f310-4572-944c-43bd8e74f763, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=f38329c7-0a79-480d-86b8-0cdc29deda98) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:58:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:35.583 162546 INFO neutron.agent.ovn.metadata.agent [-] Port f38329c7-0a79-480d-86b8-0cdc29deda98 in datapath ce1e1062-6685-441b-8278-667224375e38 unbound from our chassis#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:35.587 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce1e1062-6685-441b-8278-667224375e38#033[00m
Oct  1 12:58:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:35.606 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[9ed4efdd-3c9a-44e1-b0c4-87b12a268bb2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:35 np0005464891 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Deactivated successfully.
Oct  1 12:58:35 np0005464891 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Consumed 14.693s CPU time.
Oct  1 12:58:35 np0005464891 systemd-machined[214891]: Machine qemu-20-instance-00000014 terminated.
Oct  1 12:58:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:35.646 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[8c1ca7b3-d893-4950-935a-f83d590f50d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:35.649 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[6e3b078c-4cde-4556-a862-684991f8ad62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.659 2 INFO nova.virt.libvirt.driver [-] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Instance destroyed successfully.#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.659 2 DEBUG nova.objects.instance [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lazy-loading 'resources' on Instance uuid f5c6a668-6fa1-4a25-974c-0395fc52bf1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:58:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:35.681 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[acfbb648-2e26-459c-a007-b8ee7382ab76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.686 2 DEBUG nova.virt.libvirt.vif [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T16:57:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-1759564802',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-1759564802',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-1759564802',id=20,image_ref='581008f4-25c1-47a9-a575-c3b8fd62331a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK2Aa4XN53urF/mn2LfmOREsLi3DcmZBjmrB/iRLAeVu3VEZvhSi7RK4LeMBlzZ8HZFvWV0aV3VGttYChu1q08d3Ir/3+FVcTBmbKIN0Dtco4ir+PNGiDzfRAUuEZ2kizQ==',key_name='tempest-keypair-1828679444',keypairs=<?>,launch_index=0,launched_at=2025-10-01T16:57:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8318b65fa88942a99937a0d198a04a9c',ramdisk_id='',reservation_id='r-g9okt0pe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-582136054',image_owner_user_name='tempest-TestVolumeBootPattern-582136054-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-582136054',owner_user_name='tempest-TestVolumeBootPattern-582136054-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T16:57:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1280014cdfb74333ae8d71c78116e646',uuid=f5c6a668-6fa1-4a25-974c-0395fc52bf1b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f38329c7-0a79-480d-86b8-0cdc29deda98", 
"address": "fa:16:3e:5d:5d:50", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf38329c7-0a", "ovs_interfaceid": "f38329c7-0a79-480d-86b8-0cdc29deda98", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.686 2 DEBUG nova.network.os_vif_util [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converting VIF {"id": "f38329c7-0a79-480d-86b8-0cdc29deda98", "address": "fa:16:3e:5d:5d:50", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf38329c7-0a", "ovs_interfaceid": "f38329c7-0a79-480d-86b8-0cdc29deda98", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.687 2 DEBUG nova.network.os_vif_util [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:5d:50,bridge_name='br-int',has_traffic_filtering=True,id=f38329c7-0a79-480d-86b8-0cdc29deda98,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf38329c7-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.687 2 DEBUG os_vif [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:5d:50,bridge_name='br-int',has_traffic_filtering=True,id=f38329c7-0a79-480d-86b8-0cdc29deda98,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf38329c7-0a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.689 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf38329c7-0a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.695 2 INFO os_vif [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:5d:50,bridge_name='br-int',has_traffic_filtering=True,id=f38329c7-0a79-480d-86b8-0cdc29deda98,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf38329c7-0a')#033[00m
Oct  1 12:58:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:35.701 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb43ffd-bdd8-485f-9d6e-771be951aa72]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce1e1062-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:87:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478572, 'reachable_time': 34368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297306, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:35.717 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[30733c17-b3cc-4f93-8bde-4a27e2c24c7c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce1e1062-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 478584, 'tstamp': 478584}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297314, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce1e1062-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 478587, 'tstamp': 478587}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297314, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:35.720 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce1e1062-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:35.723 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce1e1062-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:58:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:35.723 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:58:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:35.724 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce1e1062-60, col_values=(('external_ids', {'iface-id': 'd971881d-8d8b-44dc-b0b0-4cd0065c0105'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:58:35 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:35.724 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:58:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:58:35 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2569916523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.844 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.850 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.868 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.897 2 INFO nova.virt.libvirt.driver [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Deleting instance files /var/lib/nova/instances/f5c6a668-6fa1-4a25-974c-0395fc52bf1b_del#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.898 2 INFO nova.virt.libvirt.driver [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Deletion of /var/lib/nova/instances/f5c6a668-6fa1-4a25-974c-0395fc52bf1b_del complete#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.902 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.902 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.962s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.946 2 INFO nova.compute.manager [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.946 2 DEBUG oslo.service.loopingcall [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.947 2 DEBUG nova.compute.manager [-] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.947 2 DEBUG nova.network.neutron [-] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 12:58:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:58:35 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1631491133' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:58:35 np0005464891 nova_compute[259907]: 2025-10-01 16:58:35.972 2 DEBUG oslo_concurrency.processutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.132 2 DEBUG os_brick.encryptors [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Using volume encryption metadata '{'encryption_key_id': '43b7937e-7643-499e-916b-d2fbc0639b47', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1580863a-8a45-49dc-baa1-1fe7c2e3a74d', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1580863a-8a45-49dc-baa1-1fe7c2e3a74d', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'dd2acd48-65e4-48e1-80ae-b7404cb6fc4e', 'attached_at': '', 'detached_at': '', 'volume_id': '1580863a-8a45-49dc-baa1-1fe7c2e3a74d', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.134 2 DEBUG barbicanclient.client [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.150 2 DEBUG barbicanclient.v1.secrets [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/43b7937e-7643-499e-916b-d2fbc0639b47 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.150 2 INFO barbicanclient.base [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/43b7937e-7643-499e-916b-d2fbc0639b47#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.175 2 DEBUG barbicanclient.client [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.176 2 INFO barbicanclient.base [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/43b7937e-7643-499e-916b-d2fbc0639b47#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.207 2 DEBUG barbicanclient.client [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.208 2 INFO barbicanclient.base [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/43b7937e-7643-499e-916b-d2fbc0639b47#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.250 2 DEBUG barbicanclient.client [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.251 2 INFO barbicanclient.base [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/43b7937e-7643-499e-916b-d2fbc0639b47#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.289 2 DEBUG barbicanclient.client [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.290 2 INFO barbicanclient.base [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/43b7937e-7643-499e-916b-d2fbc0639b47#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.310 2 DEBUG barbicanclient.client [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.311 2 INFO barbicanclient.base [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/43b7937e-7643-499e-916b-d2fbc0639b47#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.332 2 DEBUG barbicanclient.client [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.333 2 INFO barbicanclient.base [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/43b7937e-7643-499e-916b-d2fbc0639b47#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.353 2 DEBUG barbicanclient.client [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.354 2 INFO barbicanclient.base [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/43b7937e-7643-499e-916b-d2fbc0639b47#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.372 2 DEBUG barbicanclient.client [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.374 2 INFO barbicanclient.base [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/43b7937e-7643-499e-916b-d2fbc0639b47#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.394 2 DEBUG barbicanclient.client [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.394 2 INFO barbicanclient.base [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/43b7937e-7643-499e-916b-d2fbc0639b47#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.419 2 DEBUG barbicanclient.client [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.419 2 INFO barbicanclient.base [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/43b7937e-7643-499e-916b-d2fbc0639b47#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.448 2 DEBUG barbicanclient.client [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.449 2 INFO barbicanclient.base [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/43b7937e-7643-499e-916b-d2fbc0639b47#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.476 2 DEBUG barbicanclient.client [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.477 2 INFO barbicanclient.base [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/43b7937e-7643-499e-916b-d2fbc0639b47#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.500 2 DEBUG barbicanclient.client [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.500 2 INFO barbicanclient.base [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/43b7937e-7643-499e-916b-d2fbc0639b47#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.505 2 DEBUG nova.compute.manager [req-fb1c6961-1697-4b08-9c96-acf0d5a79602 req-b562726d-41fc-450c-a88d-276f4d9a8139 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Received event network-vif-unplugged-f38329c7-0a79-480d-86b8-0cdc29deda98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.505 2 DEBUG oslo_concurrency.lockutils [req-fb1c6961-1697-4b08-9c96-acf0d5a79602 req-b562726d-41fc-450c-a88d-276f4d9a8139 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "f5c6a668-6fa1-4a25-974c-0395fc52bf1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.505 2 DEBUG oslo_concurrency.lockutils [req-fb1c6961-1697-4b08-9c96-acf0d5a79602 req-b562726d-41fc-450c-a88d-276f4d9a8139 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "f5c6a668-6fa1-4a25-974c-0395fc52bf1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.506 2 DEBUG oslo_concurrency.lockutils [req-fb1c6961-1697-4b08-9c96-acf0d5a79602 req-b562726d-41fc-450c-a88d-276f4d9a8139 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "f5c6a668-6fa1-4a25-974c-0395fc52bf1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.506 2 DEBUG nova.compute.manager [req-fb1c6961-1697-4b08-9c96-acf0d5a79602 req-b562726d-41fc-450c-a88d-276f4d9a8139 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] No waiting events found dispatching network-vif-unplugged-f38329c7-0a79-480d-86b8-0cdc29deda98 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.506 2 DEBUG nova.compute.manager [req-fb1c6961-1697-4b08-9c96-acf0d5a79602 req-b562726d-41fc-450c-a88d-276f4d9a8139 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Received event network-vif-unplugged-f38329c7-0a79-480d-86b8-0cdc29deda98 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.519 2 DEBUG barbicanclient.client [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.520 2 INFO barbicanclient.base [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/43b7937e-7643-499e-916b-d2fbc0639b47#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.548 2 DEBUG barbicanclient.client [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.549 2 DEBUG nova.virt.libvirt.host [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct  1 12:58:36 np0005464891 nova_compute[259907]:  <usage type="volume">
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <volume>1580863a-8a45-49dc-baa1-1fe7c2e3a74d</volume>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:  </usage>
Oct  1 12:58:36 np0005464891 nova_compute[259907]: </secret>
Oct  1 12:58:36 np0005464891 nova_compute[259907]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.576 2 DEBUG nova.virt.libvirt.vif [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:58:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-233020751',display_name='tempest-TestEncryptedCinderVolumes-server-233020751',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-233020751',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIhJiMuVwk4EQ7wkCYcLaeTsPomALwyR3FBK+97oa6ynrLvPrKJKnE71uKm0O/hFbPLnI7X22RnrmUili5anoyjadz+yIM+FZfOiuxhlfC8kCRP4tSOOTh7DLMRl7W7xOg==',key_name='tempest-TestEncryptedCinderVolumes-620996693',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bb5e44f7928546dfb674d53cd3727027',ramdisk_id='',reservation_id='r-ee895gd2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-803701988',owner_user_name='tempest-TestEncryptedCinderVolumes-803701988-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:58:26Z,user_data=None,user_id='906d3d29e27b49c1860f5397c6028d96',uuid=dd2acd48-65e4-48e1-80ae-b7404cb6fc4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "df49da0f-d552-4921-b312-c9644f9430de", "address": "fa:16:3e:88:2a:ee", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf49da0f-d5", "ovs_interfaceid": "df49da0f-d552-4921-b312-c9644f9430de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.577 2 DEBUG nova.network.os_vif_util [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Converting VIF {"id": "df49da0f-d552-4921-b312-c9644f9430de", "address": "fa:16:3e:88:2a:ee", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf49da0f-d5", "ovs_interfaceid": "df49da0f-d552-4921-b312-c9644f9430de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.577 2 DEBUG nova.network.os_vif_util [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:2a:ee,bridge_name='br-int',has_traffic_filtering=True,id=df49da0f-d552-4921-b312-c9644f9430de,network=Network(2345ad6b-d676-4546-a17e-6f7405ff5f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf49da0f-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.578 2 DEBUG nova.objects.instance [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lazy-loading 'pci_devices' on Instance uuid dd2acd48-65e4-48e1-80ae-b7404cb6fc4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.589 2 DEBUG nova.virt.libvirt.driver [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] End _get_guest_xml xml=<domain type="kvm">
Oct  1 12:58:36 np0005464891 nova_compute[259907]:  <uuid>dd2acd48-65e4-48e1-80ae-b7404cb6fc4e</uuid>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:  <name>instance-00000015</name>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-233020751</nova:name>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 16:58:35</nova:creationTime>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 12:58:36 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:        <nova:user uuid="906d3d29e27b49c1860f5397c6028d96">tempest-TestEncryptedCinderVolumes-803701988-project-member</nova:user>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:        <nova:project uuid="bb5e44f7928546dfb674d53cd3727027">tempest-TestEncryptedCinderVolumes-803701988</nova:project>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:        <nova:port uuid="df49da0f-d552-4921-b312-c9644f9430de">
Oct  1 12:58:36 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <system>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <entry name="serial">dd2acd48-65e4-48e1-80ae-b7404cb6fc4e</entry>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <entry name="uuid">dd2acd48-65e4-48e1-80ae-b7404cb6fc4e</entry>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    </system>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:  <os>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:  </clock>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/dd2acd48-65e4-48e1-80ae-b7404cb6fc4e_disk.config">
Oct  1 12:58:36 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:58:36 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="volumes/volume-1580863a-8a45-49dc-baa1-1fe7c2e3a74d">
Oct  1 12:58:36 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:58:36 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <serial>1580863a-8a45-49dc-baa1-1fe7c2e3a74d</serial>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <encryption format="luks">
Oct  1 12:58:36 np0005464891 nova_compute[259907]:        <secret type="passphrase" uuid="c2fdd4b4-33de-42c4-8686-db107ee97f12"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      </encryption>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:88:2a:ee"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <target dev="tapdf49da0f-d5"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/dd2acd48-65e4-48e1-80ae-b7404cb6fc4e/console.log" append="off"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    </serial>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <video>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 12:58:36 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 12:58:36 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:58:36 np0005464891 nova_compute[259907]: </domain>
Oct  1 12:58:36 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.590 2 DEBUG nova.compute.manager [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Preparing to wait for external event network-vif-plugged-df49da0f-d552-4921-b312-c9644f9430de prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.590 2 DEBUG oslo_concurrency.lockutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "dd2acd48-65e4-48e1-80ae-b7404cb6fc4e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.590 2 DEBUG oslo_concurrency.lockutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "dd2acd48-65e4-48e1-80ae-b7404cb6fc4e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.591 2 DEBUG oslo_concurrency.lockutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "dd2acd48-65e4-48e1-80ae-b7404cb6fc4e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.591 2 DEBUG nova.virt.libvirt.vif [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:58:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-233020751',display_name='tempest-TestEncryptedCinderVolumes-server-233020751',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-233020751',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIhJiMuVwk4EQ7wkCYcLaeTsPomALwyR3FBK+97oa6ynrLvPrKJKnE71uKm0O/hFbPLnI7X22RnrmUili5anoyjadz+yIM+FZfOiuxhlfC8kCRP4tSOOTh7DLMRl7W7xOg==',key_name='tempest-TestEncryptedCinderVolumes-620996693',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bb5e44f7928546dfb674d53cd3727027',ramdisk_id='',reservation_id='r-ee895gd2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-803701988',owner_user_name='tempest-TestEncryptedCinderVolumes-803701988-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:58:26Z,user_data=None,user_id='906d3d29e27b49c1860f5397c6028d96',uuid=dd2acd48-65e4-48e1-80ae-b7404cb6fc4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "df49da0f-d552-4921-b312-c9644f9430de", "address": "fa:16:3e:88:2a:ee", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf49da0f-d5", "ovs_interfaceid": "df49da0f-d552-4921-b312-c9644f9430de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.592 2 DEBUG nova.network.os_vif_util [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Converting VIF {"id": "df49da0f-d552-4921-b312-c9644f9430de", "address": "fa:16:3e:88:2a:ee", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf49da0f-d5", "ovs_interfaceid": "df49da0f-d552-4921-b312-c9644f9430de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.592 2 DEBUG nova.network.os_vif_util [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:2a:ee,bridge_name='br-int',has_traffic_filtering=True,id=df49da0f-d552-4921-b312-c9644f9430de,network=Network(2345ad6b-d676-4546-a17e-6f7405ff5f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf49da0f-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.592 2 DEBUG os_vif [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:2a:ee,bridge_name='br-int',has_traffic_filtering=True,id=df49da0f-d552-4921-b312-c9644f9430de,network=Network(2345ad6b-d676-4546-a17e-6f7405ff5f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf49da0f-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.593 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.593 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.597 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf49da0f-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.597 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdf49da0f-d5, col_values=(('external_ids', {'iface-id': 'df49da0f-d552-4921-b312-c9644f9430de', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:2a:ee', 'vm-uuid': 'dd2acd48-65e4-48e1-80ae-b7404cb6fc4e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:36 np0005464891 NetworkManager[44940]: <info>  [1759337916.6002] manager: (tapdf49da0f-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.603 2 INFO os_vif [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:2a:ee,bridge_name='br-int',has_traffic_filtering=True,id=df49da0f-d552-4921-b312-c9644f9430de,network=Network(2345ad6b-d676-4546-a17e-6f7405ff5f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf49da0f-d5')#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.651 2 DEBUG nova.virt.libvirt.driver [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.652 2 DEBUG nova.virt.libvirt.driver [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.652 2 DEBUG nova.virt.libvirt.driver [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] No VIF found with MAC fa:16:3e:88:2a:ee, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.653 2 INFO nova.virt.libvirt.driver [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Using config drive#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.672 2 DEBUG nova.storage.rbd_utils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] rbd image dd2acd48-65e4-48e1-80ae-b7404cb6fc4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.898 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.898 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:58:36 np0005464891 nova_compute[259907]: 2025-10-01 16:58:36.899 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.037 2 INFO nova.virt.libvirt.driver [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Creating config drive at /var/lib/nova/instances/dd2acd48-65e4-48e1-80ae-b7404cb6fc4e/disk.config#033[00m
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.042 2 DEBUG oslo_concurrency.processutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd2acd48-65e4-48e1-80ae-b7404cb6fc4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpztmgda6p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:58:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.140 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "refresh_cache-47531108-4f20-41bd-8fb8-77fae3a30b85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.141 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquired lock "refresh_cache-47531108-4f20-41bd-8fb8-77fae3a30b85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.142 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.173 2 DEBUG oslo_concurrency.processutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd2acd48-65e4-48e1-80ae-b7404cb6fc4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpztmgda6p" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.201 2 DEBUG nova.storage.rbd_utils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] rbd image dd2acd48-65e4-48e1-80ae-b7404cb6fc4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.206 2 DEBUG oslo_concurrency.processutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd2acd48-65e4-48e1-80ae-b7404cb6fc4e/disk.config dd2acd48-65e4-48e1-80ae-b7404cb6fc4e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:58:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1761: 321 pgs: 321 active+clean; 303 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 3.7 KiB/s wr, 0 op/s
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.315 2 DEBUG nova.network.neutron [-] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.337 2 INFO nova.compute.manager [-] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Took 1.39 seconds to deallocate network for instance.#033[00m
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.365 2 DEBUG oslo_concurrency.processutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd2acd48-65e4-48e1-80ae-b7404cb6fc4e/disk.config dd2acd48-65e4-48e1-80ae-b7404cb6fc4e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.366 2 INFO nova.virt.libvirt.driver [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Deleting local config drive /var/lib/nova/instances/dd2acd48-65e4-48e1-80ae-b7404cb6fc4e/disk.config because it was imported into RBD.#033[00m
Oct  1 12:58:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:58:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2298059648' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:58:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:58:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2298059648' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:58:37 np0005464891 kernel: tapdf49da0f-d5: entered promiscuous mode
Oct  1 12:58:37 np0005464891 systemd-udevd[297268]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:58:37 np0005464891 NetworkManager[44940]: <info>  [1759337917.4372] manager: (tapdf49da0f-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/112)
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:37 np0005464891 ovn_controller[152409]: 2025-10-01T16:58:37Z|00194|binding|INFO|Claiming lport df49da0f-d552-4921-b312-c9644f9430de for this chassis.
Oct  1 12:58:37 np0005464891 ovn_controller[152409]: 2025-10-01T16:58:37Z|00195|binding|INFO|df49da0f-d552-4921-b312-c9644f9430de: Claiming fa:16:3e:88:2a:ee 10.100.0.9
Oct  1 12:58:37 np0005464891 NetworkManager[44940]: <info>  [1759337917.4436] device (tapdf49da0f-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:58:37 np0005464891 NetworkManager[44940]: <info>  [1759337917.4454] device (tapdf49da0f-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:37 np0005464891 ovn_controller[152409]: 2025-10-01T16:58:37Z|00196|binding|INFO|Setting lport df49da0f-d552-4921-b312-c9644f9430de ovn-installed in OVS
Oct  1 12:58:37 np0005464891 ovn_controller[152409]: 2025-10-01T16:58:37Z|00197|binding|INFO|Setting lport df49da0f-d552-4921-b312-c9644f9430de up in Southbound
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.453 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:2a:ee 10.100.0.9'], port_security=['fa:16:3e:88:2a:ee 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dd2acd48-65e4-48e1-80ae-b7404cb6fc4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2345ad6b-d676-4546-a17e-6f7405ff5f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bb5e44f7928546dfb674d53cd3727027', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c51767f2-742e-4209-a278-1c1f1e9af624', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08e741b0-61e8-4126-b98f-610a01494f2d, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=df49da0f-d552-4921-b312-c9644f9430de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.455 162546 INFO neutron.agent.ovn.metadata.agent [-] Port df49da0f-d552-4921-b312-c9644f9430de in datapath 2345ad6b-d676-4546-a17e-6f7405ff5f24 bound to our chassis#033[00m
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.456 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2345ad6b-d676-4546-a17e-6f7405ff5f24#033[00m
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.469 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[f1a04909-7357-4fdd-9291-356cd76aa292]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.469 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2345ad6b-d1 in ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.471 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2345ad6b-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.471 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[5f345c4c-5b3a-46e3-8ee7-56cdf4555694]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.472 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[9acc7dc6-c30b-46e2-a2fa-f314e05019e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:37 np0005464891 systemd-machined[214891]: New machine qemu-21-instance-00000015.
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.484 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[3a33716d-5956-4bcc-8fd5-723e33c6cec2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:37 np0005464891 systemd[1]: Started Virtual Machine qemu-21-instance-00000015.
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.516 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[8a682045-c663-4b02-a390-316b8080ffcd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.542 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[423975bf-7535-4a09-98a4-18b1afe163ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:37 np0005464891 NetworkManager[44940]: <info>  [1759337917.5490] manager: (tap2345ad6b-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/113)
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.549 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b97889-82f6-4e29-b05b-41762a804d95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.586 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[8809082f-f5ff-4011-827b-106cc6b9d7de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.590 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[9b73a9f0-6f59-44f9-8db6-a217e24fc9d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:37 np0005464891 NetworkManager[44940]: <info>  [1759337917.6216] device (tap2345ad6b-d0): carrier: link connected
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.637 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[acedcd27-351e-4c79-b428-63862ad34616]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.660 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[bf07024d-afe2-4233-a044-a83584415b05]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2345ad6b-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:95:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486619, 'reachable_time': 23908, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297434, 'error': None, 'target': 'ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.684 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[5dcb82f5-b566-4b58-9de9-540e054e14e8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1e:9597'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 486619, 'tstamp': 486619}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297435, 'error': None, 'target': 'ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.703 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[e313b2e7-e75a-4a83-b03e-cfba76368943]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2345ad6b-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:95:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486619, 'reachable_time': 23908, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297436, 'error': None, 'target': 'ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.733 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[5f7a48a1-a247-4d87-a3e6-ccd5e899f73a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.786 2 INFO nova.compute.manager [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Took 0.45 seconds to detach 1 volumes for instance.#033[00m
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.788 2 DEBUG nova.compute.manager [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Deleting volume: 39ef8826-321b-487e-8079-cce20f84e21a _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.804 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[cad44e46-bfc5-45d7-81fe-c557ea128c7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.805 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2345ad6b-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.806 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.806 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2345ad6b-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:37 np0005464891 kernel: tap2345ad6b-d0: entered promiscuous mode
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:37 np0005464891 NetworkManager[44940]: <info>  [1759337917.8132] manager: (tap2345ad6b-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.820 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2345ad6b-d0, col_values=(('external_ids', {'iface-id': '459f1bd9-9c63-458d-a0ce-6bd274d1ecbb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:58:37 np0005464891 ovn_controller[152409]: 2025-10-01T16:58:37Z|00198|binding|INFO|Releasing lport 459f1bd9-9c63-458d-a0ce-6bd274d1ecbb from this chassis (sb_readonly=0)
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.837 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2345ad6b-d676-4546-a17e-6f7405ff5f24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2345ad6b-d676-4546-a17e-6f7405ff5f24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.839 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[d0903d70-729b-4e57-b21f-e6ec5ca9c77d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.839 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-2345ad6b-d676-4546-a17e-6f7405ff5f24
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/2345ad6b-d676-4546-a17e-6f7405ff5f24.pid.haproxy
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID 2345ad6b-d676-4546-a17e-6f7405ff5f24
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 12:58:37 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:37.840 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24', 'env', 'PROCESS_TAG=haproxy-2345ad6b-d676-4546-a17e-6f7405ff5f24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2345ad6b-d676-4546-a17e-6f7405ff5f24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.888 2 DEBUG nova.network.neutron [req-564626b4-d673-4ae7-ac50-c55ee2e2d670 req-4958ae69-bedb-4c5f-b1a7-59bf9ea8dfdd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Updated VIF entry in instance network info cache for port df49da0f-d552-4921-b312-c9644f9430de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.889 2 DEBUG nova.network.neutron [req-564626b4-d673-4ae7-ac50-c55ee2e2d670 req-4958ae69-bedb-4c5f-b1a7-59bf9ea8dfdd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Updating instance_info_cache with network_info: [{"id": "df49da0f-d552-4921-b312-c9644f9430de", "address": "fa:16:3e:88:2a:ee", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf49da0f-d5", "ovs_interfaceid": "df49da0f-d552-4921-b312-c9644f9430de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.909 2 DEBUG oslo_concurrency.lockutils [req-564626b4-d673-4ae7-ac50-c55ee2e2d670 req-4958ae69-bedb-4c5f-b1a7-59bf9ea8dfdd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-dd2acd48-65e4-48e1-80ae-b7404cb6fc4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.953 2 DEBUG nova.compute.manager [req-0989ea81-51a6-426d-ac29-790a48084726 req-38b106d8-e1c6-4260-a950-bcab0f66a371 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Received event network-vif-plugged-df49da0f-d552-4921-b312-c9644f9430de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.960 2 DEBUG oslo_concurrency.lockutils [req-0989ea81-51a6-426d-ac29-790a48084726 req-38b106d8-e1c6-4260-a950-bcab0f66a371 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "dd2acd48-65e4-48e1-80ae-b7404cb6fc4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.961 2 DEBUG oslo_concurrency.lockutils [req-0989ea81-51a6-426d-ac29-790a48084726 req-38b106d8-e1c6-4260-a950-bcab0f66a371 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "dd2acd48-65e4-48e1-80ae-b7404cb6fc4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.961 2 DEBUG oslo_concurrency.lockutils [req-0989ea81-51a6-426d-ac29-790a48084726 req-38b106d8-e1c6-4260-a950-bcab0f66a371 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "dd2acd48-65e4-48e1-80ae-b7404cb6fc4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:58:37 np0005464891 nova_compute[259907]: 2025-10-01 16:58:37.961 2 DEBUG nova.compute.manager [req-0989ea81-51a6-426d-ac29-790a48084726 req-38b106d8-e1c6-4260-a950-bcab0f66a371 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Processing event network-vif-plugged-df49da0f-d552-4921-b312-c9644f9430de _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 12:58:38 np0005464891 podman[297467]: 2025-10-01 16:58:38.236747239 +0000 UTC m=+0.052212172 container create a788e433c5ac2182c1cad47249c1fe398f32d1045fc74f47855a6c075c245faf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  1 12:58:38 np0005464891 systemd[1]: Started libpod-conmon-a788e433c5ac2182c1cad47249c1fe398f32d1045fc74f47855a6c075c245faf.scope.
Oct  1 12:58:38 np0005464891 nova_compute[259907]: 2025-10-01 16:58:38.289 2 DEBUG oslo_concurrency.lockutils [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:58:38 np0005464891 nova_compute[259907]: 2025-10-01 16:58:38.290 2 DEBUG oslo_concurrency.lockutils [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:58:38 np0005464891 podman[297467]: 2025-10-01 16:58:38.209707958 +0000 UTC m=+0.025172911 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 12:58:38 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:58:38 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e6783cd79457b80c9a298bd1072d9419f2e346539466b78ea67994ff0f3722b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 12:58:38 np0005464891 podman[297467]: 2025-10-01 16:58:38.326163139 +0000 UTC m=+0.141628092 container init a788e433c5ac2182c1cad47249c1fe398f32d1045fc74f47855a6c075c245faf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  1 12:58:38 np0005464891 podman[297467]: 2025-10-01 16:58:38.331215896 +0000 UTC m=+0.146680839 container start a788e433c5ac2182c1cad47249c1fe398f32d1045fc74f47855a6c075c245faf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:58:38 np0005464891 neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24[297518]: [NOTICE]   (297522) : New worker (297524) forked
Oct  1 12:58:38 np0005464891 neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24[297518]: [NOTICE]   (297522) : Loading success.
Oct  1 12:58:38 np0005464891 nova_compute[259907]: 2025-10-01 16:58:38.376 2 DEBUG oslo_concurrency.processutils [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:58:38 np0005464891 nova_compute[259907]: 2025-10-01 16:58:38.423 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Updating instance_info_cache with network_info: [{"id": "69588747-06d2-44cb-bcb8-bfa62dd280d3", "address": "fa:16:3e:e1:1c:22", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69588747-06", "ovs_interfaceid": "69588747-06d2-44cb-bcb8-bfa62dd280d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:58:38 np0005464891 nova_compute[259907]: 2025-10-01 16:58:38.444 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Releasing lock "refresh_cache-47531108-4f20-41bd-8fb8-77fae3a30b85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:58:38 np0005464891 nova_compute[259907]: 2025-10-01 16:58:38.444 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  1 12:58:38 np0005464891 nova_compute[259907]: 2025-10-01 16:58:38.445 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:58:38 np0005464891 nova_compute[259907]: 2025-10-01 16:58:38.585 2 DEBUG nova.compute.manager [req-5abac1d9-6e74-49e8-b315-a58b526e951b req-93007a6e-c978-4356-b3bb-f04c76fe1eba af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Received event network-vif-plugged-f38329c7-0a79-480d-86b8-0cdc29deda98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:58:38 np0005464891 nova_compute[259907]: 2025-10-01 16:58:38.586 2 DEBUG oslo_concurrency.lockutils [req-5abac1d9-6e74-49e8-b315-a58b526e951b req-93007a6e-c978-4356-b3bb-f04c76fe1eba af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "f5c6a668-6fa1-4a25-974c-0395fc52bf1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:58:38 np0005464891 nova_compute[259907]: 2025-10-01 16:58:38.587 2 DEBUG oslo_concurrency.lockutils [req-5abac1d9-6e74-49e8-b315-a58b526e951b req-93007a6e-c978-4356-b3bb-f04c76fe1eba af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "f5c6a668-6fa1-4a25-974c-0395fc52bf1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:58:38 np0005464891 nova_compute[259907]: 2025-10-01 16:58:38.587 2 DEBUG oslo_concurrency.lockutils [req-5abac1d9-6e74-49e8-b315-a58b526e951b req-93007a6e-c978-4356-b3bb-f04c76fe1eba af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "f5c6a668-6fa1-4a25-974c-0395fc52bf1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:58:38 np0005464891 nova_compute[259907]: 2025-10-01 16:58:38.587 2 DEBUG nova.compute.manager [req-5abac1d9-6e74-49e8-b315-a58b526e951b req-93007a6e-c978-4356-b3bb-f04c76fe1eba af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] No waiting events found dispatching network-vif-plugged-f38329c7-0a79-480d-86b8-0cdc29deda98 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:58:38 np0005464891 nova_compute[259907]: 2025-10-01 16:58:38.588 2 WARNING nova.compute.manager [req-5abac1d9-6e74-49e8-b315-a58b526e951b req-93007a6e-c978-4356-b3bb-f04c76fe1eba af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Received unexpected event network-vif-plugged-f38329c7-0a79-480d-86b8-0cdc29deda98 for instance with vm_state deleted and task_state None.#033[00m
Oct  1 12:58:38 np0005464891 nova_compute[259907]: 2025-10-01 16:58:38.588 2 DEBUG nova.compute.manager [req-5abac1d9-6e74-49e8-b315-a58b526e951b req-93007a6e-c978-4356-b3bb-f04c76fe1eba af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Received event network-vif-deleted-f38329c7-0a79-480d-86b8-0cdc29deda98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:58:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:58:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1202499721' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:58:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:58:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1202499721' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:58:38 np0005464891 nova_compute[259907]: 2025-10-01 16:58:38.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:58:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:58:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/930096072' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:58:38 np0005464891 nova_compute[259907]: 2025-10-01 16:58:38.825 2 DEBUG oslo_concurrency.processutils [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:58:38 np0005464891 nova_compute[259907]: 2025-10-01 16:58:38.833 2 DEBUG nova.compute.provider_tree [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:58:38 np0005464891 nova_compute[259907]: 2025-10-01 16:58:38.866 2 DEBUG nova.scheduler.client.report [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:58:38 np0005464891 nova_compute[259907]: 2025-10-01 16:58:38.896 2 DEBUG oslo_concurrency.lockutils [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:58:38 np0005464891 nova_compute[259907]: 2025-10-01 16:58:38.929 2 INFO nova.scheduler.client.report [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Deleted allocations for instance f5c6a668-6fa1-4a25-974c-0395fc52bf1b#033[00m
Oct  1 12:58:38 np0005464891 nova_compute[259907]: 2025-10-01 16:58:38.984 2 DEBUG oslo_concurrency.lockutils [None req-c42c73d8-4aa8-4bc5-8a65-e9cc2c3c60b5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "f5c6a668-6fa1-4a25-974c-0395fc52bf1b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:58:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1762: 321 pgs: 321 active+clean; 303 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 3.9 KiB/s wr, 1 op/s
Oct  1 12:58:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e438 do_prune osdmap full prune enabled
Oct  1 12:58:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e439 e439: 3 total, 3 up, 3 in
Oct  1 12:58:39 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e439: 3 total, 3 up, 3 in
Oct  1 12:58:39 np0005464891 nova_compute[259907]: 2025-10-01 16:58:39.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:58:40 np0005464891 nova_compute[259907]: 2025-10-01 16:58:40.074 2 DEBUG nova.compute.manager [req-342b3fb1-038c-4c9c-ba1f-b9cf44f7aeac req-4521169e-c15c-486a-b4f2-e8cbd83f5e1c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Received event network-vif-plugged-df49da0f-d552-4921-b312-c9644f9430de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:58:40 np0005464891 nova_compute[259907]: 2025-10-01 16:58:40.075 2 DEBUG oslo_concurrency.lockutils [req-342b3fb1-038c-4c9c-ba1f-b9cf44f7aeac req-4521169e-c15c-486a-b4f2-e8cbd83f5e1c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "dd2acd48-65e4-48e1-80ae-b7404cb6fc4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:58:40 np0005464891 nova_compute[259907]: 2025-10-01 16:58:40.075 2 DEBUG oslo_concurrency.lockutils [req-342b3fb1-038c-4c9c-ba1f-b9cf44f7aeac req-4521169e-c15c-486a-b4f2-e8cbd83f5e1c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "dd2acd48-65e4-48e1-80ae-b7404cb6fc4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:58:40 np0005464891 nova_compute[259907]: 2025-10-01 16:58:40.076 2 DEBUG oslo_concurrency.lockutils [req-342b3fb1-038c-4c9c-ba1f-b9cf44f7aeac req-4521169e-c15c-486a-b4f2-e8cbd83f5e1c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "dd2acd48-65e4-48e1-80ae-b7404cb6fc4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:58:40 np0005464891 nova_compute[259907]: 2025-10-01 16:58:40.076 2 DEBUG nova.compute.manager [req-342b3fb1-038c-4c9c-ba1f-b9cf44f7aeac req-4521169e-c15c-486a-b4f2-e8cbd83f5e1c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] No waiting events found dispatching network-vif-plugged-df49da0f-d552-4921-b312-c9644f9430de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:58:40 np0005464891 nova_compute[259907]: 2025-10-01 16:58:40.077 2 WARNING nova.compute.manager [req-342b3fb1-038c-4c9c-ba1f-b9cf44f7aeac req-4521169e-c15c-486a-b4f2-e8cbd83f5e1c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Received unexpected event network-vif-plugged-df49da0f-d552-4921-b312-c9644f9430de for instance with vm_state building and task_state spawning.#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.000 2 DEBUG oslo_concurrency.lockutils [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "47531108-4f20-41bd-8fb8-77fae3a30b85" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.000 2 DEBUG oslo_concurrency.lockutils [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "47531108-4f20-41bd-8fb8-77fae3a30b85" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.001 2 DEBUG oslo_concurrency.lockutils [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "47531108-4f20-41bd-8fb8-77fae3a30b85-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.001 2 DEBUG oslo_concurrency.lockutils [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "47531108-4f20-41bd-8fb8-77fae3a30b85-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.001 2 DEBUG oslo_concurrency.lockutils [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "47531108-4f20-41bd-8fb8-77fae3a30b85-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.002 2 INFO nova.compute.manager [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Terminating instance#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.003 2 DEBUG nova.compute.manager [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.063 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337921.0628247, dd2acd48-65e4-48e1-80ae-b7404cb6fc4e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.064 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] VM Started (Lifecycle Event)#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.066 2 DEBUG nova.compute.manager [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.071 2 DEBUG nova.virt.libvirt.driver [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.076 2 INFO nova.virt.libvirt.driver [-] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Instance spawned successfully.#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.076 2 DEBUG nova.virt.libvirt.driver [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.094 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.097 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:58:41 np0005464891 kernel: tap69588747-06 (unregistering): left promiscuous mode
Oct  1 12:58:41 np0005464891 NetworkManager[44940]: <info>  [1759337921.1051] device (tap69588747-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.111 2 DEBUG nova.virt.libvirt.driver [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:58:41 np0005464891 ovn_controller[152409]: 2025-10-01T16:58:41Z|00199|binding|INFO|Releasing lport 69588747-06d2-44cb-bcb8-bfa62dd280d3 from this chassis (sb_readonly=0)
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.112 2 DEBUG nova.virt.libvirt.driver [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:58:41 np0005464891 ovn_controller[152409]: 2025-10-01T16:58:41Z|00200|binding|INFO|Setting lport 69588747-06d2-44cb-bcb8-bfa62dd280d3 down in Southbound
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.112 2 DEBUG nova.virt.libvirt.driver [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:58:41 np0005464891 ovn_controller[152409]: 2025-10-01T16:58:41Z|00201|binding|INFO|Removing iface tap69588747-06 ovn-installed in OVS
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.112 2 DEBUG nova.virt.libvirt.driver [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.113 2 DEBUG nova.virt.libvirt.driver [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.113 2 DEBUG nova.virt.libvirt.driver [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.124 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.125 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337921.0629745, dd2acd48-65e4-48e1-80ae-b7404cb6fc4e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.125 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] VM Paused (Lifecycle Event)#033[00m
Oct  1 12:58:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:41.123 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:1c:22 10.100.0.8'], port_security=['fa:16:3e:e1:1c:22 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '47531108-4f20-41bd-8fb8-77fae3a30b85', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce1e1062-6685-441b-8278-667224375e38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8318b65fa88942a99937a0d198a04a9c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '16fffc6f-0dbd-4932-b567-78bcd2e66114', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4cdcf07-f310-4572-944c-43bd8e74f763, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=69588747-06d2-44cb-bcb8-bfa62dd280d3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:58:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:41.124 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 69588747-06d2-44cb-bcb8-bfa62dd280d3 in datapath ce1e1062-6685-441b-8278-667224375e38 unbound from our chassis#033[00m
Oct  1 12:58:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:41.126 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce1e1062-6685-441b-8278-667224375e38, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 12:58:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:41.127 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[85644f08-430c-4992-b08a-53c095e53303]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:41.127 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce1e1062-6685-441b-8278-667224375e38 namespace which is not needed anymore#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.154 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.159 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337921.070125, dd2acd48-65e4-48e1-80ae-b7404cb6fc4e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.159 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] VM Resumed (Lifecycle Event)#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.183 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:58:41 np0005464891 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Deactivated successfully.
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.187 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:58:41 np0005464891 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Consumed 16.660s CPU time.
Oct  1 12:58:41 np0005464891 systemd-machined[214891]: Machine qemu-19-instance-00000013 terminated.
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.193 2 INFO nova.compute.manager [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Took 12.58 seconds to spawn the instance on the hypervisor.#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.194 2 DEBUG nova.compute.manager [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:58:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1764: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 286 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 17 KiB/s wr, 49 op/s
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.237 2 INFO nova.virt.libvirt.driver [-] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Instance destroyed successfully.#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.237 2 DEBUG nova.objects.instance [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lazy-loading 'resources' on Instance uuid 47531108-4f20-41bd-8fb8-77fae3a30b85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.239 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.263 2 DEBUG nova.virt.libvirt.vif [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T16:57:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-554072980',display_name='tempest-TestVolumeBootPattern-volume-backed-server-554072980',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-554072980',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIA89U+L/HbbAMUwRnUudlnbssd9D8/QXPa6lN4Le8arNbHmKfF3KR4E1oY5xNiJdAE870XWxXZRbQWs2VeTBkEYbdx/bUvxGF6RT6eWXbmql4fDNN9pQLw1Jszf6Z6rkw==',key_name='tempest-keypair-269452850',keypairs=<?>,launch_index=0,launched_at=2025-10-01T16:57:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8318b65fa88942a99937a0d198a04a9c',ramdisk_id='',reservation_id='r-yq6yu1yk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-582136054',owner_user_name='tempest-TestVolumeBootPattern-582136054-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T16:57:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1280014cdfb74333ae8d71c78116e646',uuid=47531108-4f20-41bd-8fb8-77fae3a30b85,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "69588747-06d2-44cb-bcb8-bfa62dd280d3", "address": "fa:16:3e:e1:1c:22", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69588747-06", "ovs_interfaceid": "69588747-06d2-44cb-bcb8-bfa62dd280d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.263 2 DEBUG nova.network.os_vif_util [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converting VIF {"id": "69588747-06d2-44cb-bcb8-bfa62dd280d3", "address": "fa:16:3e:e1:1c:22", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69588747-06", "ovs_interfaceid": "69588747-06d2-44cb-bcb8-bfa62dd280d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.265 2 DEBUG nova.network.os_vif_util [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e1:1c:22,bridge_name='br-int',has_traffic_filtering=True,id=69588747-06d2-44cb-bcb8-bfa62dd280d3,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69588747-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.267 2 DEBUG os_vif [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:1c:22,bridge_name='br-int',has_traffic_filtering=True,id=69588747-06d2-44cb-bcb8-bfa62dd280d3,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69588747-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.270 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69588747-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.278 2 INFO nova.compute.manager [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Took 15.61 seconds to build instance.#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.279 2 INFO os_vif [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:1c:22,bridge_name='br-int',has_traffic_filtering=True,id=69588747-06d2-44cb-bcb8-bfa62dd280d3,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69588747-06')#033[00m
Oct  1 12:58:41 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[295654]: [NOTICE]   (295659) : haproxy version is 2.8.14-c23fe91
Oct  1 12:58:41 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[295654]: [NOTICE]   (295659) : path to executable is /usr/sbin/haproxy
Oct  1 12:58:41 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[295654]: [WARNING]  (295659) : Exiting Master process...
Oct  1 12:58:41 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[295654]: [WARNING]  (295659) : Exiting Master process...
Oct  1 12:58:41 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[295654]: [ALERT]    (295659) : Current worker (295665) exited with code 143 (Terminated)
Oct  1 12:58:41 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[295654]: [WARNING]  (295659) : All workers exited. Exiting... (0)
Oct  1 12:58:41 np0005464891 systemd[1]: libpod-ea01c5c6717eed6fad481ef0de762b447e32e377b94c77121c5a06e1d6b864b4.scope: Deactivated successfully.
Oct  1 12:58:41 np0005464891 conmon[295654]: conmon ea01c5c6717eed6fad48 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ea01c5c6717eed6fad481ef0de762b447e32e377b94c77121c5a06e1d6b864b4.scope/container/memory.events
Oct  1 12:58:41 np0005464891 podman[297586]: 2025-10-01 16:58:41.309935564 +0000 UTC m=+0.072837626 container died ea01c5c6717eed6fad481ef0de762b447e32e377b94c77121c5a06e1d6b864b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.325 2 DEBUG oslo_concurrency.lockutils [None req-9a0b4d7c-517b-40c1-b69b-26dc4d57854b 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "dd2acd48-65e4-48e1-80ae-b7404cb6fc4e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:58:41 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ea01c5c6717eed6fad481ef0de762b447e32e377b94c77121c5a06e1d6b864b4-userdata-shm.mount: Deactivated successfully.
Oct  1 12:58:41 np0005464891 systemd[1]: var-lib-containers-storage-overlay-c5f02c5070bfcddb114aae2c461dfa813863231e340f805a871bb92c26b84339-merged.mount: Deactivated successfully.
Oct  1 12:58:41 np0005464891 podman[297586]: 2025-10-01 16:58:41.435617688 +0000 UTC m=+0.198519770 container cleanup ea01c5c6717eed6fad481ef0de762b447e32e377b94c77121c5a06e1d6b864b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  1 12:58:41 np0005464891 systemd[1]: libpod-conmon-ea01c5c6717eed6fad481ef0de762b447e32e377b94c77121c5a06e1d6b864b4.scope: Deactivated successfully.
Oct  1 12:58:41 np0005464891 podman[297643]: 2025-10-01 16:58:41.638064734 +0000 UTC m=+0.172001394 container remove ea01c5c6717eed6fad481ef0de762b447e32e377b94c77121c5a06e1d6b864b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  1 12:58:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:41.650 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[1c637913-e5b6-4bfb-9c8b-2fb82e13d78a]: (4, ('Wed Oct  1 04:58:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38 (ea01c5c6717eed6fad481ef0de762b447e32e377b94c77121c5a06e1d6b864b4)\nea01c5c6717eed6fad481ef0de762b447e32e377b94c77121c5a06e1d6b864b4\nWed Oct  1 04:58:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38 (ea01c5c6717eed6fad481ef0de762b447e32e377b94c77121c5a06e1d6b864b4)\nea01c5c6717eed6fad481ef0de762b447e32e377b94c77121c5a06e1d6b864b4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:41.653 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[993003d9-d049-40cd-a574-7bd117684f6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:41.654 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce1e1062-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:41 np0005464891 kernel: tapce1e1062-60: left promiscuous mode
Oct  1 12:58:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:41.663 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[a3ada96e-652c-4f0a-8fe2-40fb81dac701]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:41.693 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[bb77a2e3-99cb-45fd-96dd-da12bc6079ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:41.695 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[5ac6d9f1-fe07-412e-9cb7-14e504b3f6df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:41.720 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[eeef9ebc-bf99-4029-9ea9-d856a90b20a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478562, 'reachable_time': 18228, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297658, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:41 np0005464891 systemd[1]: run-netns-ovnmeta\x2dce1e1062\x2d6685\x2d441b\x2d8278\x2d667224375e38.mount: Deactivated successfully.
Oct  1 12:58:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:41.723 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce1e1062-6685-441b-8278-667224375e38 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 12:58:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:41.723 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[d84c47c2-16df-4f9e-99c5-6198fe85a2e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:58:41 np0005464891 nova_compute[259907]: 2025-10-01 16:58:41.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:58:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:58:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:58:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:58:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:58:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:58:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:58:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:58:42 np0005464891 nova_compute[259907]: 2025-10-01 16:58:42.164 2 DEBUG nova.compute.manager [req-22b8a34d-73dd-45fc-a072-7cbe1d44453e req-e7d1fa4b-e34a-4e57-a497-958977464898 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Received event network-vif-unplugged-69588747-06d2-44cb-bcb8-bfa62dd280d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:58:42 np0005464891 nova_compute[259907]: 2025-10-01 16:58:42.166 2 DEBUG oslo_concurrency.lockutils [req-22b8a34d-73dd-45fc-a072-7cbe1d44453e req-e7d1fa4b-e34a-4e57-a497-958977464898 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "47531108-4f20-41bd-8fb8-77fae3a30b85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:58:42 np0005464891 nova_compute[259907]: 2025-10-01 16:58:42.166 2 DEBUG oslo_concurrency.lockutils [req-22b8a34d-73dd-45fc-a072-7cbe1d44453e req-e7d1fa4b-e34a-4e57-a497-958977464898 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "47531108-4f20-41bd-8fb8-77fae3a30b85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:58:42 np0005464891 nova_compute[259907]: 2025-10-01 16:58:42.167 2 DEBUG oslo_concurrency.lockutils [req-22b8a34d-73dd-45fc-a072-7cbe1d44453e req-e7d1fa4b-e34a-4e57-a497-958977464898 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "47531108-4f20-41bd-8fb8-77fae3a30b85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:58:42 np0005464891 nova_compute[259907]: 2025-10-01 16:58:42.168 2 DEBUG nova.compute.manager [req-22b8a34d-73dd-45fc-a072-7cbe1d44453e req-e7d1fa4b-e34a-4e57-a497-958977464898 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] No waiting events found dispatching network-vif-unplugged-69588747-06d2-44cb-bcb8-bfa62dd280d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:58:42 np0005464891 nova_compute[259907]: 2025-10-01 16:58:42.169 2 DEBUG nova.compute.manager [req-22b8a34d-73dd-45fc-a072-7cbe1d44453e req-e7d1fa4b-e34a-4e57-a497-958977464898 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Received event network-vif-unplugged-69588747-06d2-44cb-bcb8-bfa62dd280d3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 12:58:42 np0005464891 nova_compute[259907]: 2025-10-01 16:58:42.169 2 DEBUG nova.compute.manager [req-22b8a34d-73dd-45fc-a072-7cbe1d44453e req-e7d1fa4b-e34a-4e57-a497-958977464898 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Received event network-vif-plugged-69588747-06d2-44cb-bcb8-bfa62dd280d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:58:42 np0005464891 nova_compute[259907]: 2025-10-01 16:58:42.170 2 DEBUG oslo_concurrency.lockutils [req-22b8a34d-73dd-45fc-a072-7cbe1d44453e req-e7d1fa4b-e34a-4e57-a497-958977464898 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "47531108-4f20-41bd-8fb8-77fae3a30b85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:58:42 np0005464891 nova_compute[259907]: 2025-10-01 16:58:42.171 2 DEBUG oslo_concurrency.lockutils [req-22b8a34d-73dd-45fc-a072-7cbe1d44453e req-e7d1fa4b-e34a-4e57-a497-958977464898 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "47531108-4f20-41bd-8fb8-77fae3a30b85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:58:42 np0005464891 nova_compute[259907]: 2025-10-01 16:58:42.172 2 DEBUG oslo_concurrency.lockutils [req-22b8a34d-73dd-45fc-a072-7cbe1d44453e req-e7d1fa4b-e34a-4e57-a497-958977464898 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "47531108-4f20-41bd-8fb8-77fae3a30b85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:58:42 np0005464891 nova_compute[259907]: 2025-10-01 16:58:42.173 2 DEBUG nova.compute.manager [req-22b8a34d-73dd-45fc-a072-7cbe1d44453e req-e7d1fa4b-e34a-4e57-a497-958977464898 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] No waiting events found dispatching network-vif-plugged-69588747-06d2-44cb-bcb8-bfa62dd280d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:58:42 np0005464891 nova_compute[259907]: 2025-10-01 16:58:42.173 2 WARNING nova.compute.manager [req-22b8a34d-73dd-45fc-a072-7cbe1d44453e req-e7d1fa4b-e34a-4e57-a497-958977464898 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Received unexpected event network-vif-plugged-69588747-06d2-44cb-bcb8-bfa62dd280d3 for instance with vm_state active and task_state deleting.#033[00m
Oct  1 12:58:42 np0005464891 nova_compute[259907]: 2025-10-01 16:58:42.431 2 INFO nova.virt.libvirt.driver [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Deleting instance files /var/lib/nova/instances/47531108-4f20-41bd-8fb8-77fae3a30b85_del#033[00m
Oct  1 12:58:42 np0005464891 nova_compute[259907]: 2025-10-01 16:58:42.432 2 INFO nova.virt.libvirt.driver [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Deletion of /var/lib/nova/instances/47531108-4f20-41bd-8fb8-77fae3a30b85_del complete#033[00m
Oct  1 12:58:42 np0005464891 nova_compute[259907]: 2025-10-01 16:58:42.701 2 INFO nova.compute.manager [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Took 1.70 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 12:58:42 np0005464891 nova_compute[259907]: 2025-10-01 16:58:42.701 2 DEBUG oslo.service.loopingcall [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 12:58:42 np0005464891 nova_compute[259907]: 2025-10-01 16:58:42.701 2 DEBUG nova.compute.manager [-] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 12:58:42 np0005464891 nova_compute[259907]: 2025-10-01 16:58:42.701 2 DEBUG nova.network.neutron [-] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 12:58:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1765: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 281 MiB data, 540 MiB used, 59 GiB / 60 GiB avail; 504 KiB/s rd, 18 KiB/s wr, 88 op/s
Oct  1 12:58:44 np0005464891 nova_compute[259907]: 2025-10-01 16:58:44.144 2 DEBUG nova.network.neutron [-] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:58:44 np0005464891 nova_compute[259907]: 2025-10-01 16:58:44.190 2 INFO nova.compute.manager [-] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Took 1.49 seconds to deallocate network for instance.#033[00m
Oct  1 12:58:44 np0005464891 nova_compute[259907]: 2025-10-01 16:58:44.247 2 DEBUG nova.compute.manager [req-5d363521-0682-4057-aa3b-112edfff5965 req-02e9b90a-c327-4d6d-afcd-9f03a9e4313b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Received event network-vif-deleted-69588747-06d2-44cb-bcb8-bfa62dd280d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:58:44 np0005464891 nova_compute[259907]: 2025-10-01 16:58:44.357 2 INFO nova.compute.manager [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Took 0.17 seconds to detach 1 volumes for instance.#033[00m
Oct  1 12:58:44 np0005464891 nova_compute[259907]: 2025-10-01 16:58:44.358 2 DEBUG nova.compute.manager [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Deleting volume: e3868bbb-c720-4557-8ae5-297fa9b8743c _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Oct  1 12:58:44 np0005464891 nova_compute[259907]: 2025-10-01 16:58:44.560 2 DEBUG oslo_concurrency.lockutils [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:58:44 np0005464891 nova_compute[259907]: 2025-10-01 16:58:44.561 2 DEBUG oslo_concurrency.lockutils [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:58:44 np0005464891 nova_compute[259907]: 2025-10-01 16:58:44.622 2 DEBUG oslo_concurrency.processutils [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:58:44 np0005464891 podman[297680]: 2025-10-01 16:58:44.981233205 +0000 UTC m=+0.092559026 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  1 12:58:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:58:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2047426384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:58:45 np0005464891 nova_compute[259907]: 2025-10-01 16:58:45.079 2 DEBUG oslo_concurrency.processutils [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:58:45 np0005464891 nova_compute[259907]: 2025-10-01 16:58:45.086 2 DEBUG nova.compute.provider_tree [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:58:45 np0005464891 nova_compute[259907]: 2025-10-01 16:58:45.100 2 DEBUG nova.scheduler.client.report [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:58:45 np0005464891 nova_compute[259907]: 2025-10-01 16:58:45.152 2 DEBUG oslo_concurrency.lockutils [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:58:45 np0005464891 nova_compute[259907]: 2025-10-01 16:58:45.188 2 INFO nova.scheduler.client.report [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Deleted allocations for instance 47531108-4f20-41bd-8fb8-77fae3a30b85#033[00m
Oct  1 12:58:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:58:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1095998124' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:58:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:58:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1095998124' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:58:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1766: 321 pgs: 321 active+clean; 260 MiB data, 540 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 18 KiB/s wr, 166 op/s
Oct  1 12:58:45 np0005464891 nova_compute[259907]: 2025-10-01 16:58:45.273 2 DEBUG oslo_concurrency.lockutils [None req-85ff772b-f224-4fbf-a760-0b51d7880baa 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "47531108-4f20-41bd-8fb8-77fae3a30b85" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:58:46 np0005464891 nova_compute[259907]: 2025-10-01 16:58:46.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:46 np0005464891 nova_compute[259907]: 2025-10-01 16:58:46.343 2 DEBUG nova.compute.manager [req-000e1402-d2ad-4565-ae07-f07119f3acaf req-e4ad092e-7294-41d6-b479-a857bfb93b30 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Received event network-changed-df49da0f-d552-4921-b312-c9644f9430de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:58:46 np0005464891 nova_compute[259907]: 2025-10-01 16:58:46.344 2 DEBUG nova.compute.manager [req-000e1402-d2ad-4565-ae07-f07119f3acaf req-e4ad092e-7294-41d6-b479-a857bfb93b30 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Refreshing instance network info cache due to event network-changed-df49da0f-d552-4921-b312-c9644f9430de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:58:46 np0005464891 nova_compute[259907]: 2025-10-01 16:58:46.344 2 DEBUG oslo_concurrency.lockutils [req-000e1402-d2ad-4565-ae07-f07119f3acaf req-e4ad092e-7294-41d6-b479-a857bfb93b30 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-dd2acd48-65e4-48e1-80ae-b7404cb6fc4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:58:46 np0005464891 nova_compute[259907]: 2025-10-01 16:58:46.345 2 DEBUG oslo_concurrency.lockutils [req-000e1402-d2ad-4565-ae07-f07119f3acaf req-e4ad092e-7294-41d6-b479-a857bfb93b30 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-dd2acd48-65e4-48e1-80ae-b7404cb6fc4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:58:46 np0005464891 nova_compute[259907]: 2025-10-01 16:58:46.345 2 DEBUG nova.network.neutron [req-000e1402-d2ad-4565-ae07-f07119f3acaf req-e4ad092e-7294-41d6-b479-a857bfb93b30 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Refreshing network info cache for port df49da0f-d552-4921-b312-c9644f9430de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:58:46 np0005464891 nova_compute[259907]: 2025-10-01 16:58:46.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:58:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e439 do_prune osdmap full prune enabled
Oct  1 12:58:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1767: 321 pgs: 321 active+clean; 260 MiB data, 540 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 18 KiB/s wr, 166 op/s
Oct  1 12:58:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e440 e440: 3 total, 3 up, 3 in
Oct  1 12:58:47 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e440: 3 total, 3 up, 3 in
Oct  1 12:58:47 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Oct  1 12:58:47 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:58:47.505355) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 12:58:47 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Oct  1 12:58:47 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337927505407, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1387, "num_deletes": 261, "total_data_size": 1891936, "memory_usage": 1921624, "flush_reason": "Manual Compaction"}
Oct  1 12:58:47 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Oct  1 12:58:47 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337927756624, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1869992, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34373, "largest_seqno": 35758, "table_properties": {"data_size": 1863447, "index_size": 3684, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14323, "raw_average_key_size": 20, "raw_value_size": 1850055, "raw_average_value_size": 2609, "num_data_blocks": 163, "num_entries": 709, "num_filter_entries": 709, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759337814, "oldest_key_time": 1759337814, "file_creation_time": 1759337927, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:58:47 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 251442 microseconds, and 8778 cpu microseconds.
Oct  1 12:58:47 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:58:47 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:58:47.756793) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1869992 bytes OK
Oct  1 12:58:47 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:58:47.756850) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Oct  1 12:58:47 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:58:47.793319) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Oct  1 12:58:47 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:58:47.793371) EVENT_LOG_v1 {"time_micros": 1759337927793359, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 12:58:47 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:58:47.793400) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 12:58:47 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1885638, prev total WAL file size 1912158, number of live WAL files 2.
Oct  1 12:58:47 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:58:47 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:58:47.794994) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303035' seq:72057594037927935, type:22 .. '6C6F676D0031323537' seq:0, type:0; will stop at (end)
Oct  1 12:58:47 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 12:58:47 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1826KB)], [71(8889KB)]
Oct  1 12:58:47 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337927795076, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 10973086, "oldest_snapshot_seqno": -1}
Oct  1 12:58:48 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6552 keys, 10826339 bytes, temperature: kUnknown
Oct  1 12:58:48 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337928177614, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 10826339, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10776358, "index_size": 32525, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 165782, "raw_average_key_size": 25, "raw_value_size": 10652426, "raw_average_value_size": 1625, "num_data_blocks": 1307, "num_entries": 6552, "num_filter_entries": 6552, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759337927, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct  1 12:58:48 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 12:58:48 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:58:48.177921) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 10826339 bytes
Oct  1 12:58:48 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:58:48.305893) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 28.7 rd, 28.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 8.7 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(11.7) write-amplify(5.8) OK, records in: 7087, records dropped: 535 output_compression: NoCompression
Oct  1 12:58:48 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:58:48.305933) EVENT_LOG_v1 {"time_micros": 1759337928305919, "job": 40, "event": "compaction_finished", "compaction_time_micros": 382625, "compaction_time_cpu_micros": 40475, "output_level": 6, "num_output_files": 1, "total_output_size": 10826339, "num_input_records": 7087, "num_output_records": 6552, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 12:58:48 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:58:48 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337928306513, "job": 40, "event": "table_file_deletion", "file_number": 73}
Oct  1 12:58:48 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 12:58:48 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759337928308642, "job": 40, "event": "table_file_deletion", "file_number": 71}
Oct  1 12:58:48 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:58:47.794841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:58:48 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:58:48.308826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:58:48 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:58:48.308834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:58:48 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:58:48.308837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:58:48 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:58:48.308839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:58:48 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-16:58:48.308840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 12:58:48 np0005464891 nova_compute[259907]: 2025-10-01 16:58:48.902 2 DEBUG nova.network.neutron [req-000e1402-d2ad-4565-ae07-f07119f3acaf req-e4ad092e-7294-41d6-b479-a857bfb93b30 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Updated VIF entry in instance network info cache for port df49da0f-d552-4921-b312-c9644f9430de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:58:48 np0005464891 nova_compute[259907]: 2025-10-01 16:58:48.904 2 DEBUG nova.network.neutron [req-000e1402-d2ad-4565-ae07-f07119f3acaf req-e4ad092e-7294-41d6-b479-a857bfb93b30 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Updating instance_info_cache with network_info: [{"id": "df49da0f-d552-4921-b312-c9644f9430de", "address": "fa:16:3e:88:2a:ee", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf49da0f-d5", "ovs_interfaceid": "df49da0f-d552-4921-b312-c9644f9430de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:58:49 np0005464891 nova_compute[259907]: 2025-10-01 16:58:49.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:49.051 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:58:49 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:49.060 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 12:58:49 np0005464891 nova_compute[259907]: 2025-10-01 16:58:49.127 2 DEBUG oslo_concurrency.lockutils [req-000e1402-d2ad-4565-ae07-f07119f3acaf req-e4ad092e-7294-41d6-b479-a857bfb93b30 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-dd2acd48-65e4-48e1-80ae-b7404cb6fc4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:58:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1769: 321 pgs: 321 active+clean; 229 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 19 KiB/s wr, 177 op/s
Oct  1 12:58:49 np0005464891 podman[297707]: 2025-10-01 16:58:49.952442801 +0000 UTC m=+0.068278263 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:58:50 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:58:50.064 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:58:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e440 do_prune osdmap full prune enabled
Oct  1 12:58:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e441 e441: 3 total, 3 up, 3 in
Oct  1 12:58:50 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e441: 3 total, 3 up, 3 in
Oct  1 12:58:50 np0005464891 nova_compute[259907]: 2025-10-01 16:58:50.659 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759337915.6576917, f5c6a668-6fa1-4a25-974c-0395fc52bf1b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:58:50 np0005464891 nova_compute[259907]: 2025-10-01 16:58:50.659 2 INFO nova.compute.manager [-] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] VM Stopped (Lifecycle Event)#033[00m
Oct  1 12:58:50 np0005464891 nova_compute[259907]: 2025-10-01 16:58:50.683 2 DEBUG nova.compute.manager [None req-a27f8683-faa3-45e7-af1b-a3d8514a864b - - - - - -] [instance: f5c6a668-6fa1-4a25-974c-0395fc52bf1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:58:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1771: 321 pgs: 321 active+clean; 202 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.4 KiB/s wr, 111 op/s
Oct  1 12:58:51 np0005464891 nova_compute[259907]: 2025-10-01 16:58:51.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:51 np0005464891 nova_compute[259907]: 2025-10-01 16:58:51.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e441 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:58:52 np0005464891 podman[297727]: 2025-10-01 16:58:52.994567205 +0000 UTC m=+0.083949841 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid)
Oct  1 12:58:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1772: 321 pgs: 321 active+clean; 202 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 181 KiB/s rd, 250 KiB/s wr, 22 op/s
Oct  1 12:58:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1773: 321 pgs: 321 active+clean; 202 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 439 KiB/s rd, 2.4 MiB/s wr, 94 op/s
Oct  1 12:58:56 np0005464891 nova_compute[259907]: 2025-10-01 16:58:56.241 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759337921.2327294, 47531108-4f20-41bd-8fb8-77fae3a30b85 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:58:56 np0005464891 nova_compute[259907]: 2025-10-01 16:58:56.241 2 INFO nova.compute.manager [-] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] VM Stopped (Lifecycle Event)#033[00m
Oct  1 12:58:56 np0005464891 nova_compute[259907]: 2025-10-01 16:58:56.270 2 DEBUG nova.compute.manager [None req-de10659a-a2ab-4fd7-b061-782841db1c70 - - - - - -] [instance: 47531108-4f20-41bd-8fb8-77fae3a30b85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:58:56 np0005464891 nova_compute[259907]: 2025-10-01 16:58:56.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:56 np0005464891 ovn_controller[152409]: 2025-10-01T16:58:56Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:88:2a:ee 10.100.0.9
Oct  1 12:58:56 np0005464891 ovn_controller[152409]: 2025-10-01T16:58:56Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:2a:ee 10.100.0.9
Oct  1 12:58:56 np0005464891 nova_compute[259907]: 2025-10-01 16:58:56.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:58:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e441 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:58:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e441 do_prune osdmap full prune enabled
Oct  1 12:58:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1774: 321 pgs: 321 active+clean; 202 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 356 KiB/s rd, 1.9 MiB/s wr, 76 op/s
Oct  1 12:58:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e442 e442: 3 total, 3 up, 3 in
Oct  1 12:58:57 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e442: 3 total, 3 up, 3 in
Oct  1 12:58:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:58:58 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:58:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 12:58:58 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:58:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 12:58:59 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:58:59 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 08904871-fe2b-4b08-8a28-f37c04b24d62 does not exist
Oct  1 12:58:59 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev eec91304-ddf2-493a-b422-94388e1776a2 does not exist
Oct  1 12:58:59 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 03566f4c-497b-4372-9b90-e802b8001c8e does not exist
Oct  1 12:58:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 12:58:59 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 12:58:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 12:58:59 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:58:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 12:58:59 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 12:58:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1776: 321 pgs: 321 active+clean; 218 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 628 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Oct  1 12:58:59 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 12:58:59 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:58:59 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 12:58:59 np0005464891 podman[298021]: 2025-10-01 16:58:59.865200817 +0000 UTC m=+0.090355876 container create 877f816c8cf99f2ebb007efc45720e0b29e66d488a825e55c7e7064e80dec43c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mcclintock, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 12:58:59 np0005464891 podman[298021]: 2025-10-01 16:58:59.807716663 +0000 UTC m=+0.032871782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:58:59 np0005464891 systemd[1]: Started libpod-conmon-877f816c8cf99f2ebb007efc45720e0b29e66d488a825e55c7e7064e80dec43c.scope.
Oct  1 12:58:59 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:58:59 np0005464891 podman[298021]: 2025-10-01 16:58:59.986798139 +0000 UTC m=+0.211953188 container init 877f816c8cf99f2ebb007efc45720e0b29e66d488a825e55c7e7064e80dec43c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 12:58:59 np0005464891 podman[298021]: 2025-10-01 16:58:59.994677635 +0000 UTC m=+0.219832654 container start 877f816c8cf99f2ebb007efc45720e0b29e66d488a825e55c7e7064e80dec43c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mcclintock, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  1 12:59:00 np0005464891 vigorous_mcclintock[298039]: 167 167
Oct  1 12:59:00 np0005464891 systemd[1]: libpod-877f816c8cf99f2ebb007efc45720e0b29e66d488a825e55c7e7064e80dec43c.scope: Deactivated successfully.
Oct  1 12:59:00 np0005464891 podman[298021]: 2025-10-01 16:59:00.003098936 +0000 UTC m=+0.228254045 container attach 877f816c8cf99f2ebb007efc45720e0b29e66d488a825e55c7e7064e80dec43c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Oct  1 12:59:00 np0005464891 podman[298021]: 2025-10-01 16:59:00.003654701 +0000 UTC m=+0.228809760 container died 877f816c8cf99f2ebb007efc45720e0b29e66d488a825e55c7e7064e80dec43c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  1 12:59:00 np0005464891 systemd[1]: var-lib-containers-storage-overlay-73cb3dfa35d04568ffbab939e934d69f6fce40ee1ee8301bf3ed8b64acf0c355-merged.mount: Deactivated successfully.
Oct  1 12:59:00 np0005464891 podman[298021]: 2025-10-01 16:59:00.062409611 +0000 UTC m=+0.287564640 container remove 877f816c8cf99f2ebb007efc45720e0b29e66d488a825e55c7e7064e80dec43c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct  1 12:59:00 np0005464891 systemd[1]: libpod-conmon-877f816c8cf99f2ebb007efc45720e0b29e66d488a825e55c7e7064e80dec43c.scope: Deactivated successfully.
Oct  1 12:59:00 np0005464891 podman[298063]: 2025-10-01 16:59:00.253506986 +0000 UTC m=+0.048431948 container create 4d16bc88e1a31046e60e5ccc79fa456ff6a77de1bb77f897780831f2973e8d7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 12:59:00 np0005464891 systemd[1]: Started libpod-conmon-4d16bc88e1a31046e60e5ccc79fa456ff6a77de1bb77f897780831f2973e8d7f.scope.
Oct  1 12:59:00 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:59:00 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d776684a9f50a0303db217d04532d73d1e102aa4dc285333664a157566d0f661/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:59:00 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d776684a9f50a0303db217d04532d73d1e102aa4dc285333664a157566d0f661/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:59:00 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d776684a9f50a0303db217d04532d73d1e102aa4dc285333664a157566d0f661/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:59:00 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d776684a9f50a0303db217d04532d73d1e102aa4dc285333664a157566d0f661/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:59:00 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d776684a9f50a0303db217d04532d73d1e102aa4dc285333664a157566d0f661/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 12:59:00 np0005464891 podman[298063]: 2025-10-01 16:59:00.233033975 +0000 UTC m=+0.027958957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:59:00 np0005464891 podman[298063]: 2025-10-01 16:59:00.348159419 +0000 UTC m=+0.143084421 container init 4d16bc88e1a31046e60e5ccc79fa456ff6a77de1bb77f897780831f2973e8d7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cerf, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:59:00 np0005464891 podman[298063]: 2025-10-01 16:59:00.35512194 +0000 UTC m=+0.150046922 container start 4d16bc88e1a31046e60e5ccc79fa456ff6a77de1bb77f897780831f2973e8d7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cerf, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 12:59:00 np0005464891 podman[298063]: 2025-10-01 16:59:00.391301321 +0000 UTC m=+0.186226283 container attach 4d16bc88e1a31046e60e5ccc79fa456ff6a77de1bb77f897780831f2973e8d7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cerf, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 12:59:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1777: 321 pgs: 321 active+clean; 271 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 651 KiB/s rd, 7.0 MiB/s wr, 114 op/s
Oct  1 12:59:01 np0005464891 nova_compute[259907]: 2025-10-01 16:59:01.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:01 np0005464891 compassionate_cerf[298080]: --> passed data devices: 0 physical, 3 LVM
Oct  1 12:59:01 np0005464891 compassionate_cerf[298080]: --> relative data size: 1.0
Oct  1 12:59:01 np0005464891 compassionate_cerf[298080]: --> All data devices are unavailable
Oct  1 12:59:01 np0005464891 systemd[1]: libpod-4d16bc88e1a31046e60e5ccc79fa456ff6a77de1bb77f897780831f2973e8d7f.scope: Deactivated successfully.
Oct  1 12:59:01 np0005464891 podman[298063]: 2025-10-01 16:59:01.520686242 +0000 UTC m=+1.315611244 container died 4d16bc88e1a31046e60e5ccc79fa456ff6a77de1bb77f897780831f2973e8d7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cerf, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 12:59:01 np0005464891 systemd[1]: libpod-4d16bc88e1a31046e60e5ccc79fa456ff6a77de1bb77f897780831f2973e8d7f.scope: Consumed 1.112s CPU time.
Oct  1 12:59:01 np0005464891 systemd[1]: var-lib-containers-storage-overlay-d776684a9f50a0303db217d04532d73d1e102aa4dc285333664a157566d0f661-merged.mount: Deactivated successfully.
Oct  1 12:59:01 np0005464891 podman[298063]: 2025-10-01 16:59:01.654162999 +0000 UTC m=+1.449087961 container remove 4d16bc88e1a31046e60e5ccc79fa456ff6a77de1bb77f897780831f2973e8d7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cerf, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 12:59:01 np0005464891 systemd[1]: libpod-conmon-4d16bc88e1a31046e60e5ccc79fa456ff6a77de1bb77f897780831f2973e8d7f.scope: Deactivated successfully.
Oct  1 12:59:01 np0005464891 nova_compute[259907]: 2025-10-01 16:59:01.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e442 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:59:02 np0005464891 podman[298264]: 2025-10-01 16:59:02.399018256 +0000 UTC m=+0.091780946 container create 49165c7c7339a195008e571a7ee8fb109b5c5647ff4322ed5948634fe711396d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 12:59:02 np0005464891 podman[298264]: 2025-10-01 16:59:02.334076197 +0000 UTC m=+0.026838907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:59:02 np0005464891 systemd[1]: Started libpod-conmon-49165c7c7339a195008e571a7ee8fb109b5c5647ff4322ed5948634fe711396d.scope.
Oct  1 12:59:02 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:59:02 np0005464891 podman[298264]: 2025-10-01 16:59:02.573750483 +0000 UTC m=+0.266513273 container init 49165c7c7339a195008e571a7ee8fb109b5c5647ff4322ed5948634fe711396d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 12:59:02 np0005464891 podman[298264]: 2025-10-01 16:59:02.581553737 +0000 UTC m=+0.274316427 container start 49165c7c7339a195008e571a7ee8fb109b5c5647ff4322ed5948634fe711396d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  1 12:59:02 np0005464891 gifted_pare[298280]: 167 167
Oct  1 12:59:02 np0005464891 systemd[1]: libpod-49165c7c7339a195008e571a7ee8fb109b5c5647ff4322ed5948634fe711396d.scope: Deactivated successfully.
Oct  1 12:59:02 np0005464891 podman[298264]: 2025-10-01 16:59:02.67287967 +0000 UTC m=+0.365642360 container attach 49165c7c7339a195008e571a7ee8fb109b5c5647ff4322ed5948634fe711396d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 12:59:02 np0005464891 podman[298264]: 2025-10-01 16:59:02.674081682 +0000 UTC m=+0.366844372 container died 49165c7c7339a195008e571a7ee8fb109b5c5647ff4322ed5948634fe711396d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 12:59:02 np0005464891 systemd[1]: var-lib-containers-storage-overlay-fa625846bd55d2948ed89fa10cd81d2adc054802f641546e4bf7632b426114c0-merged.mount: Deactivated successfully.
Oct  1 12:59:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1778: 321 pgs: 321 active+clean; 271 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 513 KiB/s rd, 6.8 MiB/s wr, 109 op/s
Oct  1 12:59:03 np0005464891 podman[298264]: 2025-10-01 16:59:03.920247553 +0000 UTC m=+1.613010273 container remove 49165c7c7339a195008e571a7ee8fb109b5c5647ff4322ed5948634fe711396d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:59:03 np0005464891 systemd[1]: libpod-conmon-49165c7c7339a195008e571a7ee8fb109b5c5647ff4322ed5948634fe711396d.scope: Deactivated successfully.
Oct  1 12:59:04 np0005464891 podman[298299]: 2025-10-01 16:59:04.12124685 +0000 UTC m=+0.075613153 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent)
Oct  1 12:59:04 np0005464891 podman[298315]: 2025-10-01 16:59:04.238593103 +0000 UTC m=+0.137556999 container create 5196842f49af44901aec3ae1e0eedce8a8cbdb5562bb19f8a064ba167ef504a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_colden, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:59:04 np0005464891 podman[298315]: 2025-10-01 16:59:04.147271592 +0000 UTC m=+0.046235468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:59:04 np0005464891 systemd[1]: Started libpod-conmon-5196842f49af44901aec3ae1e0eedce8a8cbdb5562bb19f8a064ba167ef504a5.scope.
Oct  1 12:59:04 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:59:04 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc042410b19d3ca922de901f14399ff224c81f751d235c9124eb83fc5f72044/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:59:04 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc042410b19d3ca922de901f14399ff224c81f751d235c9124eb83fc5f72044/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:59:04 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc042410b19d3ca922de901f14399ff224c81f751d235c9124eb83fc5f72044/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:59:04 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc042410b19d3ca922de901f14399ff224c81f751d235c9124eb83fc5f72044/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:59:04 np0005464891 podman[298315]: 2025-10-01 16:59:04.573647133 +0000 UTC m=+0.472611069 container init 5196842f49af44901aec3ae1e0eedce8a8cbdb5562bb19f8a064ba167ef504a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 12:59:04 np0005464891 podman[298315]: 2025-10-01 16:59:04.583482683 +0000 UTC m=+0.482446539 container start 5196842f49af44901aec3ae1e0eedce8a8cbdb5562bb19f8a064ba167ef504a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_colden, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 12:59:04 np0005464891 podman[298315]: 2025-10-01 16:59:04.774121476 +0000 UTC m=+0.673085432 container attach 5196842f49af44901aec3ae1e0eedce8a8cbdb5562bb19f8a064ba167ef504a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_colden, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:59:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1779: 321 pgs: 321 active+clean; 271 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.0 MiB/s wr, 59 op/s
Oct  1 12:59:05 np0005464891 youthful_colden[298337]: {
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:    "0": [
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:        {
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "devices": [
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "/dev/loop3"
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            ],
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "lv_name": "ceph_lv0",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "lv_size": "21470642176",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "name": "ceph_lv0",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "tags": {
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.cluster_name": "ceph",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.crush_device_class": "",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.encrypted": "0",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.osd_id": "0",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.type": "block",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.vdo": "0"
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            },
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "type": "block",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "vg_name": "ceph_vg0"
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:        }
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:    ],
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:    "1": [
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:        {
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "devices": [
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "/dev/loop4"
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            ],
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "lv_name": "ceph_lv1",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "lv_size": "21470642176",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "name": "ceph_lv1",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "tags": {
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.cluster_name": "ceph",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.crush_device_class": "",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.encrypted": "0",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.osd_id": "1",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.type": "block",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.vdo": "0"
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            },
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "type": "block",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "vg_name": "ceph_vg1"
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:        }
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:    ],
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:    "2": [
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:        {
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "devices": [
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "/dev/loop5"
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            ],
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "lv_name": "ceph_lv2",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "lv_size": "21470642176",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "name": "ceph_lv2",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "tags": {
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.cephx_lockbox_secret": "",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.cluster_name": "ceph",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.crush_device_class": "",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.encrypted": "0",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.osd_id": "2",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.type": "block",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:                "ceph.vdo": "0"
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            },
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "type": "block",
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:            "vg_name": "ceph_vg2"
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:        }
Oct  1 12:59:05 np0005464891 youthful_colden[298337]:    ]
Oct  1 12:59:05 np0005464891 youthful_colden[298337]: }
Oct  1 12:59:05 np0005464891 systemd[1]: libpod-5196842f49af44901aec3ae1e0eedce8a8cbdb5562bb19f8a064ba167ef504a5.scope: Deactivated successfully.
Oct  1 12:59:05 np0005464891 podman[298315]: 2025-10-01 16:59:05.495680504 +0000 UTC m=+1.394644400 container died 5196842f49af44901aec3ae1e0eedce8a8cbdb5562bb19f8a064ba167ef504a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_colden, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 12:59:05 np0005464891 systemd[1]: var-lib-containers-storage-overlay-9fc042410b19d3ca922de901f14399ff224c81f751d235c9124eb83fc5f72044-merged.mount: Deactivated successfully.
Oct  1 12:59:06 np0005464891 podman[298315]: 2025-10-01 16:59:06.278045117 +0000 UTC m=+2.177009003 container remove 5196842f49af44901aec3ae1e0eedce8a8cbdb5562bb19f8a064ba167ef504a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_colden, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 12:59:06 np0005464891 nova_compute[259907]: 2025-10-01 16:59:06.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:06 np0005464891 systemd[1]: libpod-conmon-5196842f49af44901aec3ae1e0eedce8a8cbdb5562bb19f8a064ba167ef504a5.scope: Deactivated successfully.
Oct  1 12:59:06 np0005464891 nova_compute[259907]: 2025-10-01 16:59:06.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:07 np0005464891 podman[298497]: 2025-10-01 16:59:06.986878808 +0000 UTC m=+0.048582393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:59:07 np0005464891 podman[298497]: 2025-10-01 16:59:07.150905191 +0000 UTC m=+0.212608696 container create 8e6441823df8504297a00909c4b33066d61c6771c3a3031821be2948e49965e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  1 12:59:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e442 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:59:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1780: 321 pgs: 321 active+clean; 271 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.0 MiB/s wr, 59 op/s
Oct  1 12:59:07 np0005464891 systemd[1]: Started libpod-conmon-8e6441823df8504297a00909c4b33066d61c6771c3a3031821be2948e49965e2.scope.
Oct  1 12:59:07 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:59:07 np0005464891 podman[298497]: 2025-10-01 16:59:07.651939378 +0000 UTC m=+0.713642893 container init 8e6441823df8504297a00909c4b33066d61c6771c3a3031821be2948e49965e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 12:59:07 np0005464891 podman[298497]: 2025-10-01 16:59:07.664497851 +0000 UTC m=+0.726201376 container start 8e6441823df8504297a00909c4b33066d61c6771c3a3031821be2948e49965e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 12:59:07 np0005464891 reverent_mestorf[298513]: 167 167
Oct  1 12:59:07 np0005464891 systemd[1]: libpod-8e6441823df8504297a00909c4b33066d61c6771c3a3031821be2948e49965e2.scope: Deactivated successfully.
Oct  1 12:59:07 np0005464891 podman[298497]: 2025-10-01 16:59:07.676909292 +0000 UTC m=+0.738612837 container attach 8e6441823df8504297a00909c4b33066d61c6771c3a3031821be2948e49965e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mestorf, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:59:07 np0005464891 podman[298497]: 2025-10-01 16:59:07.677658662 +0000 UTC m=+0.739362167 container died 8e6441823df8504297a00909c4b33066d61c6771c3a3031821be2948e49965e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mestorf, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 12:59:08 np0005464891 systemd[1]: var-lib-containers-storage-overlay-1358b0d27fe3d64f75798686d13d943ea8acb0eec156da425fe43249657aac00-merged.mount: Deactivated successfully.
Oct  1 12:59:08 np0005464891 podman[298497]: 2025-10-01 16:59:08.470332199 +0000 UTC m=+1.532035704 container remove 8e6441823df8504297a00909c4b33066d61c6771c3a3031821be2948e49965e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  1 12:59:08 np0005464891 systemd[1]: libpod-conmon-8e6441823df8504297a00909c4b33066d61c6771c3a3031821be2948e49965e2.scope: Deactivated successfully.
Oct  1 12:59:08 np0005464891 podman[298537]: 2025-10-01 16:59:08.683139749 +0000 UTC m=+0.035743990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 12:59:09 np0005464891 podman[298537]: 2025-10-01 16:59:09.053904376 +0000 UTC m=+0.406508527 container create 0f535f7f5c7506d98e8898358f6c0c542655b04c45955caaf53786e86b0adcb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brown, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 12:59:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1781: 321 pgs: 321 active+clean; 271 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.2 MiB/s wr, 60 op/s
Oct  1 12:59:09 np0005464891 systemd[1]: Started libpod-conmon-0f535f7f5c7506d98e8898358f6c0c542655b04c45955caaf53786e86b0adcb9.scope.
Oct  1 12:59:09 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:59:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d65d2a56b756f3d5eed37f68226eef18d2a954c937f5cb828df35210b204d4ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 12:59:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d65d2a56b756f3d5eed37f68226eef18d2a954c937f5cb828df35210b204d4ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 12:59:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d65d2a56b756f3d5eed37f68226eef18d2a954c937f5cb828df35210b204d4ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 12:59:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d65d2a56b756f3d5eed37f68226eef18d2a954c937f5cb828df35210b204d4ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 12:59:09 np0005464891 podman[298537]: 2025-10-01 16:59:09.955099146 +0000 UTC m=+1.307703317 container init 0f535f7f5c7506d98e8898358f6c0c542655b04c45955caaf53786e86b0adcb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct  1 12:59:09 np0005464891 podman[298537]: 2025-10-01 16:59:09.962749886 +0000 UTC m=+1.315354067 container start 0f535f7f5c7506d98e8898358f6c0c542655b04c45955caaf53786e86b0adcb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 12:59:10 np0005464891 podman[298537]: 2025-10-01 16:59:10.4041951 +0000 UTC m=+1.756799251 container attach 0f535f7f5c7506d98e8898358f6c0c542655b04c45955caaf53786e86b0adcb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brown, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 12:59:10 np0005464891 confident_brown[298555]: {
Oct  1 12:59:10 np0005464891 confident_brown[298555]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 12:59:10 np0005464891 confident_brown[298555]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:59:10 np0005464891 confident_brown[298555]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 12:59:10 np0005464891 confident_brown[298555]:        "osd_id": 2,
Oct  1 12:59:10 np0005464891 confident_brown[298555]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 12:59:10 np0005464891 confident_brown[298555]:        "type": "bluestore"
Oct  1 12:59:10 np0005464891 confident_brown[298555]:    },
Oct  1 12:59:10 np0005464891 confident_brown[298555]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 12:59:10 np0005464891 confident_brown[298555]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:59:10 np0005464891 confident_brown[298555]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 12:59:10 np0005464891 confident_brown[298555]:        "osd_id": 0,
Oct  1 12:59:10 np0005464891 confident_brown[298555]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 12:59:10 np0005464891 confident_brown[298555]:        "type": "bluestore"
Oct  1 12:59:10 np0005464891 confident_brown[298555]:    },
Oct  1 12:59:10 np0005464891 confident_brown[298555]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 12:59:10 np0005464891 confident_brown[298555]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 12:59:10 np0005464891 confident_brown[298555]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 12:59:10 np0005464891 confident_brown[298555]:        "osd_id": 1,
Oct  1 12:59:10 np0005464891 confident_brown[298555]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 12:59:10 np0005464891 confident_brown[298555]:        "type": "bluestore"
Oct  1 12:59:10 np0005464891 confident_brown[298555]:    }
Oct  1 12:59:10 np0005464891 confident_brown[298555]: }
Oct  1 12:59:10 np0005464891 systemd[1]: libpod-0f535f7f5c7506d98e8898358f6c0c542655b04c45955caaf53786e86b0adcb9.scope: Deactivated successfully.
Oct  1 12:59:10 np0005464891 podman[298537]: 2025-10-01 16:59:10.988006435 +0000 UTC m=+2.340610586 container died 0f535f7f5c7506d98e8898358f6c0c542655b04c45955caaf53786e86b0adcb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brown, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:59:10 np0005464891 systemd[1]: libpod-0f535f7f5c7506d98e8898358f6c0c542655b04c45955caaf53786e86b0adcb9.scope: Consumed 1.030s CPU time.
Oct  1 12:59:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1782: 321 pgs: 321 active+clean; 271 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.0 MiB/s wr, 48 op/s
Oct  1 12:59:11 np0005464891 nova_compute[259907]: 2025-10-01 16:59:11.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:11 np0005464891 nova_compute[259907]: 2025-10-01 16:59:11.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:11 np0005464891 systemd[1]: var-lib-containers-storage-overlay-d65d2a56b756f3d5eed37f68226eef18d2a954c937f5cb828df35210b204d4ba-merged.mount: Deactivated successfully.
Oct  1 12:59:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:59:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:59:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:59:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:59:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:59:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:59:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_16:59:12
Oct  1 12:59:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 12:59:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 12:59:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', '.mgr', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes']
Oct  1 12:59:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 12:59:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:12.462 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:59:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:12.463 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:59:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:12.463 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:59:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 12:59:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:59:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 12:59:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 12:59:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:59:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 12:59:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:59:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:59:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 12:59:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 12:59:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e442 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:59:12 np0005464891 podman[298537]: 2025-10-01 16:59:12.801135807 +0000 UTC m=+4.153739998 container remove 0f535f7f5c7506d98e8898358f6c0c542655b04c45955caaf53786e86b0adcb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 12:59:12 np0005464891 systemd[1]: libpod-conmon-0f535f7f5c7506d98e8898358f6c0c542655b04c45955caaf53786e86b0adcb9.scope: Deactivated successfully.
Oct  1 12:59:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 12:59:12 np0005464891 ceph-mgr[74592]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2433011577
Oct  1 12:59:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1783: 321 pgs: 321 active+clean; 271 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.8 KiB/s wr, 24 op/s
Oct  1 12:59:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:59:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 12:59:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:59:14 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev cd93cf80-45c0-4b9c-835b-0dd24133d685 does not exist
Oct  1 12:59:14 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 46884e36-70e3-4342-ba5c-325921b7ce77 does not exist
Oct  1 12:59:14 np0005464891 ovn_controller[152409]: 2025-10-01T16:59:14Z|00202|memory_trim|INFO|Detected inactivity (last active 30020 ms ago): trimming memory
Oct  1 12:59:14 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:59:14 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 12:59:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1784: 321 pgs: 321 active+clean; 309 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 35 op/s
Oct  1 12:59:16 np0005464891 podman[298652]: 2025-10-01 16:59:16.014019079 +0000 UTC m=+0.121668875 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  1 12:59:16 np0005464891 nova_compute[259907]: 2025-10-01 16:59:16.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:16 np0005464891 nova_compute[259907]: 2025-10-01 16:59:16.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1785: 321 pgs: 321 active+clean; 309 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.7 MiB/s wr, 29 op/s
Oct  1 12:59:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e442 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:59:18 np0005464891 nova_compute[259907]: 2025-10-01 16:59:18.451 2 DEBUG oslo_concurrency.lockutils [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "dd2acd48-65e4-48e1-80ae-b7404cb6fc4e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:59:18 np0005464891 nova_compute[259907]: 2025-10-01 16:59:18.452 2 DEBUG oslo_concurrency.lockutils [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "dd2acd48-65e4-48e1-80ae-b7404cb6fc4e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:59:18 np0005464891 nova_compute[259907]: 2025-10-01 16:59:18.452 2 DEBUG oslo_concurrency.lockutils [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "dd2acd48-65e4-48e1-80ae-b7404cb6fc4e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:59:18 np0005464891 nova_compute[259907]: 2025-10-01 16:59:18.452 2 DEBUG oslo_concurrency.lockutils [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "dd2acd48-65e4-48e1-80ae-b7404cb6fc4e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:59:18 np0005464891 nova_compute[259907]: 2025-10-01 16:59:18.452 2 DEBUG oslo_concurrency.lockutils [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "dd2acd48-65e4-48e1-80ae-b7404cb6fc4e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:59:18 np0005464891 nova_compute[259907]: 2025-10-01 16:59:18.454 2 INFO nova.compute.manager [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Terminating instance#033[00m
Oct  1 12:59:18 np0005464891 nova_compute[259907]: 2025-10-01 16:59:18.456 2 DEBUG nova.compute.manager [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 12:59:18 np0005464891 kernel: tapdf49da0f-d5 (unregistering): left promiscuous mode
Oct  1 12:59:18 np0005464891 NetworkManager[44940]: <info>  [1759337958.9052] device (tapdf49da0f-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 12:59:18 np0005464891 ovn_controller[152409]: 2025-10-01T16:59:18Z|00203|binding|INFO|Releasing lport df49da0f-d552-4921-b312-c9644f9430de from this chassis (sb_readonly=0)
Oct  1 12:59:18 np0005464891 ovn_controller[152409]: 2025-10-01T16:59:18Z|00204|binding|INFO|Setting lport df49da0f-d552-4921-b312-c9644f9430de down in Southbound
Oct  1 12:59:18 np0005464891 nova_compute[259907]: 2025-10-01 16:59:18.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:18 np0005464891 ovn_controller[152409]: 2025-10-01T16:59:18Z|00205|binding|INFO|Removing iface tapdf49da0f-d5 ovn-installed in OVS
Oct  1 12:59:18 np0005464891 nova_compute[259907]: 2025-10-01 16:59:18.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:18 np0005464891 nova_compute[259907]: 2025-10-01 16:59:18.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:19 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:19.018 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:2a:ee 10.100.0.9'], port_security=['fa:16:3e:88:2a:ee 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dd2acd48-65e4-48e1-80ae-b7404cb6fc4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2345ad6b-d676-4546-a17e-6f7405ff5f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bb5e44f7928546dfb674d53cd3727027', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c51767f2-742e-4209-a278-1c1f1e9af624', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08e741b0-61e8-4126-b98f-610a01494f2d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=df49da0f-d552-4921-b312-c9644f9430de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:59:19 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:19.019 162546 INFO neutron.agent.ovn.metadata.agent [-] Port df49da0f-d552-4921-b312-c9644f9430de in datapath 2345ad6b-d676-4546-a17e-6f7405ff5f24 unbound from our chassis#033[00m
Oct  1 12:59:19 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:19.021 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2345ad6b-d676-4546-a17e-6f7405ff5f24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 12:59:19 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:19.023 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[e8cdba3a-2558-423d-8397-8ee6f027982b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:19 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:19.024 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24 namespace which is not needed anymore#033[00m
Oct  1 12:59:19 np0005464891 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Deactivated successfully.
Oct  1 12:59:19 np0005464891 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Consumed 17.049s CPU time.
Oct  1 12:59:19 np0005464891 systemd-machined[214891]: Machine qemu-21-instance-00000015 terminated.
Oct  1 12:59:19 np0005464891 nova_compute[259907]: 2025-10-01 16:59:19.101 2 INFO nova.virt.libvirt.driver [-] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Instance destroyed successfully.#033[00m
Oct  1 12:59:19 np0005464891 nova_compute[259907]: 2025-10-01 16:59:19.103 2 DEBUG nova.objects.instance [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lazy-loading 'resources' on Instance uuid dd2acd48-65e4-48e1-80ae-b7404cb6fc4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:59:19 np0005464891 nova_compute[259907]: 2025-10-01 16:59:19.179 2 DEBUG nova.virt.libvirt.vif [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T16:58:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-233020751',display_name='tempest-TestEncryptedCinderVolumes-server-233020751',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-233020751',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIhJiMuVwk4EQ7wkCYcLaeTsPomALwyR3FBK+97oa6ynrLvPrKJKnE71uKm0O/hFbPLnI7X22RnrmUili5anoyjadz+yIM+FZfOiuxhlfC8kCRP4tSOOTh7DLMRl7W7xOg==',key_name='tempest-TestEncryptedCinderVolumes-620996693',keypairs=<?>,launch_index=0,launched_at=2025-10-01T16:58:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bb5e44f7928546dfb674d53cd3727027',ramdisk_id='',reservation_id='r-ee895gd2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-803701988',owner_user_name='tempest-TestEncryptedCinderVolumes-803701988-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T16:58:41Z,user_data=None,user_id='906d3d29e27b49c1860f5397c6028d96',uuid=dd2acd48-65e4-48e1-80ae-b7404cb6fc4e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df49da0f-d552-4921-b312-c9644f9430de", "address": "fa:16:3e:88:2a:ee", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf49da0f-d5", "ovs_interfaceid": "df49da0f-d552-4921-b312-c9644f9430de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 12:59:19 np0005464891 nova_compute[259907]: 2025-10-01 16:59:19.180 2 DEBUG nova.network.os_vif_util [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Converting VIF {"id": "df49da0f-d552-4921-b312-c9644f9430de", "address": "fa:16:3e:88:2a:ee", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf49da0f-d5", "ovs_interfaceid": "df49da0f-d552-4921-b312-c9644f9430de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:59:19 np0005464891 nova_compute[259907]: 2025-10-01 16:59:19.181 2 DEBUG nova.network.os_vif_util [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:88:2a:ee,bridge_name='br-int',has_traffic_filtering=True,id=df49da0f-d552-4921-b312-c9644f9430de,network=Network(2345ad6b-d676-4546-a17e-6f7405ff5f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf49da0f-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:59:19 np0005464891 nova_compute[259907]: 2025-10-01 16:59:19.181 2 DEBUG os_vif [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:2a:ee,bridge_name='br-int',has_traffic_filtering=True,id=df49da0f-d552-4921-b312-c9644f9430de,network=Network(2345ad6b-d676-4546-a17e-6f7405ff5f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf49da0f-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 12:59:19 np0005464891 nova_compute[259907]: 2025-10-01 16:59:19.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:19 np0005464891 nova_compute[259907]: 2025-10-01 16:59:19.183 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf49da0f-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:59:19 np0005464891 nova_compute[259907]: 2025-10-01 16:59:19.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:19 np0005464891 nova_compute[259907]: 2025-10-01 16:59:19.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:59:19 np0005464891 nova_compute[259907]: 2025-10-01 16:59:19.190 2 INFO os_vif [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:2a:ee,bridge_name='br-int',has_traffic_filtering=True,id=df49da0f-d552-4921-b312-c9644f9430de,network=Network(2345ad6b-d676-4546-a17e-6f7405ff5f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf49da0f-d5')#033[00m
Oct  1 12:59:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1786: 321 pgs: 321 active+clean; 317 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct  1 12:59:19 np0005464891 nova_compute[259907]: 2025-10-01 16:59:19.414 2 DEBUG nova.compute.manager [req-d4578ad3-025a-4b4d-be65-93a58b406842 req-f56c6429-867c-4630-bf96-3e87d1b3608c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Received event network-vif-unplugged-df49da0f-d552-4921-b312-c9644f9430de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:59:19 np0005464891 nova_compute[259907]: 2025-10-01 16:59:19.415 2 DEBUG oslo_concurrency.lockutils [req-d4578ad3-025a-4b4d-be65-93a58b406842 req-f56c6429-867c-4630-bf96-3e87d1b3608c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "dd2acd48-65e4-48e1-80ae-b7404cb6fc4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:59:19 np0005464891 nova_compute[259907]: 2025-10-01 16:59:19.415 2 DEBUG oslo_concurrency.lockutils [req-d4578ad3-025a-4b4d-be65-93a58b406842 req-f56c6429-867c-4630-bf96-3e87d1b3608c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "dd2acd48-65e4-48e1-80ae-b7404cb6fc4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:59:19 np0005464891 nova_compute[259907]: 2025-10-01 16:59:19.415 2 DEBUG oslo_concurrency.lockutils [req-d4578ad3-025a-4b4d-be65-93a58b406842 req-f56c6429-867c-4630-bf96-3e87d1b3608c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "dd2acd48-65e4-48e1-80ae-b7404cb6fc4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:59:19 np0005464891 nova_compute[259907]: 2025-10-01 16:59:19.416 2 DEBUG nova.compute.manager [req-d4578ad3-025a-4b4d-be65-93a58b406842 req-f56c6429-867c-4630-bf96-3e87d1b3608c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] No waiting events found dispatching network-vif-unplugged-df49da0f-d552-4921-b312-c9644f9430de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:59:19 np0005464891 nova_compute[259907]: 2025-10-01 16:59:19.416 2 DEBUG nova.compute.manager [req-d4578ad3-025a-4b4d-be65-93a58b406842 req-f56c6429-867c-4630-bf96-3e87d1b3608c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Received event network-vif-unplugged-df49da0f-d552-4921-b312-c9644f9430de for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 12:59:19 np0005464891 neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24[297518]: [NOTICE]   (297522) : haproxy version is 2.8.14-c23fe91
Oct  1 12:59:19 np0005464891 neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24[297518]: [NOTICE]   (297522) : path to executable is /usr/sbin/haproxy
Oct  1 12:59:19 np0005464891 neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24[297518]: [WARNING]  (297522) : Exiting Master process...
Oct  1 12:59:19 np0005464891 neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24[297518]: [ALERT]    (297522) : Current worker (297524) exited with code 143 (Terminated)
Oct  1 12:59:19 np0005464891 neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24[297518]: [WARNING]  (297522) : All workers exited. Exiting... (0)
Oct  1 12:59:19 np0005464891 systemd[1]: libpod-a788e433c5ac2182c1cad47249c1fe398f32d1045fc74f47855a6c075c245faf.scope: Deactivated successfully.
Oct  1 12:59:19 np0005464891 podman[298712]: 2025-10-01 16:59:19.469299371 +0000 UTC m=+0.329610611 container died a788e433c5ac2182c1cad47249c1fe398f32d1045fc74f47855a6c075c245faf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:59:19 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a788e433c5ac2182c1cad47249c1fe398f32d1045fc74f47855a6c075c245faf-userdata-shm.mount: Deactivated successfully.
Oct  1 12:59:19 np0005464891 systemd[1]: var-lib-containers-storage-overlay-2e6783cd79457b80c9a298bd1072d9419f2e346539466b78ea67994ff0f3722b-merged.mount: Deactivated successfully.
Oct  1 12:59:20 np0005464891 podman[298712]: 2025-10-01 16:59:20.581631945 +0000 UTC m=+1.441943175 container cleanup a788e433c5ac2182c1cad47249c1fe398f32d1045fc74f47855a6c075c245faf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  1 12:59:20 np0005464891 systemd[1]: libpod-conmon-a788e433c5ac2182c1cad47249c1fe398f32d1045fc74f47855a6c075c245faf.scope: Deactivated successfully.
Oct  1 12:59:20 np0005464891 podman[298759]: 2025-10-01 16:59:20.957026399 +0000 UTC m=+0.334022101 container remove a788e433c5ac2182c1cad47249c1fe398f32d1045fc74f47855a6c075c245faf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:59:20 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:20.968 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[1a20b326-3f1e-4200-a856-23240e7736a2]: (4, ('Wed Oct  1 04:59:19 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24 (a788e433c5ac2182c1cad47249c1fe398f32d1045fc74f47855a6c075c245faf)\na788e433c5ac2182c1cad47249c1fe398f32d1045fc74f47855a6c075c245faf\nWed Oct  1 04:59:20 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24 (a788e433c5ac2182c1cad47249c1fe398f32d1045fc74f47855a6c075c245faf)\na788e433c5ac2182c1cad47249c1fe398f32d1045fc74f47855a6c075c245faf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:20 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:20.970 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec7c23f-acba-47ca-bd9a-874775e9f654]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:20 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:20.971 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2345ad6b-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:59:20 np0005464891 nova_compute[259907]: 2025-10-01 16:59:20.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:20 np0005464891 kernel: tap2345ad6b-d0: left promiscuous mode
Oct  1 12:59:21 np0005464891 nova_compute[259907]: 2025-10-01 16:59:21.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:21 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:21.005 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[b3fc8d11-46c1-4385-b6c0-e133f6a11a00]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:21 np0005464891 podman[298760]: 2025-10-01 16:59:21.017068735 +0000 UTC m=+0.375180030 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  1 12:59:21 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:21.042 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[09a72fc8-e163-4602-a4fd-2f9ebb46fa67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:21 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:21.044 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[43d072b0-d930-42c4-88f4-c39cd8cb9ed9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:21 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:21.066 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[dd9f94cf-030d-4645-a73f-8d7b5dfeaf28]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486610, 'reachable_time': 22147, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298794, 'error': None, 'target': 'ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:21 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:21.070 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 12:59:21 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:21.071 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[c41d8f74-1f47-45f0-9a73-75763d34f9e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:21 np0005464891 systemd[1]: run-netns-ovnmeta\x2d2345ad6b\x2dd676\x2d4546\x2da17e\x2d6f7405ff5f24.mount: Deactivated successfully.
Oct  1 12:59:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1787: 321 pgs: 321 active+clean; 317 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  1 12:59:21 np0005464891 nova_compute[259907]: 2025-10-01 16:59:21.351 2 INFO nova.virt.libvirt.driver [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Deleting instance files /var/lib/nova/instances/dd2acd48-65e4-48e1-80ae-b7404cb6fc4e_del#033[00m
Oct  1 12:59:21 np0005464891 nova_compute[259907]: 2025-10-01 16:59:21.353 2 INFO nova.virt.libvirt.driver [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Deletion of /var/lib/nova/instances/dd2acd48-65e4-48e1-80ae-b7404cb6fc4e_del complete#033[00m
Oct  1 12:59:21 np0005464891 nova_compute[259907]: 2025-10-01 16:59:21.419 2 INFO nova.compute.manager [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Took 2.96 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 12:59:21 np0005464891 nova_compute[259907]: 2025-10-01 16:59:21.420 2 DEBUG oslo.service.loopingcall [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 12:59:21 np0005464891 nova_compute[259907]: 2025-10-01 16:59:21.421 2 DEBUG nova.compute.manager [-] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 12:59:21 np0005464891 nova_compute[259907]: 2025-10-01 16:59:21.421 2 DEBUG nova.network.neutron [-] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 12:59:21 np0005464891 nova_compute[259907]: 2025-10-01 16:59:21.496 2 DEBUG nova.compute.manager [req-7a167489-ecf7-4503-bfd6-d2985862f3e4 req-994e15e9-b42a-4f6e-ba83-edf9a380557f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Received event network-vif-plugged-df49da0f-d552-4921-b312-c9644f9430de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:59:21 np0005464891 nova_compute[259907]: 2025-10-01 16:59:21.497 2 DEBUG oslo_concurrency.lockutils [req-7a167489-ecf7-4503-bfd6-d2985862f3e4 req-994e15e9-b42a-4f6e-ba83-edf9a380557f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "dd2acd48-65e4-48e1-80ae-b7404cb6fc4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:59:21 np0005464891 nova_compute[259907]: 2025-10-01 16:59:21.497 2 DEBUG oslo_concurrency.lockutils [req-7a167489-ecf7-4503-bfd6-d2985862f3e4 req-994e15e9-b42a-4f6e-ba83-edf9a380557f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "dd2acd48-65e4-48e1-80ae-b7404cb6fc4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:59:21 np0005464891 nova_compute[259907]: 2025-10-01 16:59:21.497 2 DEBUG oslo_concurrency.lockutils [req-7a167489-ecf7-4503-bfd6-d2985862f3e4 req-994e15e9-b42a-4f6e-ba83-edf9a380557f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "dd2acd48-65e4-48e1-80ae-b7404cb6fc4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:59:21 np0005464891 nova_compute[259907]: 2025-10-01 16:59:21.498 2 DEBUG nova.compute.manager [req-7a167489-ecf7-4503-bfd6-d2985862f3e4 req-994e15e9-b42a-4f6e-ba83-edf9a380557f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] No waiting events found dispatching network-vif-plugged-df49da0f-d552-4921-b312-c9644f9430de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:59:21 np0005464891 nova_compute[259907]: 2025-10-01 16:59:21.498 2 WARNING nova.compute.manager [req-7a167489-ecf7-4503-bfd6-d2985862f3e4 req-994e15e9-b42a-4f6e-ba83-edf9a380557f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Received unexpected event network-vif-plugged-df49da0f-d552-4921-b312-c9644f9430de for instance with vm_state active and task_state deleting.#033[00m
Oct  1 12:59:21 np0005464891 nova_compute[259907]: 2025-10-01 16:59:21.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:21 np0005464891 nova_compute[259907]: 2025-10-01 16:59:21.899 2 DEBUG oslo_concurrency.lockutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "830f3147-422f-4e9d-ac70-0dbc385be575" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:59:21 np0005464891 nova_compute[259907]: 2025-10-01 16:59:21.900 2 DEBUG oslo_concurrency.lockutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "830f3147-422f-4e9d-ac70-0dbc385be575" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:59:21 np0005464891 nova_compute[259907]: 2025-10-01 16:59:21.938 2 DEBUG nova.compute.manager [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 12:59:22 np0005464891 nova_compute[259907]: 2025-10-01 16:59:22.067 2 DEBUG oslo_concurrency.lockutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:59:22 np0005464891 nova_compute[259907]: 2025-10-01 16:59:22.068 2 DEBUG oslo_concurrency.lockutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:59:22 np0005464891 nova_compute[259907]: 2025-10-01 16:59:22.077 2 DEBUG nova.virt.hardware [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 12:59:22 np0005464891 nova_compute[259907]: 2025-10-01 16:59:22.078 2 INFO nova.compute.claims [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 12:59:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 12:59:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:59:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 12:59:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:59:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Oct  1 12:59:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:59:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0032403281077033274 of space, bias 1.0, pg target 0.9720984323109982 quantized to 32 (current 32)
Oct  1 12:59:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:59:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 12:59:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:59:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct  1 12:59:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:59:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 12:59:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:59:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:59:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:59:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 12:59:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:59:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 12:59:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:59:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 12:59:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 12:59:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 12:59:22 np0005464891 nova_compute[259907]: 2025-10-01 16:59:22.270 2 DEBUG oslo_concurrency.processutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:59:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e442 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:59:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:59:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/252002446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:59:22 np0005464891 nova_compute[259907]: 2025-10-01 16:59:22.740 2 DEBUG oslo_concurrency.processutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:59:22 np0005464891 nova_compute[259907]: 2025-10-01 16:59:22.748 2 DEBUG nova.compute.provider_tree [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:59:22 np0005464891 nova_compute[259907]: 2025-10-01 16:59:22.811 2 DEBUG nova.scheduler.client.report [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:59:22 np0005464891 nova_compute[259907]: 2025-10-01 16:59:22.909 2 DEBUG oslo_concurrency.lockutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:59:22 np0005464891 nova_compute[259907]: 2025-10-01 16:59:22.911 2 DEBUG nova.compute.manager [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 12:59:22 np0005464891 nova_compute[259907]: 2025-10-01 16:59:22.992 2 DEBUG nova.compute.manager [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 12:59:22 np0005464891 nova_compute[259907]: 2025-10-01 16:59:22.994 2 DEBUG nova.network.neutron [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 12:59:23 np0005464891 nova_compute[259907]: 2025-10-01 16:59:23.124 2 INFO nova.virt.libvirt.driver [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 12:59:23 np0005464891 nova_compute[259907]: 2025-10-01 16:59:23.214 2 DEBUG nova.compute.manager [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 12:59:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1788: 321 pgs: 321 active+clean; 317 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  1 12:59:23 np0005464891 nova_compute[259907]: 2025-10-01 16:59:23.344 2 INFO nova.virt.block_device [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Booting with volume ce89eba0-ef68-400a-ae7c-6ce18a58a372 at /dev/vda#033[00m
Oct  1 12:59:23 np0005464891 nova_compute[259907]: 2025-10-01 16:59:23.451 2 DEBUG os_brick.utils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 12:59:23 np0005464891 nova_compute[259907]: 2025-10-01 16:59:23.453 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:59:23 np0005464891 nova_compute[259907]: 2025-10-01 16:59:23.467 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:59:23 np0005464891 nova_compute[259907]: 2025-10-01 16:59:23.468 741 DEBUG oslo.privsep.daemon [-] privsep: reply[2c6bee47-13c3-409b-b95d-312fe6ed0019]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:23 np0005464891 nova_compute[259907]: 2025-10-01 16:59:23.469 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:59:23 np0005464891 nova_compute[259907]: 2025-10-01 16:59:23.479 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:59:23 np0005464891 nova_compute[259907]: 2025-10-01 16:59:23.480 741 DEBUG oslo.privsep.daemon [-] privsep: reply[40d9a742-44f9-4c9c-843d-f549181aac55]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:23 np0005464891 nova_compute[259907]: 2025-10-01 16:59:23.481 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:59:23 np0005464891 nova_compute[259907]: 2025-10-01 16:59:23.493 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:59:23 np0005464891 nova_compute[259907]: 2025-10-01 16:59:23.493 741 DEBUG oslo.privsep.daemon [-] privsep: reply[063de3f6-e404-4b73-a567-5ebaea7452b4]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:23 np0005464891 nova_compute[259907]: 2025-10-01 16:59:23.495 741 DEBUG oslo.privsep.daemon [-] privsep: reply[fb00cce3-0c15-4246-8c60-2e78b6977df9]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:23 np0005464891 nova_compute[259907]: 2025-10-01 16:59:23.497 2 DEBUG oslo_concurrency.processutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:59:23 np0005464891 nova_compute[259907]: 2025-10-01 16:59:23.523 2 DEBUG oslo_concurrency.processutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:59:23 np0005464891 nova_compute[259907]: 2025-10-01 16:59:23.525 2 DEBUG os_brick.initiator.connectors.lightos [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 12:59:23 np0005464891 nova_compute[259907]: 2025-10-01 16:59:23.525 2 DEBUG os_brick.initiator.connectors.lightos [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 12:59:23 np0005464891 nova_compute[259907]: 2025-10-01 16:59:23.525 2 DEBUG os_brick.initiator.connectors.lightos [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 12:59:23 np0005464891 nova_compute[259907]: 2025-10-01 16:59:23.526 2 DEBUG os_brick.utils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] <== get_connector_properties: return (73ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 12:59:23 np0005464891 nova_compute[259907]: 2025-10-01 16:59:23.526 2 DEBUG nova.virt.block_device [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Updating existing volume attachment record: 402dea30-b77f-44ab-bdd8-5282c245d32d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 12:59:23 np0005464891 podman[298825]: 2025-10-01 16:59:23.951077496 +0000 UTC m=+0.068450406 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 12:59:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:59:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1541013326' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:59:24 np0005464891 nova_compute[259907]: 2025-10-01 16:59:24.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:24 np0005464891 nova_compute[259907]: 2025-10-01 16:59:24.287 2 DEBUG nova.policy [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1280014cdfb74333ae8d71c78116e646', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8318b65fa88942a99937a0d198a04a9c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 12:59:24 np0005464891 nova_compute[259907]: 2025-10-01 16:59:24.610 2 DEBUG nova.network.neutron [-] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:59:25 np0005464891 nova_compute[259907]: 2025-10-01 16:59:25.031 2 INFO nova.compute.manager [-] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Took 3.61 seconds to deallocate network for instance.#033[00m
Oct  1 12:59:25 np0005464891 nova_compute[259907]: 2025-10-01 16:59:25.045 2 DEBUG nova.compute.manager [req-87406669-ee29-4178-a4a3-644328905fa2 req-9c275170-85d0-471f-9fa3-76588e6d79fe af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Received event network-vif-deleted-df49da0f-d552-4921-b312-c9644f9430de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:59:25 np0005464891 nova_compute[259907]: 2025-10-01 16:59:25.046 2 INFO nova.compute.manager [req-87406669-ee29-4178-a4a3-644328905fa2 req-9c275170-85d0-471f-9fa3-76588e6d79fe af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Neutron deleted interface df49da0f-d552-4921-b312-c9644f9430de; detaching it from the instance and deleting it from the info cache#033[00m
Oct  1 12:59:25 np0005464891 nova_compute[259907]: 2025-10-01 16:59:25.046 2 DEBUG nova.network.neutron [req-87406669-ee29-4178-a4a3-644328905fa2 req-9c275170-85d0-471f-9fa3-76588e6d79fe af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:59:25 np0005464891 nova_compute[259907]: 2025-10-01 16:59:25.144 2 DEBUG nova.compute.manager [req-87406669-ee29-4178-a4a3-644328905fa2 req-9c275170-85d0-471f-9fa3-76588e6d79fe af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Detach interface failed, port_id=df49da0f-d552-4921-b312-c9644f9430de, reason: Instance dd2acd48-65e4-48e1-80ae-b7404cb6fc4e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  1 12:59:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1789: 321 pgs: 321 active+clean; 317 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct  1 12:59:25 np0005464891 nova_compute[259907]: 2025-10-01 16:59:25.291 2 DEBUG nova.compute.manager [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 12:59:25 np0005464891 nova_compute[259907]: 2025-10-01 16:59:25.293 2 DEBUG nova.virt.libvirt.driver [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 12:59:25 np0005464891 nova_compute[259907]: 2025-10-01 16:59:25.294 2 INFO nova.virt.libvirt.driver [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Creating image(s)#033[00m
Oct  1 12:59:25 np0005464891 nova_compute[259907]: 2025-10-01 16:59:25.295 2 DEBUG nova.virt.libvirt.driver [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  1 12:59:25 np0005464891 nova_compute[259907]: 2025-10-01 16:59:25.296 2 DEBUG nova.virt.libvirt.driver [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Ensure instance console log exists: /var/lib/nova/instances/830f3147-422f-4e9d-ac70-0dbc385be575/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 12:59:25 np0005464891 nova_compute[259907]: 2025-10-01 16:59:25.297 2 DEBUG oslo_concurrency.lockutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:59:25 np0005464891 nova_compute[259907]: 2025-10-01 16:59:25.297 2 DEBUG oslo_concurrency.lockutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:59:25 np0005464891 nova_compute[259907]: 2025-10-01 16:59:25.298 2 DEBUG oslo_concurrency.lockutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:59:25 np0005464891 nova_compute[259907]: 2025-10-01 16:59:25.362 2 INFO nova.compute.manager [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Took 0.33 seconds to detach 1 volumes for instance.#033[00m
Oct  1 12:59:25 np0005464891 nova_compute[259907]: 2025-10-01 16:59:25.499 2 DEBUG oslo_concurrency.lockutils [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:59:25 np0005464891 nova_compute[259907]: 2025-10-01 16:59:25.500 2 DEBUG oslo_concurrency.lockutils [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:59:25 np0005464891 nova_compute[259907]: 2025-10-01 16:59:25.584 2 DEBUG oslo_concurrency.processutils [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:59:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:59:26 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/990692888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:59:26 np0005464891 nova_compute[259907]: 2025-10-01 16:59:26.064 2 DEBUG oslo_concurrency.processutils [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:59:26 np0005464891 nova_compute[259907]: 2025-10-01 16:59:26.072 2 DEBUG nova.compute.provider_tree [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:59:26 np0005464891 nova_compute[259907]: 2025-10-01 16:59:26.186 2 DEBUG nova.scheduler.client.report [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:59:26 np0005464891 nova_compute[259907]: 2025-10-01 16:59:26.297 2 DEBUG oslo_concurrency.lockutils [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:59:26 np0005464891 nova_compute[259907]: 2025-10-01 16:59:26.335 2 INFO nova.scheduler.client.report [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Deleted allocations for instance dd2acd48-65e4-48e1-80ae-b7404cb6fc4e#033[00m
Oct  1 12:59:26 np0005464891 nova_compute[259907]: 2025-10-01 16:59:26.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:26 np0005464891 nova_compute[259907]: 2025-10-01 16:59:26.935 2 DEBUG nova.network.neutron [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Successfully created port: c646a8ad-1950-4bec-8bf5-d0039005679e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 12:59:27 np0005464891 nova_compute[259907]: 2025-10-01 16:59:27.025 2 DEBUG oslo_concurrency.lockutils [None req-5fc5d7da-ab26-4dee-a123-a88c1f33086d 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "dd2acd48-65e4-48e1-80ae-b7404cb6fc4e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:59:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1790: 321 pgs: 321 active+clean; 317 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 100 KiB/s wr, 19 op/s
Oct  1 12:59:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e442 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:59:28 np0005464891 nova_compute[259907]: 2025-10-01 16:59:28.757 2 DEBUG nova.network.neutron [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Successfully updated port: c646a8ad-1950-4bec-8bf5-d0039005679e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 12:59:29 np0005464891 nova_compute[259907]: 2025-10-01 16:59:29.050 2 DEBUG oslo_concurrency.lockutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "refresh_cache-830f3147-422f-4e9d-ac70-0dbc385be575" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:59:29 np0005464891 nova_compute[259907]: 2025-10-01 16:59:29.051 2 DEBUG oslo_concurrency.lockutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquired lock "refresh_cache-830f3147-422f-4e9d-ac70-0dbc385be575" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:59:29 np0005464891 nova_compute[259907]: 2025-10-01 16:59:29.051 2 DEBUG nova.network.neutron [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 12:59:29 np0005464891 nova_compute[259907]: 2025-10-01 16:59:29.058 2 DEBUG nova.compute.manager [req-31d15bef-76e9-413a-a25a-dc036f09c3a5 req-11f340d1-ebd0-4ca4-b93c-ac45fa5b5500 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Received event network-changed-c646a8ad-1950-4bec-8bf5-d0039005679e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:59:29 np0005464891 nova_compute[259907]: 2025-10-01 16:59:29.058 2 DEBUG nova.compute.manager [req-31d15bef-76e9-413a-a25a-dc036f09c3a5 req-11f340d1-ebd0-4ca4-b93c-ac45fa5b5500 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Refreshing instance network info cache due to event network-changed-c646a8ad-1950-4bec-8bf5-d0039005679e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:59:29 np0005464891 nova_compute[259907]: 2025-10-01 16:59:29.058 2 DEBUG oslo_concurrency.lockutils [req-31d15bef-76e9-413a-a25a-dc036f09c3a5 req-11f340d1-ebd0-4ca4-b93c-ac45fa5b5500 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-830f3147-422f-4e9d-ac70-0dbc385be575" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:59:29 np0005464891 nova_compute[259907]: 2025-10-01 16:59:29.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1791: 321 pgs: 321 active+clean; 317 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 100 KiB/s wr, 19 op/s
Oct  1 12:59:29 np0005464891 nova_compute[259907]: 2025-10-01 16:59:29.234 2 DEBUG nova.network.neutron [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.008 2 DEBUG nova.network.neutron [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Updating instance_info_cache with network_info: [{"id": "c646a8ad-1950-4bec-8bf5-d0039005679e", "address": "fa:16:3e:24:01:77", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc646a8ad-19", "ovs_interfaceid": "c646a8ad-1950-4bec-8bf5-d0039005679e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.177 2 DEBUG oslo_concurrency.lockutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Releasing lock "refresh_cache-830f3147-422f-4e9d-ac70-0dbc385be575" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.178 2 DEBUG nova.compute.manager [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Instance network_info: |[{"id": "c646a8ad-1950-4bec-8bf5-d0039005679e", "address": "fa:16:3e:24:01:77", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc646a8ad-19", "ovs_interfaceid": "c646a8ad-1950-4bec-8bf5-d0039005679e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.179 2 DEBUG oslo_concurrency.lockutils [req-31d15bef-76e9-413a-a25a-dc036f09c3a5 req-11f340d1-ebd0-4ca4-b93c-ac45fa5b5500 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-830f3147-422f-4e9d-ac70-0dbc385be575" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.179 2 DEBUG nova.network.neutron [req-31d15bef-76e9-413a-a25a-dc036f09c3a5 req-11f340d1-ebd0-4ca4-b93c-ac45fa5b5500 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Refreshing network info cache for port c646a8ad-1950-4bec-8bf5-d0039005679e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.182 2 DEBUG nova.virt.libvirt.driver [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Start _get_guest_xml network_info=[{"id": "c646a8ad-1950-4bec-8bf5-d0039005679e", "address": "fa:16:3e:24:01:77", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc646a8ad-19", "ovs_interfaceid": "c646a8ad-1950-4bec-8bf5-d0039005679e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'mount_device': '/dev/vda', 'attachment_id': '402dea30-b77f-44ab-bdd8-5282c245d32d', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-ce89eba0-ef68-400a-ae7c-6ce18a58a372', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'ce89eba0-ef68-400a-ae7c-6ce18a58a372', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '830f3147-422f-4e9d-ac70-0dbc385be575', 'attached_at': '', 'detached_at': '', 'volume_id': 'ce89eba0-ef68-400a-ae7c-6ce18a58a372', 'serial': 'ce89eba0-ef68-400a-ae7c-6ce18a58a372'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.189 2 WARNING nova.virt.libvirt.driver [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.194 2 DEBUG nova.virt.libvirt.host [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.195 2 DEBUG nova.virt.libvirt.host [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.198 2 DEBUG nova.virt.libvirt.host [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.199 2 DEBUG nova.virt.libvirt.host [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.199 2 DEBUG nova.virt.libvirt.driver [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.200 2 DEBUG nova.virt.hardware [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.200 2 DEBUG nova.virt.hardware [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.200 2 DEBUG nova.virt.hardware [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.201 2 DEBUG nova.virt.hardware [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.201 2 DEBUG nova.virt.hardware [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.201 2 DEBUG nova.virt.hardware [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.201 2 DEBUG nova.virt.hardware [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.201 2 DEBUG nova.virt.hardware [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.202 2 DEBUG nova.virt.hardware [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.202 2 DEBUG nova.virt.hardware [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.202 2 DEBUG nova.virt.hardware [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.227 2 DEBUG nova.storage.rbd_utils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] rbd image 830f3147-422f-4e9d-ac70-0dbc385be575_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.232 2 DEBUG oslo_concurrency.processutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
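The topology records above show nova.virt.hardware taking an unconstrained flavor (limits and preferences all 0:0:0), capping sockets/cores/threads at 65536 each, and enumerating every split whose product equals the vCPU count; for the 1-vCPU m1.nano flavor only 1:1:1 qualifies. A minimal sketch of that enumeration, with simplified names (not the real `_get_possible_cpu_topologies` signature):

```python
# Hypothetical sketch of possible-CPU-topology enumeration, loosely modeled
# on the behaviour visible in the nova.virt.hardware log lines above.
from collections import namedtuple

VirtCPUTopology = namedtuple("VirtCPUTopology", "sockets cores threads")

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    """Enumerate (sockets, cores, threads) splits whose product equals vcpus."""
    found = []
    for s in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % s:
            continue
        for c in range(1, min(vcpus // s, max_cores) + 1):
            if (vcpus // s) % c:
                continue
            t = vcpus // (s * c)
            if t <= max_threads:
                found.append(VirtCPUTopology(s, c, t))
    return found

# For a 1-vCPU flavor like m1.nano, exactly one topology fits.
print(possible_topologies(1))  # [VirtCPUTopology(sockets=1, cores=1, threads=1)]
```

With vcpus=4 the same routine would yield six candidate splits, which nova then sorts against the preferred topology.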
Oct  1 12:59:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e442 do_prune osdmap full prune enabled
Oct  1 12:59:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e443 e443: 3 total, 3 up, 3 in
Oct  1 12:59:30 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e443: 3 total, 3 up, 3 in
Oct  1 12:59:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:59:30 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3567179983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.691 2 DEBUG oslo_concurrency.processutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
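nova-compute shells out to `ceph mon dump --format=json` here so it can turn the monitor map into the `hosts`/`ports` lists seen in the volume's connection_info. A hedged sketch of that parsing step, using a made-up (but structurally realistic) mon dump document rather than live cluster output:

```python
import json

# Hypothetical sample of `ceph mon dump --format=json` output; the field
# names follow the real command, the values are invented for illustration.
MON_DUMP = '''
{"epoch": 1, "fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
 "mons": [{"name": "compute-0", "rank": 0,
           "public_addr": "192.168.122.100:6789/0"}]}
'''

def monitor_endpoints(dump_json):
    """Return ([host, ...], [port, ...]) parsed from a mon dump document."""
    doc = json.loads(dump_json)
    hosts, ports = [], []
    for mon in doc["mons"]:
        addr = mon["public_addr"].rsplit("/", 1)[0]  # drop the "/nonce" suffix
        host, _, port = addr.rpartition(":")
        hosts.append(host)
        ports.append(port)
    return hosts, ports

print(monitor_endpoints(MON_DUMP))  # (['192.168.122.100'], ['6789'])
```

This matches the single-monitor lists (`'hosts': ['192.168.122.100'], 'ports': ['6789']`) carried in the block_device_mapping earlier in the log.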
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.741 2 DEBUG nova.virt.libvirt.vif [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:59:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1907784308',display_name='tempest-TestVolumeBootPattern-server-1907784308',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1907784308',id=22,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBClGQJy6/uYcm9xjql2YyXBlCsn5OCvS8WdRyMV8X4KRzO9nDb+WS/fT0IKQE/81aKxS5QGeI9sR/4PyJ3PdLDqhc6ZZs5CYH0MiYsApL0NE4Z3MyQ9wrVPiCTRct79ckg==',key_name='tempest-TestVolumeBootPattern-477590145',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8318b65fa88942a99937a0d198a04a9c',ramdisk_id='',reservation_id='r-hsicp09s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-582136054',owner_user_name='tempest-TestVolumeBootPattern-582136054-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:59:23Z,user_data=None,user_id='1280014cdfb74333ae8d71c78116e646',uuid=830f3147-422f-4e9d-ac70-0dbc385be575,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c646a8ad-1950-4bec-8bf5-d0039005679e", "address": "fa:16:3e:24:01:77", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc646a8ad-19", "ovs_interfaceid": "c646a8ad-1950-4bec-8bf5-d0039005679e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.742 2 DEBUG nova.network.os_vif_util [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converting VIF {"id": "c646a8ad-1950-4bec-8bf5-d0039005679e", "address": "fa:16:3e:24:01:77", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc646a8ad-19", "ovs_interfaceid": "c646a8ad-1950-4bec-8bf5-d0039005679e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.743 2 DEBUG nova.network.os_vif_util [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:01:77,bridge_name='br-int',has_traffic_filtering=True,id=c646a8ad-1950-4bec-8bf5-d0039005679e,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc646a8ad-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.744 2 DEBUG nova.objects.instance [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lazy-loading 'pci_devices' on Instance uuid 830f3147-422f-4e9d-ac70-0dbc385be575 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.779 2 DEBUG nova.virt.libvirt.driver [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] End _get_guest_xml xml=<domain type="kvm">
Oct  1 12:59:30 np0005464891 nova_compute[259907]:  <uuid>830f3147-422f-4e9d-ac70-0dbc385be575</uuid>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:  <name>instance-00000016</name>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <nova:name>tempest-TestVolumeBootPattern-server-1907784308</nova:name>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 16:59:30</nova:creationTime>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 12:59:30 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:        <nova:user uuid="1280014cdfb74333ae8d71c78116e646">tempest-TestVolumeBootPattern-582136054-project-member</nova:user>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:        <nova:project uuid="8318b65fa88942a99937a0d198a04a9c">tempest-TestVolumeBootPattern-582136054</nova:project>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:        <nova:port uuid="c646a8ad-1950-4bec-8bf5-d0039005679e">
Oct  1 12:59:30 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <system>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <entry name="serial">830f3147-422f-4e9d-ac70-0dbc385be575</entry>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <entry name="uuid">830f3147-422f-4e9d-ac70-0dbc385be575</entry>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    </system>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:  <os>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:  </clock>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/830f3147-422f-4e9d-ac70-0dbc385be575_disk.config">
Oct  1 12:59:30 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:59:30 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="volumes/volume-ce89eba0-ef68-400a-ae7c-6ce18a58a372">
Oct  1 12:59:30 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:59:30 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <serial>ce89eba0-ef68-400a-ae7c-6ce18a58a372</serial>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:24:01:77"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <target dev="tapc646a8ad-19"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/830f3147-422f-4e9d-ac70-0dbc385be575/console.log" append="off"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    </serial>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <video>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 12:59:30 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 12:59:30 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:59:30 np0005464891 nova_compute[259907]: </domain>
Oct  1 12:59:30 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
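The guest XML emitted above attaches both the config-drive cdrom and the boot volume as libvirt network disks backed by RBD. When inspecting a dump like this, the per-disk RBD image and target device can be pulled out with the standard library's ElementTree; a sketch against an abbreviated copy of the domain document (only the elements the code reads):

```python
import xml.etree.ElementTree as ET

# Abbreviated copy of the guest XML logged above; trimmed to the
# elements this sketch actually inspects.
DOMAIN_XML = """
<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="volumes/volume-ce89eba0-ef68-400a-ae7c-6ce18a58a372">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
  </devices>
</domain>
"""

def rbd_disks(domain_xml):
    """List (target dev, rbd image name) pairs for network disks in a domain."""
    root = ET.fromstring(domain_xml)
    out = []
    for disk in root.findall("./devices/disk[@type='network']"):
        src = disk.find("source")
        tgt = disk.find("target")
        if src is not None and src.get("protocol") == "rbd":
            out.append((tgt.get("dev"), src.get("name")))
    return out

print(rbd_disks(DOMAIN_XML))
# [('vda', 'volumes/volume-ce89eba0-ef68-400a-ae7c-6ce18a58a372')]
```

Run against the full domain XML, the same function would also report the `sda` config-drive cdrom in the `vms` pool.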
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.780 2 DEBUG nova.compute.manager [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Preparing to wait for external event network-vif-plugged-c646a8ad-1950-4bec-8bf5-d0039005679e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.780 2 DEBUG oslo_concurrency.lockutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "830f3147-422f-4e9d-ac70-0dbc385be575-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.780 2 DEBUG oslo_concurrency.lockutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "830f3147-422f-4e9d-ac70-0dbc385be575-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.780 2 DEBUG oslo_concurrency.lockutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "830f3147-422f-4e9d-ac70-0dbc385be575-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
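The acquire/release pair above is oslo_concurrency.lockutils guarding an `<instance-uuid>-events` lock so that event registration for `network-vif-plugged` is idempotent per instance. A generic threading-based sketch of that named-lock pattern (not the real lockutils or InstanceEvents code, which add fair locks, logging, and semaphore garbage collection):

```python
import threading
from collections import defaultdict

# Minimal stand-in for the lockutils named-lock pattern seen in the log:
# one lock per name, serialising the decorated function's critical section.
_locks = defaultdict(threading.Lock)

def synchronized(name):
    """Decorator: serialise calls that share the same lock name."""
    def wrap(fn):
        def inner(*args, **kwargs):
            with _locks[name]:          # "Acquiring lock" ... "released"
                return fn(*args, **kwargs)
        return inner
    return wrap

events = {}

@synchronized("830f3147-...-events")    # uuid shortened for the sketch
def create_or_get_event(key):
    # Under the lock, two racing callers get the same Event object.
    return events.setdefault(key, threading.Event())

e1 = create_or_get_event("network-vif-plugged")
e2 = create_or_get_event("network-vif-plugged")
print(e1 is e2)  # True
```

The compute manager later waits on the returned event until Neutron reports the VIF is plugged.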
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.781 2 DEBUG nova.virt.libvirt.vif [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:59:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1907784308',display_name='tempest-TestVolumeBootPattern-server-1907784308',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1907784308',id=22,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBClGQJy6/uYcm9xjql2YyXBlCsn5OCvS8WdRyMV8X4KRzO9nDb+WS/fT0IKQE/81aKxS5QGeI9sR/4PyJ3PdLDqhc6ZZs5CYH0MiYsApL0NE4Z3MyQ9wrVPiCTRct79ckg==',key_name='tempest-TestVolumeBootPattern-477590145',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8318b65fa88942a99937a0d198a04a9c',ramdisk_id='',reservation_id='r-hsicp09s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-582136054',owner_user_name='tempest-TestVolumeBootPattern-582136054-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:59:23Z,user_data=None,user_id='1280014cdfb74333ae8d71c78116e646',uuid=830f3147-422f-4e9d-ac70-0dbc385be575,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c646a8ad-1950-4bec-8bf5-d0039005679e", "address": "fa:16:3e:24:01:77", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc646a8ad-19", "ovs_interfaceid": "c646a8ad-1950-4bec-8bf5-d0039005679e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.782 2 DEBUG nova.network.os_vif_util [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converting VIF {"id": "c646a8ad-1950-4bec-8bf5-d0039005679e", "address": "fa:16:3e:24:01:77", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc646a8ad-19", "ovs_interfaceid": "c646a8ad-1950-4bec-8bf5-d0039005679e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.782 2 DEBUG nova.network.os_vif_util [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:01:77,bridge_name='br-int',has_traffic_filtering=True,id=c646a8ad-1950-4bec-8bf5-d0039005679e,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc646a8ad-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.783 2 DEBUG os_vif [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:01:77,bridge_name='br-int',has_traffic_filtering=True,id=c646a8ad-1950-4bec-8bf5-d0039005679e,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc646a8ad-19') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.784 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.784 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.787 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc646a8ad-19, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.788 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc646a8ad-19, col_values=(('external_ids', {'iface-id': 'c646a8ad-1950-4bec-8bf5-d0039005679e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:01:77', 'vm-uuid': '830f3147-422f-4e9d-ac70-0dbc385be575'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:30 np0005464891 NetworkManager[44940]: <info>  [1759337970.7922] manager: (tapc646a8ad-19): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.798 2 INFO os_vif [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:01:77,bridge_name='br-int',has_traffic_filtering=True,id=c646a8ad-1950-4bec-8bf5-d0039005679e,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc646a8ad-19')#033[00m
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.862 2 DEBUG nova.virt.libvirt.driver [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.862 2 DEBUG nova.virt.libvirt.driver [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.862 2 DEBUG nova.virt.libvirt.driver [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] No VIF found with MAC fa:16:3e:24:01:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.863 2 INFO nova.virt.libvirt.driver [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Using config drive#033[00m
Oct  1 12:59:30 np0005464891 nova_compute[259907]: 2025-10-01 16:59:30.884 2 DEBUG nova.storage.rbd_utils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] rbd image 830f3147-422f-4e9d-ac70-0dbc385be575_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:59:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1793: 321 pgs: 321 active+clean; 317 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 716 B/s wr, 14 op/s
Oct  1 12:59:31 np0005464891 nova_compute[259907]: 2025-10-01 16:59:31.309 2 INFO nova.virt.libvirt.driver [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Creating config drive at /var/lib/nova/instances/830f3147-422f-4e9d-ac70-0dbc385be575/disk.config#033[00m
Oct  1 12:59:31 np0005464891 nova_compute[259907]: 2025-10-01 16:59:31.316 2 DEBUG oslo_concurrency.processutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/830f3147-422f-4e9d-ac70-0dbc385be575/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkssgohh8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:59:31 np0005464891 nova_compute[259907]: 2025-10-01 16:59:31.447 2 DEBUG oslo_concurrency.processutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/830f3147-422f-4e9d-ac70-0dbc385be575/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkssgohh8" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:59:31 np0005464891 nova_compute[259907]: 2025-10-01 16:59:31.468 2 DEBUG nova.storage.rbd_utils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] rbd image 830f3147-422f-4e9d-ac70-0dbc385be575_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:59:31 np0005464891 nova_compute[259907]: 2025-10-01 16:59:31.472 2 DEBUG oslo_concurrency.processutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/830f3147-422f-4e9d-ac70-0dbc385be575/disk.config 830f3147-422f-4e9d-ac70-0dbc385be575_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:59:31 np0005464891 nova_compute[259907]: 2025-10-01 16:59:31.521 2 DEBUG nova.network.neutron [req-31d15bef-76e9-413a-a25a-dc036f09c3a5 req-11f340d1-ebd0-4ca4-b93c-ac45fa5b5500 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Updated VIF entry in instance network info cache for port c646a8ad-1950-4bec-8bf5-d0039005679e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:59:31 np0005464891 nova_compute[259907]: 2025-10-01 16:59:31.523 2 DEBUG nova.network.neutron [req-31d15bef-76e9-413a-a25a-dc036f09c3a5 req-11f340d1-ebd0-4ca4-b93c-ac45fa5b5500 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Updating instance_info_cache with network_info: [{"id": "c646a8ad-1950-4bec-8bf5-d0039005679e", "address": "fa:16:3e:24:01:77", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc646a8ad-19", "ovs_interfaceid": "c646a8ad-1950-4bec-8bf5-d0039005679e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:59:31 np0005464891 nova_compute[259907]: 2025-10-01 16:59:31.541 2 DEBUG oslo_concurrency.lockutils [req-31d15bef-76e9-413a-a25a-dc036f09c3a5 req-11f340d1-ebd0-4ca4-b93c-ac45fa5b5500 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-830f3147-422f-4e9d-ac70-0dbc385be575" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:59:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:59:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2459101619' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:59:31 np0005464891 nova_compute[259907]: 2025-10-01 16:59:31.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:31 np0005464891 nova_compute[259907]: 2025-10-01 16:59:31.879 2 DEBUG oslo_concurrency.processutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/830f3147-422f-4e9d-ac70-0dbc385be575/disk.config 830f3147-422f-4e9d-ac70-0dbc385be575_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:59:31 np0005464891 nova_compute[259907]: 2025-10-01 16:59:31.879 2 INFO nova.virt.libvirt.driver [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Deleting local config drive /var/lib/nova/instances/830f3147-422f-4e9d-ac70-0dbc385be575/disk.config because it was imported into RBD.#033[00m
Oct  1 12:59:31 np0005464891 kernel: tapc646a8ad-19: entered promiscuous mode
Oct  1 12:59:31 np0005464891 NetworkManager[44940]: <info>  [1759337971.9350] manager: (tapc646a8ad-19): new Tun device (/org/freedesktop/NetworkManager/Devices/116)
Oct  1 12:59:31 np0005464891 ovn_controller[152409]: 2025-10-01T16:59:31Z|00206|binding|INFO|Claiming lport c646a8ad-1950-4bec-8bf5-d0039005679e for this chassis.
Oct  1 12:59:31 np0005464891 ovn_controller[152409]: 2025-10-01T16:59:31Z|00207|binding|INFO|c646a8ad-1950-4bec-8bf5-d0039005679e: Claiming fa:16:3e:24:01:77 10.100.0.8
Oct  1 12:59:31 np0005464891 nova_compute[259907]: 2025-10-01 16:59:31.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:31.946 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:01:77 10.100.0.8'], port_security=['fa:16:3e:24:01:77 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '830f3147-422f-4e9d-ac70-0dbc385be575', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce1e1062-6685-441b-8278-667224375e38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8318b65fa88942a99937a0d198a04a9c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fda2d0b4-8d53-4a87-93c6-2f62b1be0cd1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4cdcf07-f310-4572-944c-43bd8e74f763, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=c646a8ad-1950-4bec-8bf5-d0039005679e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:59:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:31.947 162546 INFO neutron.agent.ovn.metadata.agent [-] Port c646a8ad-1950-4bec-8bf5-d0039005679e in datapath ce1e1062-6685-441b-8278-667224375e38 bound to our chassis#033[00m
Oct  1 12:59:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:31.948 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce1e1062-6685-441b-8278-667224375e38#033[00m
Oct  1 12:59:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:31.959 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[56604cfe-99d7-452f-a351-34ab46f9ab9f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:31.960 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce1e1062-61 in ovnmeta-ce1e1062-6685-441b-8278-667224375e38 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 12:59:31 np0005464891 ovn_controller[152409]: 2025-10-01T16:59:31Z|00208|binding|INFO|Setting lport c646a8ad-1950-4bec-8bf5-d0039005679e ovn-installed in OVS
Oct  1 12:59:31 np0005464891 ovn_controller[152409]: 2025-10-01T16:59:31Z|00209|binding|INFO|Setting lport c646a8ad-1950-4bec-8bf5-d0039005679e up in Southbound
Oct  1 12:59:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:31.962 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce1e1062-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 12:59:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:31.962 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[43da869f-68fc-4d02-b788-34606c5b1355]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:31.963 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[e67c4725-f9f7-4374-be66-dd1544293641]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:31 np0005464891 nova_compute[259907]: 2025-10-01 16:59:31.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:31 np0005464891 systemd-udevd[298980]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:59:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:31.977 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[30d05ab3-186e-45b6-a7b5-a6d88293da30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:31 np0005464891 NetworkManager[44940]: <info>  [1759337971.9826] device (tapc646a8ad-19): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:59:31 np0005464891 NetworkManager[44940]: <info>  [1759337971.9837] device (tapc646a8ad-19): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 12:59:31 np0005464891 systemd-machined[214891]: New machine qemu-22-instance-00000016.
Oct  1 12:59:31 np0005464891 systemd[1]: Started Virtual Machine qemu-22-instance-00000016.
Oct  1 12:59:31 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:31.994 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[c9752212-5170-4209-af5c-1028306b7e64]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:32.026 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[8e402a9b-9c16-44b9-a1e6-12935a936274]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:32.031 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[f85410e9-2bdc-4237-bd41-eb4d40fed206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:32 np0005464891 NetworkManager[44940]: <info>  [1759337972.0343] manager: (tapce1e1062-60): new Veth device (/org/freedesktop/NetworkManager/Devices/117)
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:32.063 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[acbc2d29-cdcc-4085-a4e6-aec84f53f171]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:32.067 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[fef0c1b6-d9d4-4ee8-bf93-6fbe886288c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:32 np0005464891 NetworkManager[44940]: <info>  [1759337972.0914] device (tapce1e1062-60): carrier: link connected
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:32.097 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b8f78c-5ca0-4836-bf04-dd6f3245430c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:32.113 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[a7fb93d2-214b-4f3f-a169-b540e45d081c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce1e1062-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:87:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492066, 'reachable_time': 19579, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299013, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:32.129 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[d666b9eb-4252-4ac9-9314-828084801fc9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec5:872c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 492066, 'tstamp': 492066}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299014, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:32.148 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[ade29be7-68c3-4cd6-afbd-6edafd0e0212]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce1e1062-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:87:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492066, 'reachable_time': 19579, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299015, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:32.182 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[83b3c75a-cdab-4ed5-8651-a319e956394d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:32.235 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[fdae0d0f-6f9e-46a5-abcc-fd64b77928f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:32.237 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce1e1062-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:32.237 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:32.238 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce1e1062-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:59:32 np0005464891 kernel: tapce1e1062-60: entered promiscuous mode
Oct  1 12:59:32 np0005464891 NetworkManager[44940]: <info>  [1759337972.2401] manager: (tapce1e1062-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Oct  1 12:59:32 np0005464891 nova_compute[259907]: 2025-10-01 16:59:32.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:32.244 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce1e1062-60, col_values=(('external_ids', {'iface-id': 'd971881d-8d8b-44dc-b0b0-4cd0065c0105'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:59:32 np0005464891 ovn_controller[152409]: 2025-10-01T16:59:32Z|00210|binding|INFO|Releasing lport d971881d-8d8b-44dc-b0b0-4cd0065c0105 from this chassis (sb_readonly=0)
Oct  1 12:59:32 np0005464891 nova_compute[259907]: 2025-10-01 16:59:32.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:32.246 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce1e1062-6685-441b-8278-667224375e38.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce1e1062-6685-441b-8278-667224375e38.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:32.247 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[317b3300-bff1-431d-acd8-fd35673c76f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:32.247 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-ce1e1062-6685-441b-8278-667224375e38
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/ce1e1062-6685-441b-8278-667224375e38.pid.haproxy
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID ce1e1062-6685-441b-8278-667224375e38
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 12:59:32 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:32.248 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'env', 'PROCESS_TAG=haproxy-ce1e1062-6685-441b-8278-667224375e38', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce1e1062-6685-441b-8278-667224375e38.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 12:59:32 np0005464891 nova_compute[259907]: 2025-10-01 16:59:32.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:59:32 np0005464891 nova_compute[259907]: 2025-10-01 16:59:32.682 2 DEBUG nova.compute.manager [req-c13705db-5a93-411c-92f7-576b51d6df21 req-d34c248f-7589-4071-b499-7f700ec3da7f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Received event network-vif-plugged-c646a8ad-1950-4bec-8bf5-d0039005679e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:59:32 np0005464891 nova_compute[259907]: 2025-10-01 16:59:32.684 2 DEBUG oslo_concurrency.lockutils [req-c13705db-5a93-411c-92f7-576b51d6df21 req-d34c248f-7589-4071-b499-7f700ec3da7f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "830f3147-422f-4e9d-ac70-0dbc385be575-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:59:32 np0005464891 nova_compute[259907]: 2025-10-01 16:59:32.685 2 DEBUG oslo_concurrency.lockutils [req-c13705db-5a93-411c-92f7-576b51d6df21 req-d34c248f-7589-4071-b499-7f700ec3da7f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "830f3147-422f-4e9d-ac70-0dbc385be575-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:59:32 np0005464891 nova_compute[259907]: 2025-10-01 16:59:32.685 2 DEBUG oslo_concurrency.lockutils [req-c13705db-5a93-411c-92f7-576b51d6df21 req-d34c248f-7589-4071-b499-7f700ec3da7f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "830f3147-422f-4e9d-ac70-0dbc385be575-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:59:32 np0005464891 nova_compute[259907]: 2025-10-01 16:59:32.686 2 DEBUG nova.compute.manager [req-c13705db-5a93-411c-92f7-576b51d6df21 req-d34c248f-7589-4071-b499-7f700ec3da7f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Processing event network-vif-plugged-c646a8ad-1950-4bec-8bf5-d0039005679e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 12:59:32 np0005464891 podman[299083]: 2025-10-01 16:59:32.61395532 +0000 UTC m=+0.024540083 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 12:59:32 np0005464891 nova_compute[259907]: 2025-10-01 16:59:32.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:59:33 np0005464891 podman[299083]: 2025-10-01 16:59:33.150093708 +0000 UTC m=+0.560678441 container create 4cc40d6c449efa3f280ae329f71a76939a7fcedef38fe060d40ddb2f480214be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.161 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337973.1608016, 830f3147-422f-4e9d-ac70-0dbc385be575 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.161 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] VM Started (Lifecycle Event)#033[00m
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.165 2 DEBUG nova.compute.manager [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.168 2 DEBUG nova.virt.libvirt.driver [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.172 2 INFO nova.virt.libvirt.driver [-] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Instance spawned successfully.#033[00m
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.172 2 DEBUG nova.virt.libvirt.driver [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.190 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.194 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:59:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1794: 321 pgs: 321 active+clean; 317 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 15 KiB/s wr, 18 op/s
Oct  1 12:59:33 np0005464891 systemd[1]: Started libpod-conmon-4cc40d6c449efa3f280ae329f71a76939a7fcedef38fe060d40ddb2f480214be.scope.
Oct  1 12:59:33 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.346 2 DEBUG nova.virt.libvirt.driver [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.346 2 DEBUG nova.virt.libvirt.driver [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.347 2 DEBUG nova.virt.libvirt.driver [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.347 2 DEBUG nova.virt.libvirt.driver [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.348 2 DEBUG nova.virt.libvirt.driver [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.348 2 DEBUG nova.virt.libvirt.driver [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:59:33 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85723a35ee8a1002eb0f7b98e5172e25597c86283f00935aeb9bc17b3a7c6d8e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.449 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.450 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337973.1617854, 830f3147-422f-4e9d-ac70-0dbc385be575 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.451 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] VM Paused (Lifecycle Event)#033[00m
Oct  1 12:59:33 np0005464891 podman[299083]: 2025-10-01 16:59:33.500377965 +0000 UTC m=+0.910962798 container init 4cc40d6c449efa3f280ae329f71a76939a7fcedef38fe060d40ddb2f480214be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  1 12:59:33 np0005464891 podman[299083]: 2025-10-01 16:59:33.509050193 +0000 UTC m=+0.919634946 container start 4cc40d6c449efa3f280ae329f71a76939a7fcedef38fe060d40ddb2f480214be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.525 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.530 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337973.16751, 830f3147-422f-4e9d-ac70-0dbc385be575 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.531 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] VM Resumed (Lifecycle Event)#033[00m
Oct  1 12:59:33 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[299104]: [NOTICE]   (299108) : New worker (299110) forked
Oct  1 12:59:33 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[299104]: [NOTICE]   (299108) : Loading success.
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.578 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.582 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.589 2 INFO nova.compute.manager [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Took 8.30 seconds to spawn the instance on the hypervisor.#033[00m
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.589 2 DEBUG nova.compute.manager [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.699 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.782 2 INFO nova.compute.manager [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Took 11.74 seconds to build instance.#033[00m
Oct  1 12:59:33 np0005464891 nova_compute[259907]: 2025-10-01 16:59:33.838 2 DEBUG oslo_concurrency.lockutils [None req-00c97b8b-0d9f-4dac-a2c4-3a247fe14a70 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "830f3147-422f-4e9d-ac70-0dbc385be575" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:59:34 np0005464891 nova_compute[259907]: 2025-10-01 16:59:34.097 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759337959.095353, dd2acd48-65e4-48e1-80ae-b7404cb6fc4e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:59:34 np0005464891 nova_compute[259907]: 2025-10-01 16:59:34.097 2 INFO nova.compute.manager [-] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] VM Stopped (Lifecycle Event)#033[00m
Oct  1 12:59:34 np0005464891 nova_compute[259907]: 2025-10-01 16:59:34.180 2 DEBUG nova.compute.manager [None req-bdfc83a8-07d5-4b06-8beb-7bfa9b6c9f27 - - - - - -] [instance: dd2acd48-65e4-48e1-80ae-b7404cb6fc4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:59:34 np0005464891 nova_compute[259907]: 2025-10-01 16:59:34.777 2 DEBUG nova.compute.manager [req-ff520695-17d7-435e-90cb-9dfbc8a750fa req-f7580b5c-c364-49f1-9c52-13a2312eafae af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Received event network-vif-plugged-c646a8ad-1950-4bec-8bf5-d0039005679e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:59:34 np0005464891 nova_compute[259907]: 2025-10-01 16:59:34.777 2 DEBUG oslo_concurrency.lockutils [req-ff520695-17d7-435e-90cb-9dfbc8a750fa req-f7580b5c-c364-49f1-9c52-13a2312eafae af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "830f3147-422f-4e9d-ac70-0dbc385be575-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:59:34 np0005464891 nova_compute[259907]: 2025-10-01 16:59:34.777 2 DEBUG oslo_concurrency.lockutils [req-ff520695-17d7-435e-90cb-9dfbc8a750fa req-f7580b5c-c364-49f1-9c52-13a2312eafae af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "830f3147-422f-4e9d-ac70-0dbc385be575-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:59:34 np0005464891 nova_compute[259907]: 2025-10-01 16:59:34.778 2 DEBUG oslo_concurrency.lockutils [req-ff520695-17d7-435e-90cb-9dfbc8a750fa req-f7580b5c-c364-49f1-9c52-13a2312eafae af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "830f3147-422f-4e9d-ac70-0dbc385be575-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:59:34 np0005464891 nova_compute[259907]: 2025-10-01 16:59:34.778 2 DEBUG nova.compute.manager [req-ff520695-17d7-435e-90cb-9dfbc8a750fa req-f7580b5c-c364-49f1-9c52-13a2312eafae af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] No waiting events found dispatching network-vif-plugged-c646a8ad-1950-4bec-8bf5-d0039005679e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:59:34 np0005464891 nova_compute[259907]: 2025-10-01 16:59:34.778 2 WARNING nova.compute.manager [req-ff520695-17d7-435e-90cb-9dfbc8a750fa req-f7580b5c-c364-49f1-9c52-13a2312eafae af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Received unexpected event network-vif-plugged-c646a8ad-1950-4bec-8bf5-d0039005679e for instance with vm_state active and task_state None.#033[00m
Oct  1 12:59:34 np0005464891 nova_compute[259907]: 2025-10-01 16:59:34.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:59:34 np0005464891 nova_compute[259907]: 2025-10-01 16:59:34.826 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:59:34 np0005464891 nova_compute[259907]: 2025-10-01 16:59:34.826 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:59:34 np0005464891 nova_compute[259907]: 2025-10-01 16:59:34.826 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:59:34 np0005464891 nova_compute[259907]: 2025-10-01 16:59:34.827 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 12:59:34 np0005464891 nova_compute[259907]: 2025-10-01 16:59:34.827 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:59:34 np0005464891 podman[299120]: 2025-10-01 16:59:34.939338918 +0000 UTC m=+0.053801625 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 12:59:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1795: 321 pgs: 321 active+clean; 317 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 17 KiB/s wr, 113 op/s
Oct  1 12:59:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:59:35 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3893352768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:59:35 np0005464891 nova_compute[259907]: 2025-10-01 16:59:35.398 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:59:35 np0005464891 nova_compute[259907]: 2025-10-01 16:59:35.678 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:59:35 np0005464891 nova_compute[259907]: 2025-10-01 16:59:35.678 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 12:59:35 np0005464891 nova_compute[259907]: 2025-10-01 16:59:35.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:35 np0005464891 nova_compute[259907]: 2025-10-01 16:59:35.833 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:59:35 np0005464891 nova_compute[259907]: 2025-10-01 16:59:35.834 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4252MB free_disk=59.988136291503906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 12:59:35 np0005464891 nova_compute[259907]: 2025-10-01 16:59:35.834 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:59:35 np0005464891 nova_compute[259907]: 2025-10-01 16:59:35.834 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:59:35 np0005464891 nova_compute[259907]: 2025-10-01 16:59:35.981 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance 830f3147-422f-4e9d-ac70-0dbc385be575 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 12:59:35 np0005464891 nova_compute[259907]: 2025-10-01 16:59:35.981 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 12:59:35 np0005464891 nova_compute[259907]: 2025-10-01 16:59:35.982 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 12:59:36 np0005464891 nova_compute[259907]: 2025-10-01 16:59:36.011 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:59:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:59:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3726224125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:59:36 np0005464891 nova_compute[259907]: 2025-10-01 16:59:36.454 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:59:36 np0005464891 nova_compute[259907]: 2025-10-01 16:59:36.460 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:59:36 np0005464891 nova_compute[259907]: 2025-10-01 16:59:36.536 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:59:36 np0005464891 nova_compute[259907]: 2025-10-01 16:59:36.630 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 12:59:36 np0005464891 nova_compute[259907]: 2025-10-01 16:59:36.630 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:59:36 np0005464891 nova_compute[259907]: 2025-10-01 16:59:36.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1796: 321 pgs: 321 active+clean; 317 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 17 KiB/s wr, 113 op/s
Oct  1 12:59:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 12:59:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3883267351' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 12:59:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 12:59:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3883267351' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 12:59:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:59:37 np0005464891 nova_compute[259907]: 2025-10-01 16:59:37.632 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:59:37 np0005464891 nova_compute[259907]: 2025-10-01 16:59:37.632 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:59:37 np0005464891 nova_compute[259907]: 2025-10-01 16:59:37.633 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:59:37 np0005464891 nova_compute[259907]: 2025-10-01 16:59:37.633 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 12:59:37 np0005464891 nova_compute[259907]: 2025-10-01 16:59:37.799 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:59:37 np0005464891 nova_compute[259907]: 2025-10-01 16:59:37.829 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:59:37 np0005464891 nova_compute[259907]: 2025-10-01 16:59:37.830 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 12:59:37 np0005464891 nova_compute[259907]: 2025-10-01 16:59:37.830 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 12:59:37 np0005464891 nova_compute[259907]: 2025-10-01 16:59:37.933 2 DEBUG oslo_concurrency.lockutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "caeab115-da31-48fd-af65-2085a2c28333" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:59:37 np0005464891 nova_compute[259907]: 2025-10-01 16:59:37.934 2 DEBUG oslo_concurrency.lockutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "caeab115-da31-48fd-af65-2085a2c28333" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:59:37 np0005464891 nova_compute[259907]: 2025-10-01 16:59:37.990 2 DEBUG nova.compute.manager [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 12:59:38 np0005464891 nova_compute[259907]: 2025-10-01 16:59:38.092 2 DEBUG oslo_concurrency.lockutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:59:38 np0005464891 nova_compute[259907]: 2025-10-01 16:59:38.093 2 DEBUG oslo_concurrency.lockutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:59:38 np0005464891 nova_compute[259907]: 2025-10-01 16:59:38.094 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "refresh_cache-830f3147-422f-4e9d-ac70-0dbc385be575" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:59:38 np0005464891 nova_compute[259907]: 2025-10-01 16:59:38.095 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquired lock "refresh_cache-830f3147-422f-4e9d-ac70-0dbc385be575" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:59:38 np0005464891 nova_compute[259907]: 2025-10-01 16:59:38.095 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  1 12:59:38 np0005464891 nova_compute[259907]: 2025-10-01 16:59:38.095 2 DEBUG nova.objects.instance [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 830f3147-422f-4e9d-ac70-0dbc385be575 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:59:38 np0005464891 nova_compute[259907]: 2025-10-01 16:59:38.102 2 DEBUG nova.virt.hardware [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 12:59:38 np0005464891 nova_compute[259907]: 2025-10-01 16:59:38.102 2 INFO nova.compute.claims [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 12:59:38 np0005464891 nova_compute[259907]: 2025-10-01 16:59:38.265 2 DEBUG oslo_concurrency.processutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:59:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:59:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1803131356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:59:38 np0005464891 nova_compute[259907]: 2025-10-01 16:59:38.727 2 DEBUG oslo_concurrency.processutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:59:38 np0005464891 nova_compute[259907]: 2025-10-01 16:59:38.732 2 DEBUG nova.compute.provider_tree [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:59:38 np0005464891 nova_compute[259907]: 2025-10-01 16:59:38.752 2 DEBUG nova.scheduler.client.report [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:59:38 np0005464891 nova_compute[259907]: 2025-10-01 16:59:38.788 2 DEBUG oslo_concurrency.lockutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:59:38 np0005464891 nova_compute[259907]: 2025-10-01 16:59:38.788 2 DEBUG nova.compute.manager [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 12:59:38 np0005464891 nova_compute[259907]: 2025-10-01 16:59:38.856 2 DEBUG nova.compute.manager [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 12:59:38 np0005464891 nova_compute[259907]: 2025-10-01 16:59:38.856 2 DEBUG nova.network.neutron [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 12:59:38 np0005464891 nova_compute[259907]: 2025-10-01 16:59:38.876 2 INFO nova.virt.libvirt.driver [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 12:59:38 np0005464891 nova_compute[259907]: 2025-10-01 16:59:38.901 2 DEBUG nova.compute.manager [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 12:59:38 np0005464891 nova_compute[259907]: 2025-10-01 16:59:38.945 2 INFO nova.virt.block_device [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Booting with volume bf6818e6-6dde-4758-b4de-98d03ab3626a at /dev/vda#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.108 2 DEBUG nova.policy [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '906d3d29e27b49c1860f5397c6028d96', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bb5e44f7928546dfb674d53cd3727027', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.143 2 DEBUG os_brick.utils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.145 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.157 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.157 741 DEBUG oslo.privsep.daemon [-] privsep: reply[708aedd7-5bd1-4409-8c0a-167cb8a1aa99]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.159 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.166 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.166 741 DEBUG oslo.privsep.daemon [-] privsep: reply[7ec5929d-a847-407c-87ab-a7e620b3c329]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.168 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.176 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.176 741 DEBUG oslo.privsep.daemon [-] privsep: reply[64cdba8b-70cb-4963-9d86-20172718ff23]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.178 741 DEBUG oslo.privsep.daemon [-] privsep: reply[8f2e9888-fcf7-4fee-8afb-fe64f5945435]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.179 2 DEBUG oslo_concurrency.processutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.199 2 DEBUG oslo_concurrency.processutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.201 2 DEBUG os_brick.initiator.connectors.lightos [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.201 2 DEBUG os_brick.initiator.connectors.lightos [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.202 2 DEBUG os_brick.initiator.connectors.lightos [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.202 2 DEBUG os_brick.utils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] <== get_connector_properties: return (57ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.202 2 DEBUG nova.virt.block_device [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Updating existing volume attachment record: c28ed9ac-6cbc-411b-8d08-17920ed1671f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 12:59:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1797: 321 pgs: 321 active+clean; 317 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 117 op/s
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.469 2 DEBUG nova.compute.manager [req-ac30f2f8-96e3-42aa-8c6b-2021c97152e9 req-7e29b2c7-302b-41b2-b026-586d0a3b91e6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Received event network-changed-c646a8ad-1950-4bec-8bf5-d0039005679e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.469 2 DEBUG nova.compute.manager [req-ac30f2f8-96e3-42aa-8c6b-2021c97152e9 req-7e29b2c7-302b-41b2-b026-586d0a3b91e6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Refreshing instance network info cache due to event network-changed-c646a8ad-1950-4bec-8bf5-d0039005679e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.470 2 DEBUG oslo_concurrency.lockutils [req-ac30f2f8-96e3-42aa-8c6b-2021c97152e9 req-7e29b2c7-302b-41b2-b026-586d0a3b91e6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-830f3147-422f-4e9d-ac70-0dbc385be575" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.471 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Updating instance_info_cache with network_info: [{"id": "c646a8ad-1950-4bec-8bf5-d0039005679e", "address": "fa:16:3e:24:01:77", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc646a8ad-19", "ovs_interfaceid": "c646a8ad-1950-4bec-8bf5-d0039005679e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.489 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Releasing lock "refresh_cache-830f3147-422f-4e9d-ac70-0dbc385be575" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.489 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.490 2 DEBUG oslo_concurrency.lockutils [req-ac30f2f8-96e3-42aa-8c6b-2021c97152e9 req-7e29b2c7-302b-41b2-b026-586d0a3b91e6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-830f3147-422f-4e9d-ac70-0dbc385be575" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.491 2 DEBUG nova.network.neutron [req-ac30f2f8-96e3-42aa-8c6b-2021c97152e9 req-7e29b2c7-302b-41b2-b026-586d0a3b91e6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Refreshing network info cache for port c646a8ad-1950-4bec-8bf5-d0039005679e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:59:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:59:39 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/88542196' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:59:39 np0005464891 nova_compute[259907]: 2025-10-01 16:59:39.956 2 DEBUG nova.network.neutron [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Successfully created port: 61aaf003-104a-4194-89f9-18ce4d3dfabb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 12:59:40 np0005464891 nova_compute[259907]: 2025-10-01 16:59:40.226 2 DEBUG nova.compute.manager [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 12:59:40 np0005464891 nova_compute[259907]: 2025-10-01 16:59:40.228 2 DEBUG nova.virt.libvirt.driver [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 12:59:40 np0005464891 nova_compute[259907]: 2025-10-01 16:59:40.228 2 INFO nova.virt.libvirt.driver [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Creating image(s)#033[00m
Oct  1 12:59:40 np0005464891 nova_compute[259907]: 2025-10-01 16:59:40.228 2 DEBUG nova.virt.libvirt.driver [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  1 12:59:40 np0005464891 nova_compute[259907]: 2025-10-01 16:59:40.229 2 DEBUG nova.virt.libvirt.driver [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Ensure instance console log exists: /var/lib/nova/instances/caeab115-da31-48fd-af65-2085a2c28333/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 12:59:40 np0005464891 nova_compute[259907]: 2025-10-01 16:59:40.229 2 DEBUG oslo_concurrency.lockutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:59:40 np0005464891 nova_compute[259907]: 2025-10-01 16:59:40.229 2 DEBUG oslo_concurrency.lockutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:59:40 np0005464891 nova_compute[259907]: 2025-10-01 16:59:40.229 2 DEBUG oslo_concurrency.lockutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:59:40 np0005464891 nova_compute[259907]: 2025-10-01 16:59:40.553 2 DEBUG nova.network.neutron [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Successfully updated port: 61aaf003-104a-4194-89f9-18ce4d3dfabb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 12:59:40 np0005464891 nova_compute[259907]: 2025-10-01 16:59:40.570 2 DEBUG oslo_concurrency.lockutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "refresh_cache-caeab115-da31-48fd-af65-2085a2c28333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:59:40 np0005464891 nova_compute[259907]: 2025-10-01 16:59:40.571 2 DEBUG oslo_concurrency.lockutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquired lock "refresh_cache-caeab115-da31-48fd-af65-2085a2c28333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:59:40 np0005464891 nova_compute[259907]: 2025-10-01 16:59:40.571 2 DEBUG nova.network.neutron [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 12:59:40 np0005464891 nova_compute[259907]: 2025-10-01 16:59:40.721 2 DEBUG nova.network.neutron [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 12:59:40 np0005464891 nova_compute[259907]: 2025-10-01 16:59:40.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:40 np0005464891 nova_compute[259907]: 2025-10-01 16:59:40.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:59:40 np0005464891 nova_compute[259907]: 2025-10-01 16:59:40.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.038 2 DEBUG nova.network.neutron [req-ac30f2f8-96e3-42aa-8c6b-2021c97152e9 req-7e29b2c7-302b-41b2-b026-586d0a3b91e6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Updated VIF entry in instance network info cache for port c646a8ad-1950-4bec-8bf5-d0039005679e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.039 2 DEBUG nova.network.neutron [req-ac30f2f8-96e3-42aa-8c6b-2021c97152e9 req-7e29b2c7-302b-41b2-b026-586d0a3b91e6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Updating instance_info_cache with network_info: [{"id": "c646a8ad-1950-4bec-8bf5-d0039005679e", "address": "fa:16:3e:24:01:77", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc646a8ad-19", "ovs_interfaceid": "c646a8ad-1950-4bec-8bf5-d0039005679e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.053 2 DEBUG oslo_concurrency.lockutils [req-ac30f2f8-96e3-42aa-8c6b-2021c97152e9 req-7e29b2c7-302b-41b2-b026-586d0a3b91e6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-830f3147-422f-4e9d-ac70-0dbc385be575" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:59:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1798: 321 pgs: 321 active+clean; 317 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 16 KiB/s wr, 107 op/s
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.552 2 DEBUG nova.compute.manager [req-3b6f86fa-4eee-4539-ac6c-1adf3451ae02 req-12585fc0-80e5-40d6-9c32-5957b855c834 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Received event network-changed-61aaf003-104a-4194-89f9-18ce4d3dfabb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.552 2 DEBUG nova.compute.manager [req-3b6f86fa-4eee-4539-ac6c-1adf3451ae02 req-12585fc0-80e5-40d6-9c32-5957b855c834 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Refreshing instance network info cache due to event network-changed-61aaf003-104a-4194-89f9-18ce4d3dfabb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.552 2 DEBUG oslo_concurrency.lockutils [req-3b6f86fa-4eee-4539-ac6c-1adf3451ae02 req-12585fc0-80e5-40d6-9c32-5957b855c834 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-caeab115-da31-48fd-af65-2085a2c28333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.574 2 DEBUG nova.network.neutron [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Updating instance_info_cache with network_info: [{"id": "61aaf003-104a-4194-89f9-18ce4d3dfabb", "address": "fa:16:3e:cd:c1:82", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61aaf003-10", "ovs_interfaceid": "61aaf003-104a-4194-89f9-18ce4d3dfabb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.596 2 DEBUG oslo_concurrency.lockutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Releasing lock "refresh_cache-caeab115-da31-48fd-af65-2085a2c28333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.596 2 DEBUG nova.compute.manager [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Instance network_info: |[{"id": "61aaf003-104a-4194-89f9-18ce4d3dfabb", "address": "fa:16:3e:cd:c1:82", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61aaf003-10", "ovs_interfaceid": "61aaf003-104a-4194-89f9-18ce4d3dfabb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.596 2 DEBUG oslo_concurrency.lockutils [req-3b6f86fa-4eee-4539-ac6c-1adf3451ae02 req-12585fc0-80e5-40d6-9c32-5957b855c834 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-caeab115-da31-48fd-af65-2085a2c28333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.597 2 DEBUG nova.network.neutron [req-3b6f86fa-4eee-4539-ac6c-1adf3451ae02 req-12585fc0-80e5-40d6-9c32-5957b855c834 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Refreshing network info cache for port 61aaf003-104a-4194-89f9-18ce4d3dfabb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.599 2 DEBUG nova.virt.libvirt.driver [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Start _get_guest_xml network_info=[{"id": "61aaf003-104a-4194-89f9-18ce4d3dfabb", "address": "fa:16:3e:cd:c1:82", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61aaf003-10", "ovs_interfaceid": "61aaf003-104a-4194-89f9-18ce4d3dfabb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'mount_device': '/dev/vda', 'attachment_id': 'c28ed9ac-6cbc-411b-8d08-17920ed1671f', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-bf6818e6-6dde-4758-b4de-98d03ab3626a', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'bf6818e6-6dde-4758-b4de-98d03ab3626a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'caeab115-da31-48fd-af65-2085a2c28333', 'attached_at': '', 'detached_at': '', 'volume_id': 'bf6818e6-6dde-4758-b4de-98d03ab3626a', 'serial': 'bf6818e6-6dde-4758-b4de-98d03ab3626a'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.603 2 WARNING nova.virt.libvirt.driver [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.609 2 DEBUG nova.virt.libvirt.host [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.610 2 DEBUG nova.virt.libvirt.host [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.612 2 DEBUG nova.virt.libvirt.host [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.613 2 DEBUG nova.virt.libvirt.host [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.613 2 DEBUG nova.virt.libvirt.driver [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.613 2 DEBUG nova.virt.hardware [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.614 2 DEBUG nova.virt.hardware [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.614 2 DEBUG nova.virt.hardware [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.614 2 DEBUG nova.virt.hardware [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.614 2 DEBUG nova.virt.hardware [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.615 2 DEBUG nova.virt.hardware [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.615 2 DEBUG nova.virt.hardware [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.615 2 DEBUG nova.virt.hardware [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.615 2 DEBUG nova.virt.hardware [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.615 2 DEBUG nova.virt.hardware [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.616 2 DEBUG nova.virt.hardware [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.639 2 DEBUG nova.storage.rbd_utils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] rbd image caeab115-da31-48fd-af65-2085a2c28333_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.644 2 DEBUG oslo_concurrency.processutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:59:41 np0005464891 nova_compute[259907]: 2025-10-01 16:59:41.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:59:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:59:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:59:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:59:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 12:59:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 12:59:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 12:59:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3796035610' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.172 2 DEBUG oslo_concurrency.processutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.303 2 DEBUG os_brick.encryptors [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Using volume encryption metadata '{'encryption_key_id': '1263d49f-48ec-4a54-9410-6aff589331b2', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-bf6818e6-6dde-4758-b4de-98d03ab3626a', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'bf6818e6-6dde-4758-b4de-98d03ab3626a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'caeab115-da31-48fd-af65-2085a2c28333', 'attached_at': '', 'detached_at': '', 'volume_id': 'bf6818e6-6dde-4758-b4de-98d03ab3626a', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.305 2 DEBUG barbicanclient.client [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.321 2 DEBUG barbicanclient.v1.secrets [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/1263d49f-48ec-4a54-9410-6aff589331b2 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.322 2 INFO barbicanclient.base [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/1263d49f-48ec-4a54-9410-6aff589331b2#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.349 2 DEBUG barbicanclient.client [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.349 2 INFO barbicanclient.base [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/1263d49f-48ec-4a54-9410-6aff589331b2#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.441 2 DEBUG barbicanclient.client [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.442 2 INFO barbicanclient.base [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/1263d49f-48ec-4a54-9410-6aff589331b2#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.466 2 DEBUG barbicanclient.client [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.467 2 INFO barbicanclient.base [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/1263d49f-48ec-4a54-9410-6aff589331b2#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.489 2 DEBUG barbicanclient.client [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.490 2 INFO barbicanclient.base [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/1263d49f-48ec-4a54-9410-6aff589331b2#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.509 2 DEBUG barbicanclient.client [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.510 2 INFO barbicanclient.base [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/1263d49f-48ec-4a54-9410-6aff589331b2#033[00m
Oct  1 12:59:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.535 2 DEBUG barbicanclient.client [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.535 2 INFO barbicanclient.base [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/1263d49f-48ec-4a54-9410-6aff589331b2#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.562 2 DEBUG barbicanclient.client [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.562 2 INFO barbicanclient.base [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/1263d49f-48ec-4a54-9410-6aff589331b2#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.580 2 DEBUG barbicanclient.client [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.581 2 INFO barbicanclient.base [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/1263d49f-48ec-4a54-9410-6aff589331b2#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.608 2 DEBUG barbicanclient.client [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.609 2 INFO barbicanclient.base [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/1263d49f-48ec-4a54-9410-6aff589331b2#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.632 2 DEBUG barbicanclient.client [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.632 2 INFO barbicanclient.base [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/1263d49f-48ec-4a54-9410-6aff589331b2#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.658 2 DEBUG barbicanclient.client [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.659 2 INFO barbicanclient.base [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/1263d49f-48ec-4a54-9410-6aff589331b2#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.680 2 DEBUG barbicanclient.client [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.681 2 INFO barbicanclient.base [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/1263d49f-48ec-4a54-9410-6aff589331b2#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.704 2 DEBUG barbicanclient.client [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.705 2 INFO barbicanclient.base [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/1263d49f-48ec-4a54-9410-6aff589331b2#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.730 2 DEBUG barbicanclient.client [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.730 2 INFO barbicanclient.base [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Calculated Secrets uuid ref: secrets/1263d49f-48ec-4a54-9410-6aff589331b2#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.757 2 DEBUG barbicanclient.client [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.758 2 DEBUG nova.virt.libvirt.host [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct  1 12:59:42 np0005464891 nova_compute[259907]:  <usage type="volume">
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <volume>bf6818e6-6dde-4758-b4de-98d03ab3626a</volume>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:  </usage>
Oct  1 12:59:42 np0005464891 nova_compute[259907]: </secret>
Oct  1 12:59:42 np0005464891 nova_compute[259907]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.791 2 DEBUG nova.virt.libvirt.vif [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:59:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1137424607',display_name='tempest-TestEncryptedCinderVolumes-server-1137424607',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1137424607',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIhJiMuVwk4EQ7wkCYcLaeTsPomALwyR3FBK+97oa6ynrLvPrKJKnE71uKm0O/hFbPLnI7X22RnrmUili5anoyjadz+yIM+FZfOiuxhlfC8kCRP4tSOOTh7DLMRl7W7xOg==',key_name='tempest-TestEncryptedCinderVolumes-620996693',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bb5e44f7928546dfb674d53cd3727027',ramdisk_id='',reservation_id='r-nkl87b2i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-803701988',owner_user_name='tempest-TestEncryptedCinderVolumes-803701988-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:59:38Z,user_data=None,user_id='906d3d29e27b49c1860f5397c6028d96',uuid=caeab115-da31-48fd-af65-2085a2c28333,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "61aaf003-104a-4194-89f9-18ce4d3dfabb", "address": "fa:16:3e:cd:c1:82", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61aaf003-10", "ovs_interfaceid": "61aaf003-104a-4194-89f9-18ce4d3dfabb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.791 2 DEBUG nova.network.os_vif_util [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Converting VIF {"id": "61aaf003-104a-4194-89f9-18ce4d3dfabb", "address": "fa:16:3e:cd:c1:82", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61aaf003-10", "ovs_interfaceid": "61aaf003-104a-4194-89f9-18ce4d3dfabb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.792 2 DEBUG nova.network.os_vif_util [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:c1:82,bridge_name='br-int',has_traffic_filtering=True,id=61aaf003-104a-4194-89f9-18ce4d3dfabb,network=Network(2345ad6b-d676-4546-a17e-6f7405ff5f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61aaf003-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.793 2 DEBUG nova.objects.instance [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lazy-loading 'pci_devices' on Instance uuid caeab115-da31-48fd-af65-2085a2c28333 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.807 2 DEBUG nova.virt.libvirt.driver [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] End _get_guest_xml xml=<domain type="kvm">
Oct  1 12:59:42 np0005464891 nova_compute[259907]:  <uuid>caeab115-da31-48fd-af65-2085a2c28333</uuid>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:  <name>instance-00000017</name>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-1137424607</nova:name>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 16:59:41</nova:creationTime>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 12:59:42 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:        <nova:user uuid="906d3d29e27b49c1860f5397c6028d96">tempest-TestEncryptedCinderVolumes-803701988-project-member</nova:user>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:        <nova:project uuid="bb5e44f7928546dfb674d53cd3727027">tempest-TestEncryptedCinderVolumes-803701988</nova:project>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:        <nova:port uuid="61aaf003-104a-4194-89f9-18ce4d3dfabb">
Oct  1 12:59:42 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <system>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <entry name="serial">caeab115-da31-48fd-af65-2085a2c28333</entry>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <entry name="uuid">caeab115-da31-48fd-af65-2085a2c28333</entry>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    </system>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:  <os>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:  </os>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:  <features>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:  </features>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:  </clock>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:  <devices>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/caeab115-da31-48fd-af65-2085a2c28333_disk.config">
Oct  1 12:59:42 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:59:42 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="volumes/volume-bf6818e6-6dde-4758-b4de-98d03ab3626a">
Oct  1 12:59:42 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      </source>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 12:59:42 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      </auth>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <serial>bf6818e6-6dde-4758-b4de-98d03ab3626a</serial>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <encryption format="luks">
Oct  1 12:59:42 np0005464891 nova_compute[259907]:        <secret type="passphrase" uuid="07d053d7-d577-48ad-93f1-15cbe1bea659"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      </encryption>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    </disk>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:cd:c1:82"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <target dev="tap61aaf003-10"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    </interface>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/caeab115-da31-48fd-af65-2085a2c28333/console.log" append="off"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    </serial>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <video>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    </video>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    </rng>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 12:59:42 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 12:59:42 np0005464891 nova_compute[259907]:  </devices>
Oct  1 12:59:42 np0005464891 nova_compute[259907]: </domain>
Oct  1 12:59:42 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.809 2 DEBUG nova.compute.manager [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Preparing to wait for external event network-vif-plugged-61aaf003-104a-4194-89f9-18ce4d3dfabb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.809 2 DEBUG oslo_concurrency.lockutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "caeab115-da31-48fd-af65-2085a2c28333-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.809 2 DEBUG oslo_concurrency.lockutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "caeab115-da31-48fd-af65-2085a2c28333-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.810 2 DEBUG oslo_concurrency.lockutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "caeab115-da31-48fd-af65-2085a2c28333-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.811 2 DEBUG nova.virt.libvirt.vif [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T16:59:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1137424607',display_name='tempest-TestEncryptedCinderVolumes-server-1137424607',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1137424607',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIhJiMuVwk4EQ7wkCYcLaeTsPomALwyR3FBK+97oa6ynrLvPrKJKnE71uKm0O/hFbPLnI7X22RnrmUili5anoyjadz+yIM+FZfOiuxhlfC8kCRP4tSOOTh7DLMRl7W7xOg==',key_name='tempest-TestEncryptedCinderVolumes-620996693',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bb5e44f7928546dfb674d53cd3727027',ramdisk_id='',reservation_id='r-nkl87b2i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-803701988',owner_user_name='tempest-TestEncryptedCinderVolumes-803701988-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T16:59:38Z,user_data=None,user_id='906d3d29e27b49c1860f5397c6028d96',uuid=caeab115-da31-48fd-af65-2085a2c28333,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "61aaf003-104a-4194-89f9-18ce4d3dfabb", "address": "fa:16:3e:cd:c1:82", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61aaf003-10", "ovs_interfaceid": "61aaf003-104a-4194-89f9-18ce4d3dfabb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.811 2 DEBUG nova.network.os_vif_util [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Converting VIF {"id": "61aaf003-104a-4194-89f9-18ce4d3dfabb", "address": "fa:16:3e:cd:c1:82", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61aaf003-10", "ovs_interfaceid": "61aaf003-104a-4194-89f9-18ce4d3dfabb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.812 2 DEBUG nova.network.os_vif_util [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:c1:82,bridge_name='br-int',has_traffic_filtering=True,id=61aaf003-104a-4194-89f9-18ce4d3dfabb,network=Network(2345ad6b-d676-4546-a17e-6f7405ff5f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61aaf003-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.812 2 DEBUG os_vif [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:c1:82,bridge_name='br-int',has_traffic_filtering=True,id=61aaf003-104a-4194-89f9-18ce4d3dfabb,network=Network(2345ad6b-d676-4546-a17e-6f7405ff5f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61aaf003-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.814 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.814 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.818 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61aaf003-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.819 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap61aaf003-10, col_values=(('external_ids', {'iface-id': '61aaf003-104a-4194-89f9-18ce4d3dfabb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:c1:82', 'vm-uuid': 'caeab115-da31-48fd-af65-2085a2c28333'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:42 np0005464891 NetworkManager[44940]: <info>  [1759337982.8213] manager: (tap61aaf003-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.831 2 INFO os_vif [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:c1:82,bridge_name='br-int',has_traffic_filtering=True,id=61aaf003-104a-4194-89f9-18ce4d3dfabb,network=Network(2345ad6b-d676-4546-a17e-6f7405ff5f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61aaf003-10')#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.886 2 DEBUG nova.virt.libvirt.driver [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.886 2 DEBUG nova.virt.libvirt.driver [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.886 2 DEBUG nova.virt.libvirt.driver [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] No VIF found with MAC fa:16:3e:cd:c1:82, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.887 2 INFO nova.virt.libvirt.driver [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Using config drive#033[00m
Oct  1 12:59:42 np0005464891 nova_compute[259907]: 2025-10-01 16:59:42.903 2 DEBUG nova.storage.rbd_utils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] rbd image caeab115-da31-48fd-af65-2085a2c28333_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:59:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1799: 321 pgs: 321 active+clean; 317 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 95 op/s
Oct  1 12:59:43 np0005464891 nova_compute[259907]: 2025-10-01 16:59:43.290 2 DEBUG nova.network.neutron [req-3b6f86fa-4eee-4539-ac6c-1adf3451ae02 req-12585fc0-80e5-40d6-9c32-5957b855c834 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Updated VIF entry in instance network info cache for port 61aaf003-104a-4194-89f9-18ce4d3dfabb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:59:43 np0005464891 nova_compute[259907]: 2025-10-01 16:59:43.291 2 DEBUG nova.network.neutron [req-3b6f86fa-4eee-4539-ac6c-1adf3451ae02 req-12585fc0-80e5-40d6-9c32-5957b855c834 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Updating instance_info_cache with network_info: [{"id": "61aaf003-104a-4194-89f9-18ce4d3dfabb", "address": "fa:16:3e:cd:c1:82", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61aaf003-10", "ovs_interfaceid": "61aaf003-104a-4194-89f9-18ce4d3dfabb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:59:43 np0005464891 nova_compute[259907]: 2025-10-01 16:59:43.329 2 DEBUG oslo_concurrency.lockutils [req-3b6f86fa-4eee-4539-ac6c-1adf3451ae02 req-12585fc0-80e5-40d6-9c32-5957b855c834 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-caeab115-da31-48fd-af65-2085a2c28333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:59:43 np0005464891 nova_compute[259907]: 2025-10-01 16:59:43.419 2 INFO nova.virt.libvirt.driver [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Creating config drive at /var/lib/nova/instances/caeab115-da31-48fd-af65-2085a2c28333/disk.config#033[00m
Oct  1 12:59:43 np0005464891 nova_compute[259907]: 2025-10-01 16:59:43.432 2 DEBUG oslo_concurrency.processutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/caeab115-da31-48fd-af65-2085a2c28333/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvnf1jwjg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:59:43 np0005464891 nova_compute[259907]: 2025-10-01 16:59:43.565 2 DEBUG oslo_concurrency.processutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/caeab115-da31-48fd-af65-2085a2c28333/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvnf1jwjg" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:59:43 np0005464891 nova_compute[259907]: 2025-10-01 16:59:43.593 2 DEBUG nova.storage.rbd_utils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] rbd image caeab115-da31-48fd-af65-2085a2c28333_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 12:59:43 np0005464891 nova_compute[259907]: 2025-10-01 16:59:43.597 2 DEBUG oslo_concurrency.processutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/caeab115-da31-48fd-af65-2085a2c28333/disk.config caeab115-da31-48fd-af65-2085a2c28333_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:59:44 np0005464891 nova_compute[259907]: 2025-10-01 16:59:44.174 2 DEBUG oslo_concurrency.processutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/caeab115-da31-48fd-af65-2085a2c28333/disk.config caeab115-da31-48fd-af65-2085a2c28333_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:59:44 np0005464891 nova_compute[259907]: 2025-10-01 16:59:44.175 2 INFO nova.virt.libvirt.driver [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Deleting local config drive /var/lib/nova/instances/caeab115-da31-48fd-af65-2085a2c28333/disk.config because it was imported into RBD.#033[00m
Oct  1 12:59:44 np0005464891 kernel: tap61aaf003-10: entered promiscuous mode
Oct  1 12:59:44 np0005464891 NetworkManager[44940]: <info>  [1759337984.2269] manager: (tap61aaf003-10): new Tun device (/org/freedesktop/NetworkManager/Devices/120)
Oct  1 12:59:44 np0005464891 ovn_controller[152409]: 2025-10-01T16:59:44Z|00211|binding|INFO|Claiming lport 61aaf003-104a-4194-89f9-18ce4d3dfabb for this chassis.
Oct  1 12:59:44 np0005464891 nova_compute[259907]: 2025-10-01 16:59:44.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:44 np0005464891 ovn_controller[152409]: 2025-10-01T16:59:44Z|00212|binding|INFO|61aaf003-104a-4194-89f9-18ce4d3dfabb: Claiming fa:16:3e:cd:c1:82 10.100.0.10
Oct  1 12:59:44 np0005464891 ovn_controller[152409]: 2025-10-01T16:59:44Z|00213|binding|INFO|Setting lport 61aaf003-104a-4194-89f9-18ce4d3dfabb ovn-installed in OVS
Oct  1 12:59:44 np0005464891 nova_compute[259907]: 2025-10-01 16:59:44.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:44 np0005464891 systemd-machined[214891]: New machine qemu-23-instance-00000017.
Oct  1 12:59:44 np0005464891 nova_compute[259907]: 2025-10-01 16:59:44.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:44 np0005464891 systemd[1]: Started Virtual Machine qemu-23-instance-00000017.
Oct  1 12:59:44 np0005464891 systemd-udevd[299328]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 12:59:44 np0005464891 NetworkManager[44940]: <info>  [1759337984.2892] device (tap61aaf003-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 12:59:44 np0005464891 NetworkManager[44940]: <info>  [1759337984.2901] device (tap61aaf003-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 12:59:44 np0005464891 ovn_controller[152409]: 2025-10-01T16:59:44Z|00214|binding|INFO|Setting lport 61aaf003-104a-4194-89f9-18ce4d3dfabb up in Southbound
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.292 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:c1:82 10.100.0.10'], port_security=['fa:16:3e:cd:c1:82 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'caeab115-da31-48fd-af65-2085a2c28333', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2345ad6b-d676-4546-a17e-6f7405ff5f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bb5e44f7928546dfb674d53cd3727027', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c51767f2-742e-4209-a278-1c1f1e9af624', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08e741b0-61e8-4126-b98f-610a01494f2d, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=61aaf003-104a-4194-89f9-18ce4d3dfabb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.294 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 61aaf003-104a-4194-89f9-18ce4d3dfabb in datapath 2345ad6b-d676-4546-a17e-6f7405ff5f24 bound to our chassis#033[00m
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.296 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2345ad6b-d676-4546-a17e-6f7405ff5f24#033[00m
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.307 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[16454060-1e02-4f23-be2a-429df52bb1bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.308 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2345ad6b-d1 in ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.310 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2345ad6b-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.310 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[32d92966-4b1d-42be-affa-2a6d8daba922]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.311 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[974de222-cdd4-4e71-8f2e-1433e19f6ce7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.322 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[28cf1b7a-0930-4005-8dab-8e9578077bfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.344 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[3a538a86-8764-426b-aa4f-c1767a59baf0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.379 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[b1c10914-cceb-4641-93c1-04cf7b061898]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.385 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[e3ca6569-f181-468d-9328-50a2e8c804a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:44 np0005464891 NetworkManager[44940]: <info>  [1759337984.3859] manager: (tap2345ad6b-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/121)
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.427 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[44a2b9b2-cb4d-451c-8a58-24e4a0272c59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.431 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[302cbae8-83ac-48ac-9ee7-a51895311d10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:44 np0005464891 NetworkManager[44940]: <info>  [1759337984.4532] device (tap2345ad6b-d0): carrier: link connected
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.459 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[d4cba3f7-3772-4bab-bcd6-893eb9c3397a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.479 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[943b62f3-90d2-4671-ae91-d0f0ffde8724]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2345ad6b-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:95:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493302, 'reachable_time': 42511, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299361, 'error': None, 'target': 'ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.496 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[93319660-b319-454a-801b-c0698e17a9a8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1e:9597'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493302, 'tstamp': 493302}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299362, 'error': None, 'target': 'ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.512 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[b6fdaff3-fdad-447e-8b68-8d88530ec1c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2345ad6b-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:95:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493302, 'reachable_time': 42511, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299363, 'error': None, 'target': 'ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.543 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[b22ab974-7ca8-48dd-ac4b-808013c487c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.610 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[c23a2ba4-afb0-4024-8830-5c9e6111a95c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.612 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2345ad6b-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.612 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.613 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2345ad6b-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:59:44 np0005464891 nova_compute[259907]: 2025-10-01 16:59:44.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:44 np0005464891 NetworkManager[44940]: <info>  [1759337984.6161] manager: (tap2345ad6b-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/122)
Oct  1 12:59:44 np0005464891 kernel: tap2345ad6b-d0: entered promiscuous mode
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.618 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2345ad6b-d0, col_values=(('external_ids', {'iface-id': '459f1bd9-9c63-458d-a0ce-6bd274d1ecbb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:59:44 np0005464891 nova_compute[259907]: 2025-10-01 16:59:44.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:44 np0005464891 ovn_controller[152409]: 2025-10-01T16:59:44Z|00215|binding|INFO|Releasing lport 459f1bd9-9c63-458d-a0ce-6bd274d1ecbb from this chassis (sb_readonly=0)
Oct  1 12:59:44 np0005464891 nova_compute[259907]: 2025-10-01 16:59:44.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.634 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2345ad6b-d676-4546-a17e-6f7405ff5f24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2345ad6b-d676-4546-a17e-6f7405ff5f24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.635 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[19f88070-dfce-4ebe-8cf5-757a7a5210db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.636 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-2345ad6b-d676-4546-a17e-6f7405ff5f24
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/2345ad6b-d676-4546-a17e-6f7405ff5f24.pid.haproxy
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID 2345ad6b-d676-4546-a17e-6f7405ff5f24
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 12:59:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:44.637 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24', 'env', 'PROCESS_TAG=haproxy-2345ad6b-d676-4546-a17e-6f7405ff5f24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2345ad6b-d676-4546-a17e-6f7405ff5f24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 12:59:45 np0005464891 podman[299393]: 2025-10-01 16:59:45.052181527 +0000 UTC m=+0.095471987 container create b698e00f4d7119f77649064dfabfe756ce2c6ef34a7edeb285798d7a0fec8660 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  1 12:59:45 np0005464891 podman[299393]: 2025-10-01 16:59:44.978706423 +0000 UTC m=+0.021996903 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 12:59:45 np0005464891 systemd[1]: Started libpod-conmon-b698e00f4d7119f77649064dfabfe756ce2c6ef34a7edeb285798d7a0fec8660.scope.
Oct  1 12:59:45 np0005464891 systemd[1]: Started libcrun container.
Oct  1 12:59:45 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace6416769f5b01f4c5cd5405a25ce02070ba686ecc21bac93fc68be326bde0f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 12:59:45 np0005464891 podman[299393]: 2025-10-01 16:59:45.235294644 +0000 UTC m=+0.278585144 container init b698e00f4d7119f77649064dfabfe756ce2c6ef34a7edeb285798d7a0fec8660 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  1 12:59:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1800: 321 pgs: 321 active+clean; 323 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 811 KiB/s wr, 108 op/s
Oct  1 12:59:45 np0005464891 podman[299393]: 2025-10-01 16:59:45.244180607 +0000 UTC m=+0.287471097 container start b698e00f4d7119f77649064dfabfe756ce2c6ef34a7edeb285798d7a0fec8660 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:59:45 np0005464891 neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24[299410]: [NOTICE]   (299429) : New worker (299434) forked
Oct  1 12:59:45 np0005464891 neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24[299410]: [NOTICE]   (299429) : Loading success.
Oct  1 12:59:45 np0005464891 nova_compute[259907]: 2025-10-01 16:59:45.554 2 DEBUG nova.compute.manager [req-556df990-0521-4035-a0a9-20266e0e80e5 req-d018e791-4a4f-41c2-adec-b7bd2a8641da af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Received event network-vif-plugged-61aaf003-104a-4194-89f9-18ce4d3dfabb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:59:45 np0005464891 nova_compute[259907]: 2025-10-01 16:59:45.555 2 DEBUG oslo_concurrency.lockutils [req-556df990-0521-4035-a0a9-20266e0e80e5 req-d018e791-4a4f-41c2-adec-b7bd2a8641da af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "caeab115-da31-48fd-af65-2085a2c28333-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:59:45 np0005464891 nova_compute[259907]: 2025-10-01 16:59:45.555 2 DEBUG oslo_concurrency.lockutils [req-556df990-0521-4035-a0a9-20266e0e80e5 req-d018e791-4a4f-41c2-adec-b7bd2a8641da af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "caeab115-da31-48fd-af65-2085a2c28333-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:59:45 np0005464891 nova_compute[259907]: 2025-10-01 16:59:45.556 2 DEBUG oslo_concurrency.lockutils [req-556df990-0521-4035-a0a9-20266e0e80e5 req-d018e791-4a4f-41c2-adec-b7bd2a8641da af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "caeab115-da31-48fd-af65-2085a2c28333-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:59:45 np0005464891 nova_compute[259907]: 2025-10-01 16:59:45.556 2 DEBUG nova.compute.manager [req-556df990-0521-4035-a0a9-20266e0e80e5 req-d018e791-4a4f-41c2-adec-b7bd2a8641da af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Processing event network-vif-plugged-61aaf003-104a-4194-89f9-18ce4d3dfabb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 12:59:46 np0005464891 nova_compute[259907]: 2025-10-01 16:59:46.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:47 np0005464891 podman[299461]: 2025-10-01 16:59:47.024767209 +0000 UTC m=+0.088675100 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  1 12:59:47 np0005464891 ovn_controller[152409]: 2025-10-01T16:59:47Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:24:01:77 10.100.0.8
Oct  1 12:59:47 np0005464891 ovn_controller[152409]: 2025-10-01T16:59:47Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:24:01:77 10.100.0.8
Oct  1 12:59:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1801: 321 pgs: 321 active+clean; 323 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 809 KiB/s wr, 20 op/s
Oct  1 12:59:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:59:47 np0005464891 nova_compute[259907]: 2025-10-01 16:59:47.659 2 DEBUG nova.compute.manager [req-fc00b8d5-4144-48de-8f06-bf88cadb4183 req-9800cb56-cb44-4737-9814-f84b522857f9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Received event network-vif-plugged-61aaf003-104a-4194-89f9-18ce4d3dfabb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:59:47 np0005464891 nova_compute[259907]: 2025-10-01 16:59:47.660 2 DEBUG oslo_concurrency.lockutils [req-fc00b8d5-4144-48de-8f06-bf88cadb4183 req-9800cb56-cb44-4737-9814-f84b522857f9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "caeab115-da31-48fd-af65-2085a2c28333-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:59:47 np0005464891 nova_compute[259907]: 2025-10-01 16:59:47.660 2 DEBUG oslo_concurrency.lockutils [req-fc00b8d5-4144-48de-8f06-bf88cadb4183 req-9800cb56-cb44-4737-9814-f84b522857f9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "caeab115-da31-48fd-af65-2085a2c28333-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:59:47 np0005464891 nova_compute[259907]: 2025-10-01 16:59:47.660 2 DEBUG oslo_concurrency.lockutils [req-fc00b8d5-4144-48de-8f06-bf88cadb4183 req-9800cb56-cb44-4737-9814-f84b522857f9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "caeab115-da31-48fd-af65-2085a2c28333-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:59:47 np0005464891 nova_compute[259907]: 2025-10-01 16:59:47.661 2 DEBUG nova.compute.manager [req-fc00b8d5-4144-48de-8f06-bf88cadb4183 req-9800cb56-cb44-4737-9814-f84b522857f9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] No waiting events found dispatching network-vif-plugged-61aaf003-104a-4194-89f9-18ce4d3dfabb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:59:47 np0005464891 nova_compute[259907]: 2025-10-01 16:59:47.661 2 WARNING nova.compute.manager [req-fc00b8d5-4144-48de-8f06-bf88cadb4183 req-9800cb56-cb44-4737-9814-f84b522857f9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Received unexpected event network-vif-plugged-61aaf003-104a-4194-89f9-18ce4d3dfabb for instance with vm_state building and task_state spawning.#033[00m
Oct  1 12:59:47 np0005464891 nova_compute[259907]: 2025-10-01 16:59:47.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:48 np0005464891 nova_compute[259907]: 2025-10-01 16:59:48.407 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337988.4072359, caeab115-da31-48fd-af65-2085a2c28333 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:59:48 np0005464891 nova_compute[259907]: 2025-10-01 16:59:48.408 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: caeab115-da31-48fd-af65-2085a2c28333] VM Started (Lifecycle Event)#033[00m
Oct  1 12:59:48 np0005464891 nova_compute[259907]: 2025-10-01 16:59:48.410 2 DEBUG nova.compute.manager [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 12:59:48 np0005464891 nova_compute[259907]: 2025-10-01 16:59:48.414 2 DEBUG nova.virt.libvirt.driver [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 12:59:48 np0005464891 nova_compute[259907]: 2025-10-01 16:59:48.418 2 INFO nova.virt.libvirt.driver [-] [instance: caeab115-da31-48fd-af65-2085a2c28333] Instance spawned successfully.#033[00m
Oct  1 12:59:48 np0005464891 nova_compute[259907]: 2025-10-01 16:59:48.419 2 DEBUG nova.virt.libvirt.driver [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  1 12:59:48 np0005464891 nova_compute[259907]: 2025-10-01 16:59:48.507 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: caeab115-da31-48fd-af65-2085a2c28333] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:59:48 np0005464891 nova_compute[259907]: 2025-10-01 16:59:48.512 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: caeab115-da31-48fd-af65-2085a2c28333] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:59:48 np0005464891 nova_compute[259907]: 2025-10-01 16:59:48.618 2 DEBUG nova.virt.libvirt.driver [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:59:48 np0005464891 nova_compute[259907]: 2025-10-01 16:59:48.618 2 DEBUG nova.virt.libvirt.driver [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:59:48 np0005464891 nova_compute[259907]: 2025-10-01 16:59:48.619 2 DEBUG nova.virt.libvirt.driver [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:59:48 np0005464891 nova_compute[259907]: 2025-10-01 16:59:48.620 2 DEBUG nova.virt.libvirt.driver [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:59:48 np0005464891 nova_compute[259907]: 2025-10-01 16:59:48.620 2 DEBUG nova.virt.libvirt.driver [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:59:48 np0005464891 nova_compute[259907]: 2025-10-01 16:59:48.621 2 DEBUG nova.virt.libvirt.driver [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 12:59:48 np0005464891 nova_compute[259907]: 2025-10-01 16:59:48.694 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: caeab115-da31-48fd-af65-2085a2c28333] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 12:59:48 np0005464891 nova_compute[259907]: 2025-10-01 16:59:48.694 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337988.4082048, caeab115-da31-48fd-af65-2085a2c28333 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:59:48 np0005464891 nova_compute[259907]: 2025-10-01 16:59:48.695 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: caeab115-da31-48fd-af65-2085a2c28333] VM Paused (Lifecycle Event)#033[00m
Oct  1 12:59:48 np0005464891 nova_compute[259907]: 2025-10-01 16:59:48.766 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: caeab115-da31-48fd-af65-2085a2c28333] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:59:48 np0005464891 nova_compute[259907]: 2025-10-01 16:59:48.769 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759337988.413696, caeab115-da31-48fd-af65-2085a2c28333 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 12:59:48 np0005464891 nova_compute[259907]: 2025-10-01 16:59:48.770 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: caeab115-da31-48fd-af65-2085a2c28333] VM Resumed (Lifecycle Event)#033[00m
Oct  1 12:59:48 np0005464891 nova_compute[259907]: 2025-10-01 16:59:48.866 2 INFO nova.compute.manager [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Took 8.64 seconds to spawn the instance on the hypervisor.#033[00m
Oct  1 12:59:48 np0005464891 nova_compute[259907]: 2025-10-01 16:59:48.867 2 DEBUG nova.compute.manager [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:59:49 np0005464891 nova_compute[259907]: 2025-10-01 16:59:49.225 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: caeab115-da31-48fd-af65-2085a2c28333] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 12:59:49 np0005464891 nova_compute[259907]: 2025-10-01 16:59:49.229 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: caeab115-da31-48fd-af65-2085a2c28333] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 12:59:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1802: 321 pgs: 321 active+clean; 335 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 158 KiB/s rd, 1.4 MiB/s wr, 35 op/s
Oct  1 12:59:49 np0005464891 nova_compute[259907]: 2025-10-01 16:59:49.296 2 INFO nova.compute.manager [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Took 11.23 seconds to build instance.#033[00m
Oct  1 12:59:49 np0005464891 nova_compute[259907]: 2025-10-01 16:59:49.338 2 DEBUG oslo_concurrency.lockutils [None req-9571c064-18ec-482e-99fc-4761a29d0a23 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "caeab115-da31-48fd-af65-2085a2c28333" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:59:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1803: 321 pgs: 321 active+clean; 346 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Oct  1 12:59:51 np0005464891 nova_compute[259907]: 2025-10-01 16:59:51.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:51 np0005464891 podman[299494]: 2025-10-01 16:59:51.946841298 +0000 UTC m=+0.059625095 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  1 12:59:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:59:52 np0005464891 nova_compute[259907]: 2025-10-01 16:59:52.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1804: 321 pgs: 321 active+clean; 350 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Oct  1 12:59:54 np0005464891 podman[299514]: 2025-10-01 16:59:54.960735819 +0000 UTC m=+0.064178860 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  1 12:59:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1805: 321 pgs: 321 active+clean; 350 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.267 2 DEBUG oslo_concurrency.lockutils [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "830f3147-422f-4e9d-ac70-0dbc385be575" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.268 2 DEBUG oslo_concurrency.lockutils [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "830f3147-422f-4e9d-ac70-0dbc385be575" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.269 2 DEBUG oslo_concurrency.lockutils [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "830f3147-422f-4e9d-ac70-0dbc385be575-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.269 2 DEBUG oslo_concurrency.lockutils [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "830f3147-422f-4e9d-ac70-0dbc385be575-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.269 2 DEBUG oslo_concurrency.lockutils [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "830f3147-422f-4e9d-ac70-0dbc385be575-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.271 2 INFO nova.compute.manager [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Terminating instance#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.272 2 DEBUG nova.compute.manager [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 12:59:55 np0005464891 kernel: tapc646a8ad-19 (unregistering): left promiscuous mode
Oct  1 12:59:55 np0005464891 NetworkManager[44940]: <info>  [1759337995.3360] device (tapc646a8ad-19): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 12:59:55 np0005464891 ovn_controller[152409]: 2025-10-01T16:59:55Z|00216|binding|INFO|Releasing lport c646a8ad-1950-4bec-8bf5-d0039005679e from this chassis (sb_readonly=0)
Oct  1 12:59:55 np0005464891 ovn_controller[152409]: 2025-10-01T16:59:55Z|00217|binding|INFO|Setting lport c646a8ad-1950-4bec-8bf5-d0039005679e down in Southbound
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:55 np0005464891 ovn_controller[152409]: 2025-10-01T16:59:55Z|00218|binding|INFO|Removing iface tapc646a8ad-19 ovn-installed in OVS
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:55.358 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:01:77 10.100.0.8'], port_security=['fa:16:3e:24:01:77 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '830f3147-422f-4e9d-ac70-0dbc385be575', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce1e1062-6685-441b-8278-667224375e38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8318b65fa88942a99937a0d198a04a9c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fda2d0b4-8d53-4a87-93c6-2f62b1be0cd1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4cdcf07-f310-4572-944c-43bd8e74f763, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=c646a8ad-1950-4bec-8bf5-d0039005679e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:59:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:55.359 162546 INFO neutron.agent.ovn.metadata.agent [-] Port c646a8ad-1950-4bec-8bf5-d0039005679e in datapath ce1e1062-6685-441b-8278-667224375e38 unbound from our chassis#033[00m
Oct  1 12:59:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:55.361 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce1e1062-6685-441b-8278-667224375e38, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:55.364 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[b345144d-49b1-45f3-b8ff-2c72065600a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:55.365 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce1e1062-6685-441b-8278-667224375e38 namespace which is not needed anymore#033[00m
Oct  1 12:59:55 np0005464891 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Deactivated successfully.
Oct  1 12:59:55 np0005464891 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Consumed 13.417s CPU time.
Oct  1 12:59:55 np0005464891 systemd-machined[214891]: Machine qemu-22-instance-00000016 terminated.
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.504 2 INFO nova.virt.libvirt.driver [-] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Instance destroyed successfully.#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.505 2 DEBUG nova.objects.instance [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lazy-loading 'resources' on Instance uuid 830f3147-422f-4e9d-ac70-0dbc385be575 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 12:59:55 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[299104]: [NOTICE]   (299108) : haproxy version is 2.8.14-c23fe91
Oct  1 12:59:55 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[299104]: [NOTICE]   (299108) : path to executable is /usr/sbin/haproxy
Oct  1 12:59:55 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[299104]: [WARNING]  (299108) : Exiting Master process...
Oct  1 12:59:55 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[299104]: [ALERT]    (299108) : Current worker (299110) exited with code 143 (Terminated)
Oct  1 12:59:55 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[299104]: [WARNING]  (299108) : All workers exited. Exiting... (0)
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.562 2 DEBUG nova.virt.libvirt.vif [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T16:59:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1907784308',display_name='tempest-TestVolumeBootPattern-server-1907784308',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1907784308',id=22,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBClGQJy6/uYcm9xjql2YyXBlCsn5OCvS8WdRyMV8X4KRzO9nDb+WS/fT0IKQE/81aKxS5QGeI9sR/4PyJ3PdLDqhc6ZZs5CYH0MiYsApL0NE4Z3MyQ9wrVPiCTRct79ckg==',key_name='tempest-TestVolumeBootPattern-477590145',keypairs=<?>,launch_index=0,launched_at=2025-10-01T16:59:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8318b65fa88942a99937a0d198a04a9c',ramdisk_id='',reservation_id='r-hsicp09s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-582136054',owner_user_name='tempest-TestVolumeBootPattern-582136054-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T16:59:33Z,user_data=None,user_id='1280014cdfb74333ae8d71c78116e646',uuid=830f3147-422f-4e9d-ac70-0dbc385be575,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c646a8ad-1950-4bec-8bf5-d0039005679e", "address": "fa:16:3e:24:01:77", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc646a8ad-19", "ovs_interfaceid": "c646a8ad-1950-4bec-8bf5-d0039005679e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.562 2 DEBUG nova.network.os_vif_util [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converting VIF {"id": "c646a8ad-1950-4bec-8bf5-d0039005679e", "address": "fa:16:3e:24:01:77", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc646a8ad-19", "ovs_interfaceid": "c646a8ad-1950-4bec-8bf5-d0039005679e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.564 2 DEBUG nova.network.os_vif_util [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:01:77,bridge_name='br-int',has_traffic_filtering=True,id=c646a8ad-1950-4bec-8bf5-d0039005679e,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc646a8ad-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.564 2 DEBUG os_vif [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:01:77,bridge_name='br-int',has_traffic_filtering=True,id=c646a8ad-1950-4bec-8bf5-d0039005679e,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc646a8ad-19') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.567 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc646a8ad-19, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:55 np0005464891 systemd[1]: libpod-4cc40d6c449efa3f280ae329f71a76939a7fcedef38fe060d40ddb2f480214be.scope: Deactivated successfully.
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.576 2 INFO os_vif [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:01:77,bridge_name='br-int',has_traffic_filtering=True,id=c646a8ad-1950-4bec-8bf5-d0039005679e,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc646a8ad-19')#033[00m
Oct  1 12:59:55 np0005464891 podman[299558]: 2025-10-01 16:59:55.579722937 +0000 UTC m=+0.125574322 container died 4cc40d6c449efa3f280ae329f71a76939a7fcedef38fe060d40ddb2f480214be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  1 12:59:55 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4cc40d6c449efa3f280ae329f71a76939a7fcedef38fe060d40ddb2f480214be-userdata-shm.mount: Deactivated successfully.
Oct  1 12:59:55 np0005464891 systemd[1]: var-lib-containers-storage-overlay-85723a35ee8a1002eb0f7b98e5172e25597c86283f00935aeb9bc17b3a7c6d8e-merged.mount: Deactivated successfully.
Oct  1 12:59:55 np0005464891 podman[299558]: 2025-10-01 16:59:55.627263178 +0000 UTC m=+0.173114553 container cleanup 4cc40d6c449efa3f280ae329f71a76939a7fcedef38fe060d40ddb2f480214be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.629 2 DEBUG nova.compute.manager [req-72d19890-8065-499b-9fa9-86316fb033cb req-d5e4b3c6-35a9-485c-ab6a-c94deb71ec07 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Received event network-vif-unplugged-c646a8ad-1950-4bec-8bf5-d0039005679e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.630 2 DEBUG oslo_concurrency.lockutils [req-72d19890-8065-499b-9fa9-86316fb033cb req-d5e4b3c6-35a9-485c-ab6a-c94deb71ec07 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "830f3147-422f-4e9d-ac70-0dbc385be575-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.630 2 DEBUG oslo_concurrency.lockutils [req-72d19890-8065-499b-9fa9-86316fb033cb req-d5e4b3c6-35a9-485c-ab6a-c94deb71ec07 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "830f3147-422f-4e9d-ac70-0dbc385be575-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.631 2 DEBUG oslo_concurrency.lockutils [req-72d19890-8065-499b-9fa9-86316fb033cb req-d5e4b3c6-35a9-485c-ab6a-c94deb71ec07 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "830f3147-422f-4e9d-ac70-0dbc385be575-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.631 2 DEBUG nova.compute.manager [req-72d19890-8065-499b-9fa9-86316fb033cb req-d5e4b3c6-35a9-485c-ab6a-c94deb71ec07 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] No waiting events found dispatching network-vif-unplugged-c646a8ad-1950-4bec-8bf5-d0039005679e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.631 2 DEBUG nova.compute.manager [req-72d19890-8065-499b-9fa9-86316fb033cb req-d5e4b3c6-35a9-485c-ab6a-c94deb71ec07 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Received event network-vif-unplugged-c646a8ad-1950-4bec-8bf5-d0039005679e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 12:59:55 np0005464891 systemd[1]: libpod-conmon-4cc40d6c449efa3f280ae329f71a76939a7fcedef38fe060d40ddb2f480214be.scope: Deactivated successfully.
Oct  1 12:59:55 np0005464891 podman[299617]: 2025-10-01 16:59:55.688392003 +0000 UTC m=+0.039954075 container remove 4cc40d6c449efa3f280ae329f71a76939a7fcedef38fe060d40ddb2f480214be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  1 12:59:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:55.701 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[cdaea503-1587-4d73-85dc-fd20328c7754]: (4, ('Wed Oct  1 04:59:55 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38 (4cc40d6c449efa3f280ae329f71a76939a7fcedef38fe060d40ddb2f480214be)\n4cc40d6c449efa3f280ae329f71a76939a7fcedef38fe060d40ddb2f480214be\nWed Oct  1 04:59:55 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38 (4cc40d6c449efa3f280ae329f71a76939a7fcedef38fe060d40ddb2f480214be)\n4cc40d6c449efa3f280ae329f71a76939a7fcedef38fe060d40ddb2f480214be\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:55.702 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[a474dbfb-52dd-4d7e-8d2a-26540e22fb9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:55.703 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce1e1062-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 12:59:55 np0005464891 kernel: tapce1e1062-60: left promiscuous mode
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:55.721 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[46e59d3a-9f3b-4b79-8edc-aea7d0c16877]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:55.742 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[aded9b6f-e5b9-432a-87d9-eae45b03d75c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:55.744 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[3b3e4d1e-c09f-4591-9d0f-7d8f43bb20be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:55.760 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[734a86e6-484b-4225-8379-65ed32d6b4fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492059, 'reachable_time': 35150, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299633, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:55.764 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce1e1062-6685-441b-8278-667224375e38 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 12:59:55 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:55.764 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[df98d648-a914-4b4e-bbfd-5e3d37650bef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 12:59:55 np0005464891 systemd[1]: run-netns-ovnmeta\x2dce1e1062\x2d6685\x2d441b\x2d8278\x2d667224375e38.mount: Deactivated successfully.
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.791 2 INFO nova.virt.libvirt.driver [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Deleting instance files /var/lib/nova/instances/830f3147-422f-4e9d-ac70-0dbc385be575_del#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.794 2 INFO nova.virt.libvirt.driver [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Deletion of /var/lib/nova/instances/830f3147-422f-4e9d-ac70-0dbc385be575_del complete#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.851 2 INFO nova.compute.manager [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Took 0.58 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.852 2 DEBUG oslo.service.loopingcall [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.852 2 DEBUG nova.compute.manager [-] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 12:59:55 np0005464891 nova_compute[259907]: 2025-10-01 16:59:55.853 2 DEBUG nova.network.neutron [-] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 12:59:56 np0005464891 nova_compute[259907]: 2025-10-01 16:59:56.749 2 DEBUG nova.compute.manager [req-9daa32bc-ec50-4668-b635-6c43dd74e5e7 req-06d683f0-72c1-4b9a-9420-7ddb08a939eb af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Received event network-changed-61aaf003-104a-4194-89f9-18ce4d3dfabb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:59:56 np0005464891 nova_compute[259907]: 2025-10-01 16:59:56.750 2 DEBUG nova.compute.manager [req-9daa32bc-ec50-4668-b635-6c43dd74e5e7 req-06d683f0-72c1-4b9a-9420-7ddb08a939eb af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Refreshing instance network info cache due to event network-changed-61aaf003-104a-4194-89f9-18ce4d3dfabb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 12:59:56 np0005464891 nova_compute[259907]: 2025-10-01 16:59:56.750 2 DEBUG oslo_concurrency.lockutils [req-9daa32bc-ec50-4668-b635-6c43dd74e5e7 req-06d683f0-72c1-4b9a-9420-7ddb08a939eb af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-caeab115-da31-48fd-af65-2085a2c28333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 12:59:56 np0005464891 nova_compute[259907]: 2025-10-01 16:59:56.751 2 DEBUG oslo_concurrency.lockutils [req-9daa32bc-ec50-4668-b635-6c43dd74e5e7 req-06d683f0-72c1-4b9a-9420-7ddb08a939eb af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-caeab115-da31-48fd-af65-2085a2c28333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 12:59:56 np0005464891 nova_compute[259907]: 2025-10-01 16:59:56.751 2 DEBUG nova.network.neutron [req-9daa32bc-ec50-4668-b635-6c43dd74e5e7 req-06d683f0-72c1-4b9a-9420-7ddb08a939eb af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Refreshing network info cache for port 61aaf003-104a-4194-89f9-18ce4d3dfabb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 12:59:56 np0005464891 nova_compute[259907]: 2025-10-01 16:59:56.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:56 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:56.843 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 12:59:56 np0005464891 nova_compute[259907]: 2025-10-01 16:59:56.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 12:59:56 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 16:59:56.845 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 12:59:57 np0005464891 nova_compute[259907]: 2025-10-01 16:59:57.063 2 DEBUG nova.network.neutron [-] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:59:57 np0005464891 nova_compute[259907]: 2025-10-01 16:59:57.090 2 INFO nova.compute.manager [-] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Took 1.24 seconds to deallocate network for instance.#033[00m
Oct  1 12:59:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1806: 321 pgs: 321 active+clean; 350 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 123 op/s
Oct  1 12:59:57 np0005464891 nova_compute[259907]: 2025-10-01 16:59:57.270 2 INFO nova.compute.manager [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Took 0.18 seconds to detach 1 volumes for instance.#033[00m
Oct  1 12:59:57 np0005464891 nova_compute[259907]: 2025-10-01 16:59:57.325 2 DEBUG oslo_concurrency.lockutils [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:59:57 np0005464891 nova_compute[259907]: 2025-10-01 16:59:57.326 2 DEBUG oslo_concurrency.lockutils [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:59:57 np0005464891 nova_compute[259907]: 2025-10-01 16:59:57.390 2 DEBUG oslo_concurrency.processutils [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 12:59:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 12:59:57 np0005464891 nova_compute[259907]: 2025-10-01 16:59:57.709 2 DEBUG nova.compute.manager [req-25dce472-a3bf-4051-9782-ec06035342c7 req-06e6e77b-4c57-4564-840b-20fb3b897156 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Received event network-vif-plugged-c646a8ad-1950-4bec-8bf5-d0039005679e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:59:57 np0005464891 nova_compute[259907]: 2025-10-01 16:59:57.709 2 DEBUG oslo_concurrency.lockutils [req-25dce472-a3bf-4051-9782-ec06035342c7 req-06e6e77b-4c57-4564-840b-20fb3b897156 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "830f3147-422f-4e9d-ac70-0dbc385be575-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 12:59:57 np0005464891 nova_compute[259907]: 2025-10-01 16:59:57.710 2 DEBUG oslo_concurrency.lockutils [req-25dce472-a3bf-4051-9782-ec06035342c7 req-06e6e77b-4c57-4564-840b-20fb3b897156 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "830f3147-422f-4e9d-ac70-0dbc385be575-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 12:59:57 np0005464891 nova_compute[259907]: 2025-10-01 16:59:57.710 2 DEBUG oslo_concurrency.lockutils [req-25dce472-a3bf-4051-9782-ec06035342c7 req-06e6e77b-4c57-4564-840b-20fb3b897156 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "830f3147-422f-4e9d-ac70-0dbc385be575-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:59:57 np0005464891 nova_compute[259907]: 2025-10-01 16:59:57.711 2 DEBUG nova.compute.manager [req-25dce472-a3bf-4051-9782-ec06035342c7 req-06e6e77b-4c57-4564-840b-20fb3b897156 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] No waiting events found dispatching network-vif-plugged-c646a8ad-1950-4bec-8bf5-d0039005679e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 12:59:57 np0005464891 nova_compute[259907]: 2025-10-01 16:59:57.711 2 WARNING nova.compute.manager [req-25dce472-a3bf-4051-9782-ec06035342c7 req-06e6e77b-4c57-4564-840b-20fb3b897156 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Received unexpected event network-vif-plugged-c646a8ad-1950-4bec-8bf5-d0039005679e for instance with vm_state deleted and task_state None.#033[00m
Oct  1 12:59:57 np0005464891 nova_compute[259907]: 2025-10-01 16:59:57.711 2 DEBUG nova.compute.manager [req-25dce472-a3bf-4051-9782-ec06035342c7 req-06e6e77b-4c57-4564-840b-20fb3b897156 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Received event network-vif-deleted-c646a8ad-1950-4bec-8bf5-d0039005679e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 12:59:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 12:59:57 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2902862638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 12:59:57 np0005464891 nova_compute[259907]: 2025-10-01 16:59:57.842 2 DEBUG oslo_concurrency.processutils [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 12:59:57 np0005464891 nova_compute[259907]: 2025-10-01 16:59:57.849 2 DEBUG nova.compute.provider_tree [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 12:59:57 np0005464891 nova_compute[259907]: 2025-10-01 16:59:57.875 2 DEBUG nova.scheduler.client.report [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 12:59:57 np0005464891 nova_compute[259907]: 2025-10-01 16:59:57.943 2 DEBUG oslo_concurrency.lockutils [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:59:57 np0005464891 nova_compute[259907]: 2025-10-01 16:59:57.994 2 INFO nova.scheduler.client.report [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Deleted allocations for instance 830f3147-422f-4e9d-ac70-0dbc385be575#033[00m
Oct  1 12:59:58 np0005464891 nova_compute[259907]: 2025-10-01 16:59:58.094 2 DEBUG oslo_concurrency.lockutils [None req-c0735fa5-8db8-4116-b73a-7dcef4aa8f5f 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "830f3147-422f-4e9d-ac70-0dbc385be575" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 12:59:58 np0005464891 nova_compute[259907]: 2025-10-01 16:59:58.290 2 DEBUG nova.network.neutron [req-9daa32bc-ec50-4668-b635-6c43dd74e5e7 req-06d683f0-72c1-4b9a-9420-7ddb08a939eb af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Updated VIF entry in instance network info cache for port 61aaf003-104a-4194-89f9-18ce4d3dfabb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 12:59:58 np0005464891 nova_compute[259907]: 2025-10-01 16:59:58.292 2 DEBUG nova.network.neutron [req-9daa32bc-ec50-4668-b635-6c43dd74e5e7 req-06d683f0-72c1-4b9a-9420-7ddb08a939eb af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Updating instance_info_cache with network_info: [{"id": "61aaf003-104a-4194-89f9-18ce4d3dfabb", "address": "fa:16:3e:cd:c1:82", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61aaf003-10", "ovs_interfaceid": "61aaf003-104a-4194-89f9-18ce4d3dfabb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 12:59:58 np0005464891 nova_compute[259907]: 2025-10-01 16:59:58.378 2 DEBUG oslo_concurrency.lockutils [req-9daa32bc-ec50-4668-b635-6c43dd74e5e7 req-06d683f0-72c1-4b9a-9420-7ddb08a939eb af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-caeab115-da31-48fd-af65-2085a2c28333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 12:59:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1807: 321 pgs: 321 active+clean; 350 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 124 op/s
Oct  1 13:00:00 np0005464891 nova_compute[259907]: 2025-10-01 17:00:00.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:01 np0005464891 ovn_controller[152409]: 2025-10-01T17:00:01Z|00044|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.9 does not match offer 10.100.0.10
Oct  1 13:00:01 np0005464891 ovn_controller[152409]: 2025-10-01T17:00:01Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:cd:c1:82 10.100.0.10
Oct  1 13:00:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1808: 321 pgs: 321 active+clean; 354 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.1 MiB/s wr, 135 op/s
Oct  1 13:00:01 np0005464891 nova_compute[259907]: 2025-10-01 17:00:01.736 2 DEBUG oslo_concurrency.lockutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "1affd3fe-8ee0-455e-bcef-79fe7bcb283d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:00:01 np0005464891 nova_compute[259907]: 2025-10-01 17:00:01.737 2 DEBUG oslo_concurrency.lockutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "1affd3fe-8ee0-455e-bcef-79fe7bcb283d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:00:01 np0005464891 nova_compute[259907]: 2025-10-01 17:00:01.772 2 DEBUG nova.compute.manager [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 13:00:01 np0005464891 nova_compute[259907]: 2025-10-01 17:00:01.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:01 np0005464891 nova_compute[259907]: 2025-10-01 17:00:01.845 2 DEBUG oslo_concurrency.lockutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:00:01 np0005464891 nova_compute[259907]: 2025-10-01 17:00:01.846 2 DEBUG oslo_concurrency.lockutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:00:01 np0005464891 nova_compute[259907]: 2025-10-01 17:00:01.852 2 DEBUG nova.virt.hardware [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 13:00:01 np0005464891 nova_compute[259907]: 2025-10-01 17:00:01.853 2 INFO nova.compute.claims [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.035 2 DEBUG oslo_concurrency.processutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:00:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:00:02 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3223280260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.456 2 DEBUG oslo_concurrency.processutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.463 2 DEBUG nova.compute.provider_tree [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.478 2 DEBUG nova.scheduler.client.report [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.500 2 DEBUG oslo_concurrency.lockutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.501 2 DEBUG nova.compute.manager [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 13:00:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.545 2 DEBUG nova.compute.manager [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.546 2 DEBUG nova.network.neutron [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.562 2 INFO nova.virt.libvirt.driver [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.578 2 DEBUG nova.compute.manager [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.628 2 INFO nova.virt.block_device [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Booting with volume ce89eba0-ef68-400a-ae7c-6ce18a58a372 at /dev/vda#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.791 2 DEBUG os_brick.utils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.793 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.804 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.804 741 DEBUG oslo.privsep.daemon [-] privsep: reply[d4d159e7-0e3f-44cd-98bd-2929f8a68e58]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.805 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.813 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.814 741 DEBUG oslo.privsep.daemon [-] privsep: reply[9d05cc8d-8784-4ad2-bed2-d8709f2b928a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.815 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.824 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.824 741 DEBUG oslo.privsep.daemon [-] privsep: reply[da380dc6-4638-48c8-927c-ec5735d9e749]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.825 741 DEBUG oslo.privsep.daemon [-] privsep: reply[043b4ddb-ac48-4b98-ab3e-ff81168ff78a]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.826 2 DEBUG oslo_concurrency.processutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.847 2 DEBUG oslo_concurrency.processutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.849 2 DEBUG os_brick.initiator.connectors.lightos [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.850 2 DEBUG os_brick.initiator.connectors.lightos [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.850 2 DEBUG os_brick.initiator.connectors.lightos [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.850 2 DEBUG os_brick.utils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] <== get_connector_properties: return (58ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 13:00:02 np0005464891 nova_compute[259907]: 2025-10-01 17:00:02.851 2 DEBUG nova.virt.block_device [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Updating existing volume attachment record: a369a70b-a1bb-4542-b390-9200a60a525d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 13:00:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1809: 321 pgs: 321 active+clean; 362 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.0 MiB/s wr, 81 op/s
Oct  1 13:00:03 np0005464891 nova_compute[259907]: 2025-10-01 17:00:03.350 2 DEBUG nova.policy [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1280014cdfb74333ae8d71c78116e646', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8318b65fa88942a99937a0d198a04a9c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 13:00:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:00:03 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/246301357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:00:03 np0005464891 nova_compute[259907]: 2025-10-01 17:00:03.806 2 DEBUG nova.compute.manager [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 13:00:03 np0005464891 nova_compute[259907]: 2025-10-01 17:00:03.808 2 DEBUG nova.virt.libvirt.driver [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 13:00:03 np0005464891 nova_compute[259907]: 2025-10-01 17:00:03.808 2 INFO nova.virt.libvirt.driver [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Creating image(s)#033[00m
Oct  1 13:00:03 np0005464891 nova_compute[259907]: 2025-10-01 17:00:03.809 2 DEBUG nova.virt.libvirt.driver [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  1 13:00:03 np0005464891 nova_compute[259907]: 2025-10-01 17:00:03.809 2 DEBUG nova.virt.libvirt.driver [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Ensure instance console log exists: /var/lib/nova/instances/1affd3fe-8ee0-455e-bcef-79fe7bcb283d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 13:00:03 np0005464891 nova_compute[259907]: 2025-10-01 17:00:03.809 2 DEBUG oslo_concurrency.lockutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:00:03 np0005464891 nova_compute[259907]: 2025-10-01 17:00:03.810 2 DEBUG oslo_concurrency.lockutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:00:03 np0005464891 nova_compute[259907]: 2025-10-01 17:00:03.810 2 DEBUG oslo_concurrency.lockutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:00:04 np0005464891 nova_compute[259907]: 2025-10-01 17:00:04.073 2 DEBUG nova.network.neutron [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Successfully created port: ee3f438c-5db5-4c88-b0c0-51835235bc99 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 13:00:04 np0005464891 ovn_controller[152409]: 2025-10-01T17:00:04Z|00046|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.9 does not match offer 10.100.0.10
Oct  1 13:00:04 np0005464891 ovn_controller[152409]: 2025-10-01T17:00:04Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:cd:c1:82 10.100.0.10
Oct  1 13:00:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1810: 321 pgs: 321 active+clean; 362 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 75 op/s
Oct  1 13:00:05 np0005464891 nova_compute[259907]: 2025-10-01 17:00:05.615 2 DEBUG nova.network.neutron [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Successfully updated port: ee3f438c-5db5-4c88-b0c0-51835235bc99 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 13:00:05 np0005464891 nova_compute[259907]: 2025-10-01 17:00:05.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:05 np0005464891 nova_compute[259907]: 2025-10-01 17:00:05.636 2 DEBUG oslo_concurrency.lockutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "refresh_cache-1affd3fe-8ee0-455e-bcef-79fe7bcb283d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:00:05 np0005464891 nova_compute[259907]: 2025-10-01 17:00:05.637 2 DEBUG oslo_concurrency.lockutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquired lock "refresh_cache-1affd3fe-8ee0-455e-bcef-79fe7bcb283d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:00:05 np0005464891 nova_compute[259907]: 2025-10-01 17:00:05.637 2 DEBUG nova.network.neutron [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 13:00:05 np0005464891 nova_compute[259907]: 2025-10-01 17:00:05.710 2 DEBUG nova.compute.manager [req-8d4314c8-88bd-48d7-a5dd-2b1b10d8bb86 req-5728e906-40ab-4a73-9624-bf45279afdc1 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Received event network-changed-ee3f438c-5db5-4c88-b0c0-51835235bc99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:00:05 np0005464891 nova_compute[259907]: 2025-10-01 17:00:05.710 2 DEBUG nova.compute.manager [req-8d4314c8-88bd-48d7-a5dd-2b1b10d8bb86 req-5728e906-40ab-4a73-9624-bf45279afdc1 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Refreshing instance network info cache due to event network-changed-ee3f438c-5db5-4c88-b0c0-51835235bc99. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 13:00:05 np0005464891 nova_compute[259907]: 2025-10-01 17:00:05.711 2 DEBUG oslo_concurrency.lockutils [req-8d4314c8-88bd-48d7-a5dd-2b1b10d8bb86 req-5728e906-40ab-4a73-9624-bf45279afdc1 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-1affd3fe-8ee0-455e-bcef-79fe7bcb283d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:00:05 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:05.847 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:00:05 np0005464891 podman[299686]: 2025-10-01 17:00:05.960273129 +0000 UTC m=+0.062202895 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  1 13:00:06 np0005464891 ovn_controller[152409]: 2025-10-01T17:00:06Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cd:c1:82 10.100.0.10
Oct  1 13:00:06 np0005464891 ovn_controller[152409]: 2025-10-01T17:00:06Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cd:c1:82 10.100.0.10
Oct  1 13:00:06 np0005464891 nova_compute[259907]: 2025-10-01 17:00:06.300 2 DEBUG nova.network.neutron [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 13:00:06 np0005464891 nova_compute[259907]: 2025-10-01 17:00:06.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.052 2 DEBUG nova.network.neutron [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Updating instance_info_cache with network_info: [{"id": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "address": "fa:16:3e:62:31:95", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee3f438c-5d", "ovs_interfaceid": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.071 2 DEBUG oslo_concurrency.lockutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Releasing lock "refresh_cache-1affd3fe-8ee0-455e-bcef-79fe7bcb283d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.072 2 DEBUG nova.compute.manager [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Instance network_info: |[{"id": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "address": "fa:16:3e:62:31:95", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee3f438c-5d", "ovs_interfaceid": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.073 2 DEBUG oslo_concurrency.lockutils [req-8d4314c8-88bd-48d7-a5dd-2b1b10d8bb86 req-5728e906-40ab-4a73-9624-bf45279afdc1 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-1affd3fe-8ee0-455e-bcef-79fe7bcb283d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.073 2 DEBUG nova.network.neutron [req-8d4314c8-88bd-48d7-a5dd-2b1b10d8bb86 req-5728e906-40ab-4a73-9624-bf45279afdc1 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Refreshing network info cache for port ee3f438c-5db5-4c88-b0c0-51835235bc99 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.079 2 DEBUG nova.virt.libvirt.driver [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Start _get_guest_xml network_info=[{"id": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "address": "fa:16:3e:62:31:95", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee3f438c-5d", "ovs_interfaceid": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'mount_device': '/dev/vda', 'attachment_id': 'a369a70b-a1bb-4542-b390-9200a60a525d', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-ce89eba0-ef68-400a-ae7c-6ce18a58a372', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'ce89eba0-ef68-400a-ae7c-6ce18a58a372', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '1affd3fe-8ee0-455e-bcef-79fe7bcb283d', 'attached_at': '', 'detached_at': '', 'volume_id': 'ce89eba0-ef68-400a-ae7c-6ce18a58a372', 'serial': 'ce89eba0-ef68-400a-ae7c-6ce18a58a372'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.086 2 WARNING nova.virt.libvirt.driver [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.093 2 DEBUG nova.virt.libvirt.host [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.094 2 DEBUG nova.virt.libvirt.host [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.097 2 DEBUG nova.virt.libvirt.host [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.098 2 DEBUG nova.virt.libvirt.host [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.099 2 DEBUG nova.virt.libvirt.driver [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.099 2 DEBUG nova.virt.hardware [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.099 2 DEBUG nova.virt.hardware [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.099 2 DEBUG nova.virt.hardware [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.100 2 DEBUG nova.virt.hardware [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.100 2 DEBUG nova.virt.hardware [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.100 2 DEBUG nova.virt.hardware [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.100 2 DEBUG nova.virt.hardware [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.101 2 DEBUG nova.virt.hardware [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.101 2 DEBUG nova.virt.hardware [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.101 2 DEBUG nova.virt.hardware [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.101 2 DEBUG nova.virt.hardware [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.122 2 DEBUG nova.storage.rbd_utils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] rbd image 1affd3fe-8ee0-455e-bcef-79fe7bcb283d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.125 2 DEBUG oslo_concurrency.processutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:00:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1811: 321 pgs: 321 active+clean; 362 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.0 MiB/s wr, 62 op/s
Oct  1 13:00:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:00:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:00:07 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/725641202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.592 2 DEBUG oslo_concurrency.processutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.623 2 DEBUG nova.virt.libvirt.vif [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T17:00:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1755453090',display_name='tempest-TestVolumeBootPattern-server-1755453090',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1755453090',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBClGQJy6/uYcm9xjql2YyXBlCsn5OCvS8WdRyMV8X4KRzO9nDb+WS/fT0IKQE/81aKxS5QGeI9sR/4PyJ3PdLDqhc6ZZs5CYH0MiYsApL0NE4Z3MyQ9wrVPiCTRct79ckg==',key_name='tempest-TestVolumeBootPattern-477590145',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8318b65fa88942a99937a0d198a04a9c',ramdisk_id='',reservation_id='r-555c3v3p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-582136054',owner_user_name='tempest-TestVolumeBootPattern-582136054-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T17:00:02Z,user_data=None,user_id='1280014cdfb74333ae8d71c78116e646',uuid=1affd3fe-8ee0-455e-bcef-79fe7bcb283d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "address": "fa:16:3e:62:31:95", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee3f438c-5d", "ovs_interfaceid": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.623 2 DEBUG nova.network.os_vif_util [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converting VIF {"id": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "address": "fa:16:3e:62:31:95", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee3f438c-5d", "ovs_interfaceid": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.624 2 DEBUG nova.network.os_vif_util [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:31:95,bridge_name='br-int',has_traffic_filtering=True,id=ee3f438c-5db5-4c88-b0c0-51835235bc99,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee3f438c-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.625 2 DEBUG nova.objects.instance [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lazy-loading 'pci_devices' on Instance uuid 1affd3fe-8ee0-455e-bcef-79fe7bcb283d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.648 2 DEBUG nova.virt.libvirt.driver [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] End _get_guest_xml xml=<domain type="kvm">
Oct  1 13:00:07 np0005464891 nova_compute[259907]:  <uuid>1affd3fe-8ee0-455e-bcef-79fe7bcb283d</uuid>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:  <name>instance-00000018</name>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <nova:name>tempest-TestVolumeBootPattern-server-1755453090</nova:name>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 17:00:07</nova:creationTime>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 13:00:07 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:        <nova:user uuid="1280014cdfb74333ae8d71c78116e646">tempest-TestVolumeBootPattern-582136054-project-member</nova:user>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:        <nova:project uuid="8318b65fa88942a99937a0d198a04a9c">tempest-TestVolumeBootPattern-582136054</nova:project>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:        <nova:port uuid="ee3f438c-5db5-4c88-b0c0-51835235bc99">
Oct  1 13:00:07 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <system>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <entry name="serial">1affd3fe-8ee0-455e-bcef-79fe7bcb283d</entry>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <entry name="uuid">1affd3fe-8ee0-455e-bcef-79fe7bcb283d</entry>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    </system>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:  <os>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:  </os>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:  <features>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:  </features>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:  </clock>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:  <devices>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/1affd3fe-8ee0-455e-bcef-79fe7bcb283d_disk.config">
Oct  1 13:00:07 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      </source>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 13:00:07 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      </auth>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    </disk>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="volumes/volume-ce89eba0-ef68-400a-ae7c-6ce18a58a372">
Oct  1 13:00:07 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      </source>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 13:00:07 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      </auth>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <serial>ce89eba0-ef68-400a-ae7c-6ce18a58a372</serial>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    </disk>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:62:31:95"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <target dev="tapee3f438c-5d"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    </interface>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/1affd3fe-8ee0-455e-bcef-79fe7bcb283d/console.log" append="off"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    </serial>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <video>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    </video>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    </rng>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 13:00:07 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 13:00:07 np0005464891 nova_compute[259907]:  </devices>
Oct  1 13:00:07 np0005464891 nova_compute[259907]: </domain>
Oct  1 13:00:07 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.649 2 DEBUG nova.compute.manager [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Preparing to wait for external event network-vif-plugged-ee3f438c-5db5-4c88-b0c0-51835235bc99 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.650 2 DEBUG oslo_concurrency.lockutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "1affd3fe-8ee0-455e-bcef-79fe7bcb283d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.650 2 DEBUG oslo_concurrency.lockutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "1affd3fe-8ee0-455e-bcef-79fe7bcb283d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.650 2 DEBUG oslo_concurrency.lockutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "1affd3fe-8ee0-455e-bcef-79fe7bcb283d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.651 2 DEBUG nova.virt.libvirt.vif [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T17:00:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1755453090',display_name='tempest-TestVolumeBootPattern-server-1755453090',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1755453090',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBClGQJy6/uYcm9xjql2YyXBlCsn5OCvS8WdRyMV8X4KRzO9nDb+WS/fT0IKQE/81aKxS5QGeI9sR/4PyJ3PdLDqhc6ZZs5CYH0MiYsApL0NE4Z3MyQ9wrVPiCTRct79ckg==',key_name='tempest-TestVolumeBootPattern-477590145',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8318b65fa88942a99937a0d198a04a9c',ramdisk_id='',reservation_id='r-555c3v3p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-582136054',owner_user_name='tempest-TestVolumeBootPattern-582136054-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T17:00:02Z,user_data=None,user_id='1280014cdfb74333ae8d71c78116e646',uuid=1affd3fe-8ee0-455e-bcef-79fe7bcb283d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "address": "fa:16:3e:62:31:95", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee3f438c-5d", "ovs_interfaceid": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.651 2 DEBUG nova.network.os_vif_util [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converting VIF {"id": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "address": "fa:16:3e:62:31:95", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee3f438c-5d", "ovs_interfaceid": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.652 2 DEBUG nova.network.os_vif_util [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:31:95,bridge_name='br-int',has_traffic_filtering=True,id=ee3f438c-5db5-4c88-b0c0-51835235bc99,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee3f438c-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.653 2 DEBUG os_vif [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:31:95,bridge_name='br-int',has_traffic_filtering=True,id=ee3f438c-5db5-4c88-b0c0-51835235bc99,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee3f438c-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.654 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.654 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.656 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee3f438c-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.657 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee3f438c-5d, col_values=(('external_ids', {'iface-id': 'ee3f438c-5db5-4c88-b0c0-51835235bc99', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:31:95', 'vm-uuid': '1affd3fe-8ee0-455e-bcef-79fe7bcb283d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:07 np0005464891 NetworkManager[44940]: <info>  [1759338007.6593] manager: (tapee3f438c-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.664 2 INFO os_vif [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:31:95,bridge_name='br-int',has_traffic_filtering=True,id=ee3f438c-5db5-4c88-b0c0-51835235bc99,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee3f438c-5d')#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.723 2 DEBUG nova.virt.libvirt.driver [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.724 2 DEBUG nova.virt.libvirt.driver [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.724 2 DEBUG nova.virt.libvirt.driver [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] No VIF found with MAC fa:16:3e:62:31:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.725 2 INFO nova.virt.libvirt.driver [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Using config drive#033[00m
Oct  1 13:00:07 np0005464891 nova_compute[259907]: 2025-10-01 17:00:07.749 2 DEBUG nova.storage.rbd_utils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] rbd image 1affd3fe-8ee0-455e-bcef-79fe7bcb283d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:00:08 np0005464891 nova_compute[259907]: 2025-10-01 17:00:08.494 2 INFO nova.virt.libvirt.driver [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Creating config drive at /var/lib/nova/instances/1affd3fe-8ee0-455e-bcef-79fe7bcb283d/disk.config#033[00m
Oct  1 13:00:08 np0005464891 nova_compute[259907]: 2025-10-01 17:00:08.501 2 DEBUG oslo_concurrency.processutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1affd3fe-8ee0-455e-bcef-79fe7bcb283d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuqm99c8h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:00:08 np0005464891 nova_compute[259907]: 2025-10-01 17:00:08.544 2 DEBUG nova.network.neutron [req-8d4314c8-88bd-48d7-a5dd-2b1b10d8bb86 req-5728e906-40ab-4a73-9624-bf45279afdc1 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Updated VIF entry in instance network info cache for port ee3f438c-5db5-4c88-b0c0-51835235bc99. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 13:00:08 np0005464891 nova_compute[259907]: 2025-10-01 17:00:08.545 2 DEBUG nova.network.neutron [req-8d4314c8-88bd-48d7-a5dd-2b1b10d8bb86 req-5728e906-40ab-4a73-9624-bf45279afdc1 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Updating instance_info_cache with network_info: [{"id": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "address": "fa:16:3e:62:31:95", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee3f438c-5d", "ovs_interfaceid": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:00:08 np0005464891 nova_compute[259907]: 2025-10-01 17:00:08.560 2 DEBUG oslo_concurrency.lockutils [req-8d4314c8-88bd-48d7-a5dd-2b1b10d8bb86 req-5728e906-40ab-4a73-9624-bf45279afdc1 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-1affd3fe-8ee0-455e-bcef-79fe7bcb283d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:00:08 np0005464891 nova_compute[259907]: 2025-10-01 17:00:08.630 2 DEBUG oslo_concurrency.processutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1affd3fe-8ee0-455e-bcef-79fe7bcb283d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuqm99c8h" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:00:08 np0005464891 nova_compute[259907]: 2025-10-01 17:00:08.651 2 DEBUG nova.storage.rbd_utils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] rbd image 1affd3fe-8ee0-455e-bcef-79fe7bcb283d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:00:08 np0005464891 nova_compute[259907]: 2025-10-01 17:00:08.655 2 DEBUG oslo_concurrency.processutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1affd3fe-8ee0-455e-bcef-79fe7bcb283d/disk.config 1affd3fe-8ee0-455e-bcef-79fe7bcb283d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:00:09 np0005464891 nova_compute[259907]: 2025-10-01 17:00:09.006 2 DEBUG oslo_concurrency.processutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1affd3fe-8ee0-455e-bcef-79fe7bcb283d/disk.config 1affd3fe-8ee0-455e-bcef-79fe7bcb283d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:00:09 np0005464891 nova_compute[259907]: 2025-10-01 17:00:09.007 2 INFO nova.virt.libvirt.driver [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Deleting local config drive /var/lib/nova/instances/1affd3fe-8ee0-455e-bcef-79fe7bcb283d/disk.config because it was imported into RBD.#033[00m
Oct  1 13:00:09 np0005464891 kernel: tapee3f438c-5d: entered promiscuous mode
Oct  1 13:00:09 np0005464891 NetworkManager[44940]: <info>  [1759338009.0691] manager: (tapee3f438c-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/124)
Oct  1 13:00:09 np0005464891 ovn_controller[152409]: 2025-10-01T17:00:09Z|00219|binding|INFO|Claiming lport ee3f438c-5db5-4c88-b0c0-51835235bc99 for this chassis.
Oct  1 13:00:09 np0005464891 nova_compute[259907]: 2025-10-01 17:00:09.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:09 np0005464891 ovn_controller[152409]: 2025-10-01T17:00:09Z|00220|binding|INFO|ee3f438c-5db5-4c88-b0c0-51835235bc99: Claiming fa:16:3e:62:31:95 10.100.0.9
Oct  1 13:00:09 np0005464891 nova_compute[259907]: 2025-10-01 17:00:09.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:09 np0005464891 ovn_controller[152409]: 2025-10-01T17:00:09Z|00221|binding|INFO|Setting lport ee3f438c-5db5-4c88-b0c0-51835235bc99 ovn-installed in OVS
Oct  1 13:00:09 np0005464891 ovn_controller[152409]: 2025-10-01T17:00:09Z|00222|binding|INFO|Setting lport ee3f438c-5db5-4c88-b0c0-51835235bc99 up in Southbound
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.087 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:31:95 10.100.0.9'], port_security=['fa:16:3e:62:31:95 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1affd3fe-8ee0-455e-bcef-79fe7bcb283d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce1e1062-6685-441b-8278-667224375e38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8318b65fa88942a99937a0d198a04a9c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fda2d0b4-8d53-4a87-93c6-2f62b1be0cd1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4cdcf07-f310-4572-944c-43bd8e74f763, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=ee3f438c-5db5-4c88-b0c0-51835235bc99) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.089 162546 INFO neutron.agent.ovn.metadata.agent [-] Port ee3f438c-5db5-4c88-b0c0-51835235bc99 in datapath ce1e1062-6685-441b-8278-667224375e38 bound to our chassis#033[00m
Oct  1 13:00:09 np0005464891 nova_compute[259907]: 2025-10-01 17:00:09.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.091 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce1e1062-6685-441b-8278-667224375e38#033[00m
Oct  1 13:00:09 np0005464891 systemd-udevd[299819]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.103 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[15fcc55e-53dd-4bb9-bc18-de823ff68822]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.104 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce1e1062-61 in ovnmeta-ce1e1062-6685-441b-8278-667224375e38 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.106 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce1e1062-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.106 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[7246e46f-8b49-4bb9-9d0a-cbb732d10f57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.107 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[3242cb09-04a4-44bb-914a-dcb179038df4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:09 np0005464891 systemd-machined[214891]: New machine qemu-24-instance-00000018.
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.122 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[fb0b69f8-7071-4adf-b941-092f2202d598]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:09 np0005464891 NetworkManager[44940]: <info>  [1759338009.1248] device (tapee3f438c-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 13:00:09 np0005464891 NetworkManager[44940]: <info>  [1759338009.1258] device (tapee3f438c-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 13:00:09 np0005464891 systemd[1]: Started Virtual Machine qemu-24-instance-00000018.
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.147 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[2da172b5-801e-44c8-8706-b214e6685fe7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.174 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[053c08e1-7ecd-44c2-97ab-bad08db16e77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.178 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[403d5ff3-dd7f-4072-9ebf-dbde23149af5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:09 np0005464891 NetworkManager[44940]: <info>  [1759338009.1798] manager: (tapce1e1062-60): new Veth device (/org/freedesktop/NetworkManager/Devices/125)
Oct  1 13:00:09 np0005464891 systemd-udevd[299823]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.208 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[70fb4755-bdf0-4fd6-a5fb-8082f1f512f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.211 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[fc65d6a3-e912-4251-9614-76bf06f35a70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:09 np0005464891 NetworkManager[44940]: <info>  [1759338009.2312] device (tapce1e1062-60): carrier: link connected
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.237 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[7cebef5c-fd23-4ad8-a155-c0d3086d9c32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1812: 321 pgs: 321 active+clean; 362 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 62 op/s
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.255 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[8f64e495-6e4c-46a3-8781-79151d76bb4a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce1e1062-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:87:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495780, 'reachable_time': 37017, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299852, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.274 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[5a38bc5b-3a5b-45c8-bdf6-f4fd5d512009]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec5:872c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495780, 'tstamp': 495780}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299853, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.291 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[7927da48-6e91-41d3-8501-92fa4298ab31]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce1e1062-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:87:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495780, 'reachable_time': 37017, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299854, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.321 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[8866657b-865c-4496-808a-30f327f5a595]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.386 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[ef42dd26-401a-4bed-aba3-c83d3af4ef9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.387 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce1e1062-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.387 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.388 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce1e1062-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:00:09 np0005464891 kernel: tapce1e1062-60: entered promiscuous mode
Oct  1 13:00:09 np0005464891 nova_compute[259907]: 2025-10-01 17:00:09.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:09 np0005464891 NetworkManager[44940]: <info>  [1759338009.3930] manager: (tapce1e1062-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/126)
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.394 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce1e1062-60, col_values=(('external_ids', {'iface-id': 'd971881d-8d8b-44dc-b0b0-4cd0065c0105'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:00:09 np0005464891 nova_compute[259907]: 2025-10-01 17:00:09.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:09 np0005464891 ovn_controller[152409]: 2025-10-01T17:00:09Z|00223|binding|INFO|Releasing lport d971881d-8d8b-44dc-b0b0-4cd0065c0105 from this chassis (sb_readonly=0)
Oct  1 13:00:09 np0005464891 nova_compute[259907]: 2025-10-01 17:00:09.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:09 np0005464891 nova_compute[259907]: 2025-10-01 17:00:09.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:09 np0005464891 nova_compute[259907]: 2025-10-01 17:00:09.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.412 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce1e1062-6685-441b-8278-667224375e38.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce1e1062-6685-441b-8278-667224375e38.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.413 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[7b2ef105-1979-4df8-91e3-72f24fbc5f6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.413 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-ce1e1062-6685-441b-8278-667224375e38
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/ce1e1062-6685-441b-8278-667224375e38.pid.haproxy
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID ce1e1062-6685-441b-8278-667224375e38
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct  1 13:00:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:09.414 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'env', 'PROCESS_TAG=haproxy-ce1e1062-6685-441b-8278-667224375e38', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce1e1062-6685-441b-8278-667224375e38.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct  1 13:00:09 np0005464891 nova_compute[259907]: 2025-10-01 17:00:09.460 2 DEBUG nova.compute.manager [req-be0b60cc-0724-44d8-a07d-5517e1ff9dfa req-09ab7061-af37-40a7-987d-f9d34aa8feb0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Received event network-vif-plugged-ee3f438c-5db5-4c88-b0c0-51835235bc99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  1 13:00:09 np0005464891 nova_compute[259907]: 2025-10-01 17:00:09.460 2 DEBUG oslo_concurrency.lockutils [req-be0b60cc-0724-44d8-a07d-5517e1ff9dfa req-09ab7061-af37-40a7-987d-f9d34aa8feb0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "1affd3fe-8ee0-455e-bcef-79fe7bcb283d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 13:00:09 np0005464891 nova_compute[259907]: 2025-10-01 17:00:09.461 2 DEBUG oslo_concurrency.lockutils [req-be0b60cc-0724-44d8-a07d-5517e1ff9dfa req-09ab7061-af37-40a7-987d-f9d34aa8feb0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "1affd3fe-8ee0-455e-bcef-79fe7bcb283d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 13:00:09 np0005464891 nova_compute[259907]: 2025-10-01 17:00:09.461 2 DEBUG oslo_concurrency.lockutils [req-be0b60cc-0724-44d8-a07d-5517e1ff9dfa req-09ab7061-af37-40a7-987d-f9d34aa8feb0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "1affd3fe-8ee0-455e-bcef-79fe7bcb283d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 13:00:09 np0005464891 nova_compute[259907]: 2025-10-01 17:00:09.462 2 DEBUG nova.compute.manager [req-be0b60cc-0724-44d8-a07d-5517e1ff9dfa req-09ab7061-af37-40a7-987d-f9d34aa8feb0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Processing event network-vif-plugged-ee3f438c-5db5-4c88-b0c0-51835235bc99 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  1 13:00:09 np0005464891 podman[299906]: 2025-10-01 17:00:09.780773328 +0000 UTC m=+0.035743220 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 13:00:09 np0005464891 podman[299906]: 2025-10-01 17:00:09.877815776 +0000 UTC m=+0.132785648 container create 1188e28848f76a76b937c58d829079c068e0c5aab34b2d6e0b1ac1f9589a9303 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  1 13:00:09 np0005464891 systemd[1]: Started libpod-conmon-1188e28848f76a76b937c58d829079c068e0c5aab34b2d6e0b1ac1f9589a9303.scope.
Oct  1 13:00:09 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:00:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cae930b0a08bfc282e4641ba26fb07dbb846f9a470c55d4ccf78c2975089063f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 13:00:09 np0005464891 podman[299906]: 2025-10-01 17:00:09.995543101 +0000 UTC m=+0.250512993 container init 1188e28848f76a76b937c58d829079c068e0c5aab34b2d6e0b1ac1f9589a9303 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  1 13:00:10 np0005464891 podman[299906]: 2025-10-01 17:00:10.003185111 +0000 UTC m=+0.258154983 container start 1188e28848f76a76b937c58d829079c068e0c5aab34b2d6e0b1ac1f9589a9303 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 13:00:10 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[299942]: [NOTICE]   (299946) : New worker (299948) forked
Oct  1 13:00:10 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[299942]: [NOTICE]   (299946) : Loading success.
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.273 2 DEBUG nova.compute.manager [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.275 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759338010.2724001, 1affd3fe-8ee0-455e-bcef-79fe7bcb283d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.275 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] VM Started (Lifecycle Event)
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.277 2 DEBUG nova.virt.libvirt.driver [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.280 2 INFO nova.virt.libvirt.driver [-] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Instance spawned successfully.
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.280 2 DEBUG nova.virt.libvirt.driver [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.303 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.306 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.323 2 DEBUG nova.virt.libvirt.driver [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.324 2 DEBUG nova.virt.libvirt.driver [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.325 2 DEBUG nova.virt.libvirt.driver [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.325 2 DEBUG nova.virt.libvirt.driver [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.326 2 DEBUG nova.virt.libvirt.driver [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.327 2 DEBUG nova.virt.libvirt.driver [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.488 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.489 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759338010.2726808, 1affd3fe-8ee0-455e-bcef-79fe7bcb283d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.489 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] VM Paused (Lifecycle Event)
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.508 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759337995.5018055, 830f3147-422f-4e9d-ac70-0dbc385be575 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.509 2 INFO nova.compute.manager [-] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] VM Stopped (Lifecycle Event)
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.726 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.729 2 DEBUG nova.compute.manager [None req-283ccc37-3af1-4957-98bd-39ae3c99dc50 - - - - - -] [instance: 830f3147-422f-4e9d-ac70-0dbc385be575] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.734 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759338010.277603, 1affd3fe-8ee0-455e-bcef-79fe7bcb283d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.734 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] VM Resumed (Lifecycle Event)
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.905 2 INFO nova.compute.manager [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Took 7.10 seconds to spawn the instance on the hypervisor.
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.906 2 DEBUG nova.compute.manager [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.917 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 13:00:10 np0005464891 nova_compute[259907]: 2025-10-01 17:00:10.921 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  1 13:00:11 np0005464891 nova_compute[259907]: 2025-10-01 17:00:11.199 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  1 13:00:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1813: 321 pgs: 321 active+clean; 366 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 69 op/s
Oct  1 13:00:11 np0005464891 nova_compute[259907]: 2025-10-01 17:00:11.467 2 INFO nova.compute.manager [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Took 9.64 seconds to build instance.
Oct  1 13:00:11 np0005464891 nova_compute[259907]: 2025-10-01 17:00:11.552 2 DEBUG oslo_concurrency.lockutils [None req-44106671-de91-4776-8d2f-1f599d4050a5 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "1affd3fe-8ee0-455e-bcef-79fe7bcb283d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 13:00:11 np0005464891 nova_compute[259907]: 2025-10-01 17:00:11.836 2 DEBUG nova.compute.manager [req-e26968c6-f9c9-474f-854e-87cb78ffa35e req-e08f33f3-bb6e-4e0a-ba7d-de36a14d93be af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Received event network-vif-plugged-ee3f438c-5db5-4c88-b0c0-51835235bc99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  1 13:00:11 np0005464891 nova_compute[259907]: 2025-10-01 17:00:11.837 2 DEBUG oslo_concurrency.lockutils [req-e26968c6-f9c9-474f-854e-87cb78ffa35e req-e08f33f3-bb6e-4e0a-ba7d-de36a14d93be af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "1affd3fe-8ee0-455e-bcef-79fe7bcb283d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 13:00:11 np0005464891 nova_compute[259907]: 2025-10-01 17:00:11.838 2 DEBUG oslo_concurrency.lockutils [req-e26968c6-f9c9-474f-854e-87cb78ffa35e req-e08f33f3-bb6e-4e0a-ba7d-de36a14d93be af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "1affd3fe-8ee0-455e-bcef-79fe7bcb283d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 13:00:11 np0005464891 nova_compute[259907]: 2025-10-01 17:00:11.838 2 DEBUG oslo_concurrency.lockutils [req-e26968c6-f9c9-474f-854e-87cb78ffa35e req-e08f33f3-bb6e-4e0a-ba7d-de36a14d93be af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "1affd3fe-8ee0-455e-bcef-79fe7bcb283d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 13:00:11 np0005464891 nova_compute[259907]: 2025-10-01 17:00:11.838 2 DEBUG nova.compute.manager [req-e26968c6-f9c9-474f-854e-87cb78ffa35e req-e08f33f3-bb6e-4e0a-ba7d-de36a14d93be af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] No waiting events found dispatching network-vif-plugged-ee3f438c-5db5-4c88-b0c0-51835235bc99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  1 13:00:11 np0005464891 nova_compute[259907]: 2025-10-01 17:00:11.839 2 WARNING nova.compute.manager [req-e26968c6-f9c9-474f-854e-87cb78ffa35e req-e08f33f3-bb6e-4e0a-ba7d-de36a14d93be af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Received unexpected event network-vif-plugged-ee3f438c-5db5-4c88-b0c0-51835235bc99 for instance with vm_state active and task_state None.
Oct  1 13:00:11 np0005464891 nova_compute[259907]: 2025-10-01 17:00:11.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:00:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:00:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:00:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:00:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:00:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:00:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:00:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_17:00:12
Oct  1 13:00:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 13:00:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 13:00:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'backups', '.mgr', 'default.rgw.meta', 'images', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta']
Oct  1 13:00:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 13:00:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:12.463 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 13:00:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:12.465 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 13:00:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:12.467 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 13:00:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 13:00:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:00:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 13:00:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:00:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:00:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:00:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:00:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:00:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:00:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:00:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:00:12 np0005464891 nova_compute[259907]: 2025-10-01 17:00:12.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:00:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1814: 321 pgs: 321 active+clean; 366 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.0 MiB/s wr, 50 op/s
Oct  1 13:00:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1815: 321 pgs: 321 active+clean; 366 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 370 KiB/s wr, 85 op/s
Oct  1 13:00:15 np0005464891 podman[300129]: 2025-10-01 17:00:15.580569703 +0000 UTC m=+0.227878834 container exec 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Oct  1 13:00:15 np0005464891 podman[300129]: 2025-10-01 17:00:15.689182059 +0000 UTC m=+0.336491170 container exec_died 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:00:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 13:00:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:00:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 13:00:16 np0005464891 nova_compute[259907]: 2025-10-01 17:00:16.871 2 DEBUG nova.compute.manager [req-04fce93a-dac9-44c6-89a6-848eb7211e40 req-112bc7e7-bd16-4c78-8c1a-c11056e02c72 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Received event network-changed-ee3f438c-5db5-4c88-b0c0-51835235bc99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  1 13:00:16 np0005464891 nova_compute[259907]: 2025-10-01 17:00:16.874 2 DEBUG nova.compute.manager [req-04fce93a-dac9-44c6-89a6-848eb7211e40 req-112bc7e7-bd16-4c78-8c1a-c11056e02c72 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Refreshing instance network info cache due to event network-changed-ee3f438c-5db5-4c88-b0c0-51835235bc99. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  1 13:00:16 np0005464891 nova_compute[259907]: 2025-10-01 17:00:16.874 2 DEBUG oslo_concurrency.lockutils [req-04fce93a-dac9-44c6-89a6-848eb7211e40 req-112bc7e7-bd16-4c78-8c1a-c11056e02c72 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-1affd3fe-8ee0-455e-bcef-79fe7bcb283d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  1 13:00:16 np0005464891 nova_compute[259907]: 2025-10-01 17:00:16.875 2 DEBUG oslo_concurrency.lockutils [req-04fce93a-dac9-44c6-89a6-848eb7211e40 req-112bc7e7-bd16-4c78-8c1a-c11056e02c72 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-1affd3fe-8ee0-455e-bcef-79fe7bcb283d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  1 13:00:16 np0005464891 nova_compute[259907]: 2025-10-01 17:00:16.875 2 DEBUG nova.network.neutron [req-04fce93a-dac9-44c6-89a6-848eb7211e40 req-112bc7e7-bd16-4c78-8c1a-c11056e02c72 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Refreshing network info cache for port ee3f438c-5db5-4c88-b0c0-51835235bc99 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  1 13:00:16 np0005464891 nova_compute[259907]: 2025-10-01 17:00:16.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:00:17.048890) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338017048933, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1023, "num_deletes": 252, "total_data_size": 1436405, "memory_usage": 1455984, "flush_reason": "Manual Compaction"}
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338017111911, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 1411348, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35759, "largest_seqno": 36781, "table_properties": {"data_size": 1406327, "index_size": 2545, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11107, "raw_average_key_size": 19, "raw_value_size": 1396137, "raw_average_value_size": 2506, "num_data_blocks": 113, "num_entries": 557, "num_filter_entries": 557, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759337927, "oldest_key_time": 1759337927, "file_creation_time": 1759338017, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 63092 microseconds, and 4477 cpu microseconds.
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:00:17.111977) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 1411348 bytes OK
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:00:17.112002) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:00:17.225056) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:00:17.225099) EVENT_LOG_v1 {"time_micros": 1759338017225090, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:00:17.225122) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 1431548, prev total WAL file size 1431548, number of live WAL files 2.
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:00:17.225898) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(1378KB)], [74(10MB)]
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338017225956, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 12237687, "oldest_snapshot_seqno": -1}
Oct  1 13:00:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1816: 321 pgs: 321 active+clean; 366 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 370 KiB/s wr, 76 op/s
Oct  1 13:00:17 np0005464891 podman[300312]: 2025-10-01 17:00:17.25728616 +0000 UTC m=+0.115035572 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller)
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6589 keys, 10454358 bytes, temperature: kUnknown
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338017442963, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 10454358, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10404902, "index_size": 31872, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16517, "raw_key_size": 167260, "raw_average_key_size": 25, "raw_value_size": 10281097, "raw_average_value_size": 1560, "num_data_blocks": 1271, "num_entries": 6589, "num_filter_entries": 6589, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759338017, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:00:17.443868) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 10454358 bytes
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:00:17.536264) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 56.2 rd, 48.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 10.3 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(16.1) write-amplify(7.4) OK, records in: 7109, records dropped: 520 output_compression: NoCompression
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:00:17.536317) EVENT_LOG_v1 {"time_micros": 1759338017536297, "job": 42, "event": "compaction_finished", "compaction_time_micros": 217728, "compaction_time_cpu_micros": 24412, "output_level": 6, "num_output_files": 1, "total_output_size": 10454358, "num_input_records": 7109, "num_output_records": 6589, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338017537119, "job": 42, "event": "table_file_deletion", "file_number": 76}
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338017540620, "job": 42, "event": "table_file_deletion", "file_number": 74}
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:00:17.225776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:00:17.540679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:00:17.540685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:00:17.540688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:00:17.540690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:00:17.540692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:00:17 np0005464891 nova_compute[259907]: 2025-10-01 17:00:17.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:00:17 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:00:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:00:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:00:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 13:00:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:00:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 13:00:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:00:18 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 3fe28b2b-7f58-4754-bfe5-7b5919af78b1 does not exist
Oct  1 13:00:18 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev c0dbcce7-5752-4b9e-97c6-b5327d92c8d8 does not exist
Oct  1 13:00:18 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 9bc7cd68-aff2-4cb3-95eb-07d89b0d6fe6 does not exist
Oct  1 13:00:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 13:00:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 13:00:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 13:00:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:00:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:00:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:00:18 np0005464891 nova_compute[259907]: 2025-10-01 17:00:18.086 2 DEBUG nova.network.neutron [req-04fce93a-dac9-44c6-89a6-848eb7211e40 req-112bc7e7-bd16-4c78-8c1a-c11056e02c72 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Updated VIF entry in instance network info cache for port ee3f438c-5db5-4c88-b0c0-51835235bc99. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 13:00:18 np0005464891 nova_compute[259907]: 2025-10-01 17:00:18.087 2 DEBUG nova.network.neutron [req-04fce93a-dac9-44c6-89a6-848eb7211e40 req-112bc7e7-bd16-4c78-8c1a-c11056e02c72 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Updating instance_info_cache with network_info: [{"id": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "address": "fa:16:3e:62:31:95", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee3f438c-5d", "ovs_interfaceid": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:00:18 np0005464891 nova_compute[259907]: 2025-10-01 17:00:18.108 2 DEBUG oslo_concurrency.lockutils [req-04fce93a-dac9-44c6-89a6-848eb7211e40 req-112bc7e7-bd16-4c78-8c1a-c11056e02c72 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-1affd3fe-8ee0-455e-bcef-79fe7bcb283d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:00:18 np0005464891 podman[300585]: 2025-10-01 17:00:18.623973263 +0000 UTC m=+0.043910264 container create 4f5c78a562798d08870789a2941f09fa9191095c66dbad25ef662dd1a6afea02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bouman, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  1 13:00:18 np0005464891 systemd[1]: Started libpod-conmon-4f5c78a562798d08870789a2941f09fa9191095c66dbad25ef662dd1a6afea02.scope.
Oct  1 13:00:18 np0005464891 podman[300585]: 2025-10-01 17:00:18.602312759 +0000 UTC m=+0.022249780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:00:18 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:00:18 np0005464891 podman[300585]: 2025-10-01 17:00:18.716493628 +0000 UTC m=+0.136430649 container init 4f5c78a562798d08870789a2941f09fa9191095c66dbad25ef662dd1a6afea02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bouman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 13:00:18 np0005464891 podman[300585]: 2025-10-01 17:00:18.722749779 +0000 UTC m=+0.142686780 container start 4f5c78a562798d08870789a2941f09fa9191095c66dbad25ef662dd1a6afea02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bouman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 13:00:18 np0005464891 podman[300585]: 2025-10-01 17:00:18.725561916 +0000 UTC m=+0.145498947 container attach 4f5c78a562798d08870789a2941f09fa9191095c66dbad25ef662dd1a6afea02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  1 13:00:18 np0005464891 affectionate_bouman[300601]: 167 167
Oct  1 13:00:18 np0005464891 systemd[1]: libpod-4f5c78a562798d08870789a2941f09fa9191095c66dbad25ef662dd1a6afea02.scope: Deactivated successfully.
Oct  1 13:00:18 np0005464891 conmon[300601]: conmon 4f5c78a562798d088707 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4f5c78a562798d08870789a2941f09fa9191095c66dbad25ef662dd1a6afea02.scope/container/memory.events
Oct  1 13:00:18 np0005464891 podman[300606]: 2025-10-01 17:00:18.771468283 +0000 UTC m=+0.026804124 container died 4f5c78a562798d08870789a2941f09fa9191095c66dbad25ef662dd1a6afea02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bouman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:00:18 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:00:18 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:00:18 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:00:18 np0005464891 systemd[1]: var-lib-containers-storage-overlay-109f32238388ba7e877a0f1b1a3d212fc129d3bd5b51ece21472f325c794a987-merged.mount: Deactivated successfully.
Oct  1 13:00:18 np0005464891 podman[300606]: 2025-10-01 17:00:18.803857441 +0000 UTC m=+0.059193282 container remove 4f5c78a562798d08870789a2941f09fa9191095c66dbad25ef662dd1a6afea02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bouman, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:00:18 np0005464891 systemd[1]: libpod-conmon-4f5c78a562798d08870789a2941f09fa9191095c66dbad25ef662dd1a6afea02.scope: Deactivated successfully.
Oct  1 13:00:19 np0005464891 podman[300629]: 2025-10-01 17:00:19.022140501 +0000 UTC m=+0.053838826 container create d35c6b6a3bd04295e33d886c27546e2977d193348a27c295a018eb46c85706b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct  1 13:00:19 np0005464891 systemd[1]: Started libpod-conmon-d35c6b6a3bd04295e33d886c27546e2977d193348a27c295a018eb46c85706b5.scope.
Oct  1 13:00:19 np0005464891 podman[300629]: 2025-10-01 17:00:18.999244723 +0000 UTC m=+0.030943068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:00:19 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:00:19 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f9d1cb4044d4543e04b62a118bb625042e7b01858f14a6df0a3e3295a92cad3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:00:19 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f9d1cb4044d4543e04b62a118bb625042e7b01858f14a6df0a3e3295a92cad3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:00:19 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f9d1cb4044d4543e04b62a118bb625042e7b01858f14a6df0a3e3295a92cad3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:00:19 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f9d1cb4044d4543e04b62a118bb625042e7b01858f14a6df0a3e3295a92cad3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:00:19 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f9d1cb4044d4543e04b62a118bb625042e7b01858f14a6df0a3e3295a92cad3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 13:00:19 np0005464891 podman[300629]: 2025-10-01 17:00:19.125244935 +0000 UTC m=+0.156943290 container init d35c6b6a3bd04295e33d886c27546e2977d193348a27c295a018eb46c85706b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_murdock, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 13:00:19 np0005464891 podman[300629]: 2025-10-01 17:00:19.131715083 +0000 UTC m=+0.163413418 container start d35c6b6a3bd04295e33d886c27546e2977d193348a27c295a018eb46c85706b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_murdock, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  1 13:00:19 np0005464891 podman[300629]: 2025-10-01 17:00:19.134598462 +0000 UTC m=+0.166296787 container attach d35c6b6a3bd04295e33d886c27546e2977d193348a27c295a018eb46c85706b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:00:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1817: 321 pgs: 321 active+clean; 366 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 370 KiB/s wr, 76 op/s
Oct  1 13:00:20 np0005464891 suspicious_murdock[300645]: --> passed data devices: 0 physical, 3 LVM
Oct  1 13:00:20 np0005464891 suspicious_murdock[300645]: --> relative data size: 1.0
Oct  1 13:00:20 np0005464891 suspicious_murdock[300645]: --> All data devices are unavailable
Oct  1 13:00:20 np0005464891 systemd[1]: libpod-d35c6b6a3bd04295e33d886c27546e2977d193348a27c295a018eb46c85706b5.scope: Deactivated successfully.
Oct  1 13:00:20 np0005464891 podman[300629]: 2025-10-01 17:00:20.186743777 +0000 UTC m=+1.218442092 container died d35c6b6a3bd04295e33d886c27546e2977d193348a27c295a018eb46c85706b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 13:00:20 np0005464891 systemd[1]: var-lib-containers-storage-overlay-7f9d1cb4044d4543e04b62a118bb625042e7b01858f14a6df0a3e3295a92cad3-merged.mount: Deactivated successfully.
Oct  1 13:00:20 np0005464891 podman[300629]: 2025-10-01 17:00:20.290046417 +0000 UTC m=+1.321744732 container remove d35c6b6a3bd04295e33d886c27546e2977d193348a27c295a018eb46c85706b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 13:00:20 np0005464891 systemd[1]: libpod-conmon-d35c6b6a3bd04295e33d886c27546e2977d193348a27c295a018eb46c85706b5.scope: Deactivated successfully.
Oct  1 13:00:20 np0005464891 podman[300828]: 2025-10-01 17:00:20.877128472 +0000 UTC m=+0.041496548 container create c2329a8b49c9b33c9728804fb9d35a547be72d90726565966d5a411d92ab4e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heisenberg, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 13:00:20 np0005464891 systemd[1]: Started libpod-conmon-c2329a8b49c9b33c9728804fb9d35a547be72d90726565966d5a411d92ab4e8b.scope.
Oct  1 13:00:20 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:00:20 np0005464891 podman[300828]: 2025-10-01 17:00:20.946004748 +0000 UTC m=+0.110372844 container init c2329a8b49c9b33c9728804fb9d35a547be72d90726565966d5a411d92ab4e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:00:20 np0005464891 podman[300828]: 2025-10-01 17:00:20.952152378 +0000 UTC m=+0.116520454 container start c2329a8b49c9b33c9728804fb9d35a547be72d90726565966d5a411d92ab4e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heisenberg, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 13:00:20 np0005464891 podman[300828]: 2025-10-01 17:00:20.856999411 +0000 UTC m=+0.021367507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:00:20 np0005464891 wizardly_heisenberg[300844]: 167 167
Oct  1 13:00:20 np0005464891 systemd[1]: libpod-c2329a8b49c9b33c9728804fb9d35a547be72d90726565966d5a411d92ab4e8b.scope: Deactivated successfully.
Oct  1 13:00:20 np0005464891 podman[300828]: 2025-10-01 17:00:20.96467794 +0000 UTC m=+0.129046016 container attach c2329a8b49c9b33c9728804fb9d35a547be72d90726565966d5a411d92ab4e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heisenberg, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:00:20 np0005464891 podman[300828]: 2025-10-01 17:00:20.965349998 +0000 UTC m=+0.129718074 container died c2329a8b49c9b33c9728804fb9d35a547be72d90726565966d5a411d92ab4e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heisenberg, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 13:00:21 np0005464891 systemd[1]: var-lib-containers-storage-overlay-69761c260f85bc3f6d55feee3a04e055c3831074a136dc7f7d8ce8cf6c8e6276-merged.mount: Deactivated successfully.
Oct  1 13:00:21 np0005464891 podman[300828]: 2025-10-01 17:00:21.037897666 +0000 UTC m=+0.202265742 container remove c2329a8b49c9b33c9728804fb9d35a547be72d90726565966d5a411d92ab4e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heisenberg, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:00:21 np0005464891 systemd[1]: libpod-conmon-c2329a8b49c9b33c9728804fb9d35a547be72d90726565966d5a411d92ab4e8b.scope: Deactivated successfully.
Oct  1 13:00:21 np0005464891 podman[300867]: 2025-10-01 17:00:21.238310447 +0000 UTC m=+0.040949323 container create 338b5dca5af6fb2408d69532d58bf0dff8260b7b577142307374a2910107c684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shirley, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct  1 13:00:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1818: 321 pgs: 321 active+clean; 366 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 374 KiB/s wr, 77 op/s
Oct  1 13:00:21 np0005464891 systemd[1]: Started libpod-conmon-338b5dca5af6fb2408d69532d58bf0dff8260b7b577142307374a2910107c684.scope.
Oct  1 13:00:21 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:00:21 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79bb3508cc75a889f349e1bfb5360779b25433f05a5e3551e436660f3c8215b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:00:21 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79bb3508cc75a889f349e1bfb5360779b25433f05a5e3551e436660f3c8215b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:00:21 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79bb3508cc75a889f349e1bfb5360779b25433f05a5e3551e436660f3c8215b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:00:21 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79bb3508cc75a889f349e1bfb5360779b25433f05a5e3551e436660f3c8215b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:00:21 np0005464891 podman[300867]: 2025-10-01 17:00:21.222043611 +0000 UTC m=+0.024682507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:00:21 np0005464891 podman[300867]: 2025-10-01 17:00:21.324274552 +0000 UTC m=+0.126913428 container init 338b5dca5af6fb2408d69532d58bf0dff8260b7b577142307374a2910107c684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shirley, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:00:21 np0005464891 podman[300867]: 2025-10-01 17:00:21.333570637 +0000 UTC m=+0.136209513 container start 338b5dca5af6fb2408d69532d58bf0dff8260b7b577142307374a2910107c684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shirley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 13:00:21 np0005464891 podman[300867]: 2025-10-01 17:00:21.337278158 +0000 UTC m=+0.139917034 container attach 338b5dca5af6fb2408d69532d58bf0dff8260b7b577142307374a2910107c684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:00:21 np0005464891 nova_compute[259907]: 2025-10-01 17:00:21.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]: {
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:    "0": [
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:        {
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "devices": [
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "/dev/loop3"
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            ],
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "lv_name": "ceph_lv0",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "lv_size": "21470642176",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "name": "ceph_lv0",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "tags": {
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.cluster_name": "ceph",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.crush_device_class": "",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.encrypted": "0",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.osd_id": "0",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.type": "block",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.vdo": "0"
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            },
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "type": "block",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "vg_name": "ceph_vg0"
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:        }
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:    ],
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:    "1": [
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:        {
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "devices": [
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "/dev/loop4"
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            ],
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "lv_name": "ceph_lv1",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "lv_size": "21470642176",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "name": "ceph_lv1",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "tags": {
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.cluster_name": "ceph",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.crush_device_class": "",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.encrypted": "0",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.osd_id": "1",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.type": "block",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.vdo": "0"
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            },
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "type": "block",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "vg_name": "ceph_vg1"
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:        }
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:    ],
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:    "2": [
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:        {
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "devices": [
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "/dev/loop5"
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            ],
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "lv_name": "ceph_lv2",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "lv_size": "21470642176",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "name": "ceph_lv2",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "tags": {
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.cluster_name": "ceph",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.crush_device_class": "",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.encrypted": "0",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.osd_id": "2",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.type": "block",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:                "ceph.vdo": "0"
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            },
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "type": "block",
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:            "vg_name": "ceph_vg2"
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:        }
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]:    ]
Oct  1 13:00:22 np0005464891 reverent_shirley[300882]: }
Oct  1 13:00:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 13:00:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:00:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 13:00:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:00:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 4.896484502181415e-06 of space, bias 1.0, pg target 0.0014689453506544247 quantized to 32 (current 32)
Oct  1 13:00:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:00:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003912736252197695 of space, bias 1.0, pg target 1.1738208756593085 quantized to 32 (current 32)
Oct  1 13:00:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:00:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.9013621638340822e-05 quantized to 32 (current 32)
Oct  1 13:00:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:00:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.19918670028325844 quantized to 32 (current 32)
Oct  1 13:00:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:00:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct  1 13:00:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:00:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:00:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:00:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct  1 13:00:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:00:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct  1 13:00:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:00:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:00:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:00:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct  1 13:00:22 np0005464891 systemd[1]: libpod-338b5dca5af6fb2408d69532d58bf0dff8260b7b577142307374a2910107c684.scope: Deactivated successfully.
Oct  1 13:00:22 np0005464891 conmon[300882]: conmon 338b5dca5af6fb2408d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-338b5dca5af6fb2408d69532d58bf0dff8260b7b577142307374a2910107c684.scope/container/memory.events
Oct  1 13:00:22 np0005464891 podman[300867]: 2025-10-01 17:00:22.265248351 +0000 UTC m=+1.067887227 container died 338b5dca5af6fb2408d69532d58bf0dff8260b7b577142307374a2910107c684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:00:22 np0005464891 systemd[1]: var-lib-containers-storage-overlay-79bb3508cc75a889f349e1bfb5360779b25433f05a5e3551e436660f3c8215b6-merged.mount: Deactivated successfully.
Oct  1 13:00:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:00:22 np0005464891 ovn_controller[152409]: 2025-10-01T17:00:22Z|00050|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.8 does not match offer 10.100.0.9
Oct  1 13:00:22 np0005464891 ovn_controller[152409]: 2025-10-01T17:00:22Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:62:31:95 10.100.0.9
Oct  1 13:00:22 np0005464891 podman[300867]: 2025-10-01 17:00:22.636241876 +0000 UTC m=+1.438880752 container remove 338b5dca5af6fb2408d69532d58bf0dff8260b7b577142307374a2910107c684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Oct  1 13:00:22 np0005464891 systemd[1]: libpod-conmon-338b5dca5af6fb2408d69532d58bf0dff8260b7b577142307374a2910107c684.scope: Deactivated successfully.
Oct  1 13:00:22 np0005464891 nova_compute[259907]: 2025-10-01 17:00:22.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:22 np0005464891 podman[300893]: 2025-10-01 17:00:22.721515152 +0000 UTC m=+0.431708709 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  1 13:00:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1819: 321 pgs: 321 active+clean; 366 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.1 KiB/s wr, 75 op/s
Oct  1 13:00:23 np0005464891 podman[301061]: 2025-10-01 17:00:23.296529855 +0000 UTC m=+0.077107013 container create 68a56a3aace13e6285eb0815bc2055b8b6873b95367ba4b47ab584efc4d280bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:00:23 np0005464891 podman[301061]: 2025-10-01 17:00:23.240397497 +0000 UTC m=+0.020974665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:00:23 np0005464891 systemd[1]: Started libpod-conmon-68a56a3aace13e6285eb0815bc2055b8b6873b95367ba4b47ab584efc4d280bc.scope.
Oct  1 13:00:23 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:00:23 np0005464891 podman[301061]: 2025-10-01 17:00:23.614548928 +0000 UTC m=+0.395126106 container init 68a56a3aace13e6285eb0815bc2055b8b6873b95367ba4b47ab584efc4d280bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_moore, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct  1 13:00:23 np0005464891 podman[301061]: 2025-10-01 17:00:23.623914124 +0000 UTC m=+0.404491272 container start 68a56a3aace13e6285eb0815bc2055b8b6873b95367ba4b47ab584efc4d280bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_moore, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:00:23 np0005464891 sharp_moore[301078]: 167 167
Oct  1 13:00:23 np0005464891 systemd[1]: libpod-68a56a3aace13e6285eb0815bc2055b8b6873b95367ba4b47ab584efc4d280bc.scope: Deactivated successfully.
Oct  1 13:00:23 np0005464891 podman[301061]: 2025-10-01 17:00:23.72706273 +0000 UTC m=+0.507639898 container attach 68a56a3aace13e6285eb0815bc2055b8b6873b95367ba4b47ab584efc4d280bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 13:00:23 np0005464891 podman[301061]: 2025-10-01 17:00:23.727749799 +0000 UTC m=+0.508326947 container died 68a56a3aace13e6285eb0815bc2055b8b6873b95367ba4b47ab584efc4d280bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_moore, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:00:24 np0005464891 systemd[1]: var-lib-containers-storage-overlay-4c3ed60124948fab380b79b1c58cc50bc00d2c47ec72fb0605b8262f5728547a-merged.mount: Deactivated successfully.
Oct  1 13:00:24 np0005464891 podman[301061]: 2025-10-01 17:00:24.531721385 +0000 UTC m=+1.312298533 container remove 68a56a3aace13e6285eb0815bc2055b8b6873b95367ba4b47ab584efc4d280bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:00:24 np0005464891 systemd[1]: libpod-conmon-68a56a3aace13e6285eb0815bc2055b8b6873b95367ba4b47ab584efc4d280bc.scope: Deactivated successfully.
Oct  1 13:00:24 np0005464891 podman[301103]: 2025-10-01 17:00:24.825591706 +0000 UTC m=+0.083255161 container create fbcb551e8e04613a2bcdd88b194abf1d15b7eaca70c51361ee50ddfb2e2cd903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 13:00:24 np0005464891 podman[301103]: 2025-10-01 17:00:24.779225297 +0000 UTC m=+0.036888802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:00:24 np0005464891 systemd[1]: Started libpod-conmon-fbcb551e8e04613a2bcdd88b194abf1d15b7eaca70c51361ee50ddfb2e2cd903.scope.
Oct  1 13:00:24 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:00:24 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91f969db57c30de03e74d286b8a86a552c9177aaad2531d0573932c516f4bc1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:00:24 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91f969db57c30de03e74d286b8a86a552c9177aaad2531d0573932c516f4bc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:00:24 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91f969db57c30de03e74d286b8a86a552c9177aaad2531d0573932c516f4bc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:00:24 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91f969db57c30de03e74d286b8a86a552c9177aaad2531d0573932c516f4bc1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:00:24 np0005464891 podman[301103]: 2025-10-01 17:00:24.985831926 +0000 UTC m=+0.243495411 container init fbcb551e8e04613a2bcdd88b194abf1d15b7eaca70c51361ee50ddfb2e2cd903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 13:00:24 np0005464891 podman[301103]: 2025-10-01 17:00:24.99724202 +0000 UTC m=+0.254905495 container start fbcb551e8e04613a2bcdd88b194abf1d15b7eaca70c51361ee50ddfb2e2cd903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:00:25 np0005464891 podman[301103]: 2025-10-01 17:00:25.097820814 +0000 UTC m=+0.355484289 container attach fbcb551e8e04613a2bcdd88b194abf1d15b7eaca70c51361ee50ddfb2e2cd903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:00:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1820: 321 pgs: 321 active+clean; 366 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 19 KiB/s wr, 106 op/s
Oct  1 13:00:25 np0005464891 podman[301142]: 2025-10-01 17:00:25.9514096 +0000 UTC m=+0.061402763 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  1 13:00:25 np0005464891 upbeat_northcutt[301120]: {
Oct  1 13:00:25 np0005464891 upbeat_northcutt[301120]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 13:00:25 np0005464891 upbeat_northcutt[301120]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:00:25 np0005464891 upbeat_northcutt[301120]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 13:00:25 np0005464891 upbeat_northcutt[301120]:        "osd_id": 2,
Oct  1 13:00:25 np0005464891 upbeat_northcutt[301120]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:00:25 np0005464891 upbeat_northcutt[301120]:        "type": "bluestore"
Oct  1 13:00:25 np0005464891 upbeat_northcutt[301120]:    },
Oct  1 13:00:25 np0005464891 upbeat_northcutt[301120]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 13:00:25 np0005464891 upbeat_northcutt[301120]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:00:25 np0005464891 upbeat_northcutt[301120]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 13:00:25 np0005464891 upbeat_northcutt[301120]:        "osd_id": 0,
Oct  1 13:00:25 np0005464891 upbeat_northcutt[301120]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:00:25 np0005464891 upbeat_northcutt[301120]:        "type": "bluestore"
Oct  1 13:00:25 np0005464891 upbeat_northcutt[301120]:    },
Oct  1 13:00:25 np0005464891 upbeat_northcutt[301120]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 13:00:25 np0005464891 upbeat_northcutt[301120]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:00:25 np0005464891 upbeat_northcutt[301120]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 13:00:25 np0005464891 upbeat_northcutt[301120]:        "osd_id": 1,
Oct  1 13:00:25 np0005464891 upbeat_northcutt[301120]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:00:25 np0005464891 upbeat_northcutt[301120]:        "type": "bluestore"
Oct  1 13:00:25 np0005464891 upbeat_northcutt[301120]:    }
Oct  1 13:00:25 np0005464891 upbeat_northcutt[301120]: }
Oct  1 13:00:26 np0005464891 systemd[1]: libpod-fbcb551e8e04613a2bcdd88b194abf1d15b7eaca70c51361ee50ddfb2e2cd903.scope: Deactivated successfully.
Oct  1 13:00:26 np0005464891 systemd[1]: libpod-fbcb551e8e04613a2bcdd88b194abf1d15b7eaca70c51361ee50ddfb2e2cd903.scope: Consumed 1.034s CPU time.
Oct  1 13:00:26 np0005464891 podman[301103]: 2025-10-01 17:00:26.025955943 +0000 UTC m=+1.283619408 container died fbcb551e8e04613a2bcdd88b194abf1d15b7eaca70c51361ee50ddfb2e2cd903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:00:26 np0005464891 systemd[1]: var-lib-containers-storage-overlay-b91f969db57c30de03e74d286b8a86a552c9177aaad2531d0573932c516f4bc1-merged.mount: Deactivated successfully.
Oct  1 13:00:26 np0005464891 podman[301103]: 2025-10-01 17:00:26.108851653 +0000 UTC m=+1.366515108 container remove fbcb551e8e04613a2bcdd88b194abf1d15b7eaca70c51361ee50ddfb2e2cd903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:00:26 np0005464891 systemd[1]: libpod-conmon-fbcb551e8e04613a2bcdd88b194abf1d15b7eaca70c51361ee50ddfb2e2cd903.scope: Deactivated successfully.
Oct  1 13:00:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 13:00:26 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:00:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 13:00:26 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:00:26 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev c549bf41-4e96-4ac5-b8e8-086f38473078 does not exist
Oct  1 13:00:26 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 9f7a240d-c55c-4e83-9650-86d75897dd0a does not exist
Oct  1 13:00:26 np0005464891 ovn_controller[152409]: 2025-10-01T17:00:26Z|00052|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.8 does not match offer 10.100.0.9
Oct  1 13:00:26 np0005464891 ovn_controller[152409]: 2025-10-01T17:00:26Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:62:31:95 10.100.0.9
Oct  1 13:00:26 np0005464891 nova_compute[259907]: 2025-10-01 17:00:26.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:27 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:00:27 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:00:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1821: 321 pgs: 321 active+clean; 366 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 532 KiB/s rd, 16 KiB/s wr, 45 op/s
Oct  1 13:00:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:00:27 np0005464891 ovn_controller[152409]: 2025-10-01T17:00:27Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:62:31:95 10.100.0.9
Oct  1 13:00:27 np0005464891 ovn_controller[152409]: 2025-10-01T17:00:27Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:62:31:95 10.100.0.9
Oct  1 13:00:27 np0005464891 nova_compute[259907]: 2025-10-01 17:00:27.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:28 np0005464891 nova_compute[259907]: 2025-10-01 17:00:28.922 2 DEBUG oslo_concurrency.lockutils [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "caeab115-da31-48fd-af65-2085a2c28333" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:00:28 np0005464891 nova_compute[259907]: 2025-10-01 17:00:28.923 2 DEBUG oslo_concurrency.lockutils [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "caeab115-da31-48fd-af65-2085a2c28333" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:00:28 np0005464891 nova_compute[259907]: 2025-10-01 17:00:28.923 2 DEBUG oslo_concurrency.lockutils [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "caeab115-da31-48fd-af65-2085a2c28333-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:00:28 np0005464891 nova_compute[259907]: 2025-10-01 17:00:28.923 2 DEBUG oslo_concurrency.lockutils [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "caeab115-da31-48fd-af65-2085a2c28333-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:00:28 np0005464891 nova_compute[259907]: 2025-10-01 17:00:28.924 2 DEBUG oslo_concurrency.lockutils [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "caeab115-da31-48fd-af65-2085a2c28333-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:00:28 np0005464891 nova_compute[259907]: 2025-10-01 17:00:28.925 2 INFO nova.compute.manager [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Terminating instance#033[00m
Oct  1 13:00:28 np0005464891 nova_compute[259907]: 2025-10-01 17:00:28.927 2 DEBUG nova.compute.manager [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 13:00:28 np0005464891 kernel: tap61aaf003-10 (unregistering): left promiscuous mode
Oct  1 13:00:28 np0005464891 NetworkManager[44940]: <info>  [1759338028.9975] device (tap61aaf003-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 13:00:29 np0005464891 ovn_controller[152409]: 2025-10-01T17:00:29Z|00224|binding|INFO|Releasing lport 61aaf003-104a-4194-89f9-18ce4d3dfabb from this chassis (sb_readonly=0)
Oct  1 13:00:29 np0005464891 ovn_controller[152409]: 2025-10-01T17:00:29Z|00225|binding|INFO|Setting lport 61aaf003-104a-4194-89f9-18ce4d3dfabb down in Southbound
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:29 np0005464891 ovn_controller[152409]: 2025-10-01T17:00:29Z|00226|binding|INFO|Removing iface tap61aaf003-10 ovn-installed in OVS
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:29 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:29.027 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:c1:82 10.100.0.10'], port_security=['fa:16:3e:cd:c1:82 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'caeab115-da31-48fd-af65-2085a2c28333', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2345ad6b-d676-4546-a17e-6f7405ff5f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bb5e44f7928546dfb674d53cd3727027', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c51767f2-742e-4209-a278-1c1f1e9af624', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08e741b0-61e8-4126-b98f-610a01494f2d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=61aaf003-104a-4194-89f9-18ce4d3dfabb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:29 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:29.029 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 61aaf003-104a-4194-89f9-18ce4d3dfabb in datapath 2345ad6b-d676-4546-a17e-6f7405ff5f24 unbound from our chassis#033[00m
Oct  1 13:00:29 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:29.031 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2345ad6b-d676-4546-a17e-6f7405ff5f24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 13:00:29 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:29.032 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[06929954-a072-4a2b-9852-d072f749b5cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:29 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:29.032 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24 namespace which is not needed anymore#033[00m
Oct  1 13:00:29 np0005464891 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Deactivated successfully.
Oct  1 13:00:29 np0005464891 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Consumed 17.409s CPU time.
Oct  1 13:00:29 np0005464891 systemd-machined[214891]: Machine qemu-23-instance-00000017 terminated.
Oct  1 13:00:29 np0005464891 neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24[299410]: [NOTICE]   (299429) : haproxy version is 2.8.14-c23fe91
Oct  1 13:00:29 np0005464891 neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24[299410]: [NOTICE]   (299429) : path to executable is /usr/sbin/haproxy
Oct  1 13:00:29 np0005464891 neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24[299410]: [WARNING]  (299429) : Exiting Master process...
Oct  1 13:00:29 np0005464891 neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24[299410]: [WARNING]  (299429) : Exiting Master process...
Oct  1 13:00:29 np0005464891 neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24[299410]: [ALERT]    (299429) : Current worker (299434) exited with code 143 (Terminated)
Oct  1 13:00:29 np0005464891 neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24[299410]: [WARNING]  (299429) : All workers exited. Exiting... (0)
Oct  1 13:00:29 np0005464891 systemd[1]: libpod-b698e00f4d7119f77649064dfabfe756ce2c6ef34a7edeb285798d7a0fec8660.scope: Deactivated successfully.
Oct  1 13:00:29 np0005464891 conmon[299410]: conmon b698e00f4d7119f77649 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b698e00f4d7119f77649064dfabfe756ce2c6ef34a7edeb285798d7a0fec8660.scope/container/memory.events
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.165 2 INFO nova.virt.libvirt.driver [-] [instance: caeab115-da31-48fd-af65-2085a2c28333] Instance destroyed successfully.#033[00m
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.165 2 DEBUG nova.objects.instance [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lazy-loading 'resources' on Instance uuid caeab115-da31-48fd-af65-2085a2c28333 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 13:00:29 np0005464891 podman[301258]: 2025-10-01 17:00:29.170762899 +0000 UTC m=+0.047354208 container died b698e00f4d7119f77649064dfabfe756ce2c6ef34a7edeb285798d7a0fec8660 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.184 2 DEBUG nova.virt.libvirt.vif [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T16:59:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1137424607',display_name='tempest-TestEncryptedCinderVolumes-server-1137424607',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1137424607',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIhJiMuVwk4EQ7wkCYcLaeTsPomALwyR3FBK+97oa6ynrLvPrKJKnE71uKm0O/hFbPLnI7X22RnrmUili5anoyjadz+yIM+FZfOiuxhlfC8kCRP4tSOOTh7DLMRl7W7xOg==',key_name='tempest-TestEncryptedCinderVolumes-620996693',keypairs=<?>,launch_index=0,launched_at=2025-10-01T16:59:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bb5e44f7928546dfb674d53cd3727027',ramdisk_id='',reservation_id='r-nkl87b2i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-803701988',owner_user_name='tempest-TestEncryptedCinderVolumes-803701988-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T16:59:49Z,user_data=None,user_id='906d3d29e27b49c1860f5397c6028d96',uuid=caeab115-da31-48fd-af65-2085a2c28333,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "61aaf003-104a-4194-89f9-18ce4d3dfabb", "address": "fa:16:3e:cd:c1:82", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61aaf003-10", "ovs_interfaceid": "61aaf003-104a-4194-89f9-18ce4d3dfabb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.185 2 DEBUG nova.network.os_vif_util [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Converting VIF {"id": "61aaf003-104a-4194-89f9-18ce4d3dfabb", "address": "fa:16:3e:cd:c1:82", "network": {"id": "2345ad6b-d676-4546-a17e-6f7405ff5f24", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-76227351-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bb5e44f7928546dfb674d53cd3727027", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61aaf003-10", "ovs_interfaceid": "61aaf003-104a-4194-89f9-18ce4d3dfabb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.185 2 DEBUG nova.network.os_vif_util [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cd:c1:82,bridge_name='br-int',has_traffic_filtering=True,id=61aaf003-104a-4194-89f9-18ce4d3dfabb,network=Network(2345ad6b-d676-4546-a17e-6f7405ff5f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61aaf003-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.186 2 DEBUG os_vif [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cd:c1:82,bridge_name='br-int',has_traffic_filtering=True,id=61aaf003-104a-4194-89f9-18ce4d3dfabb,network=Network(2345ad6b-d676-4546-a17e-6f7405ff5f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61aaf003-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.188 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61aaf003-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.196 2 INFO os_vif [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cd:c1:82,bridge_name='br-int',has_traffic_filtering=True,id=61aaf003-104a-4194-89f9-18ce4d3dfabb,network=Network(2345ad6b-d676-4546-a17e-6f7405ff5f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61aaf003-10')#033[00m
Oct  1 13:00:29 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b698e00f4d7119f77649064dfabfe756ce2c6ef34a7edeb285798d7a0fec8660-userdata-shm.mount: Deactivated successfully.
Oct  1 13:00:29 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ace6416769f5b01f4c5cd5405a25ce02070ba686ecc21bac93fc68be326bde0f-merged.mount: Deactivated successfully.
Oct  1 13:00:29 np0005464891 podman[301258]: 2025-10-01 17:00:29.220396969 +0000 UTC m=+0.096988278 container cleanup b698e00f4d7119f77649064dfabfe756ce2c6ef34a7edeb285798d7a0fec8660 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 13:00:29 np0005464891 systemd[1]: libpod-conmon-b698e00f4d7119f77649064dfabfe756ce2c6ef34a7edeb285798d7a0fec8660.scope: Deactivated successfully.
Oct  1 13:00:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1822: 321 pgs: 321 active+clean; 366 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 532 KiB/s rd, 16 KiB/s wr, 45 op/s
Oct  1 13:00:29 np0005464891 podman[301315]: 2025-10-01 17:00:29.287565419 +0000 UTC m=+0.042982088 container remove b698e00f4d7119f77649064dfabfe756ce2c6ef34a7edeb285798d7a0fec8660 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  1 13:00:29 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:29.295 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[397b51c2-669c-4a7f-926d-fa589d8e1eab]: (4, ('Wed Oct  1 05:00:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24 (b698e00f4d7119f77649064dfabfe756ce2c6ef34a7edeb285798d7a0fec8660)\nb698e00f4d7119f77649064dfabfe756ce2c6ef34a7edeb285798d7a0fec8660\nWed Oct  1 05:00:29 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24 (b698e00f4d7119f77649064dfabfe756ce2c6ef34a7edeb285798d7a0fec8660)\nb698e00f4d7119f77649064dfabfe756ce2c6ef34a7edeb285798d7a0fec8660\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:29 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:29.297 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[2aacffc8-7119-4b46-8209-d45dc483ca49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:29 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:29.298 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2345ad6b-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:29 np0005464891 kernel: tap2345ad6b-d0: left promiscuous mode
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:29 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:29.316 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[5103682a-3792-47ca-9788-e0ac82162bb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:29 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:29.357 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[17d518f8-3924-489a-a2a3-1549e4baa7df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:29 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:29.359 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[9449176a-e56e-4be1-a2ca-23b2ab28b26d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.368 2 INFO nova.virt.libvirt.driver [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Deleting instance files /var/lib/nova/instances/caeab115-da31-48fd-af65-2085a2c28333_del#033[00m
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.369 2 INFO nova.virt.libvirt.driver [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Deletion of /var/lib/nova/instances/caeab115-da31-48fd-af65-2085a2c28333_del complete#033[00m
Oct  1 13:00:29 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:29.376 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff8d113-d2f1-4d1c-aad8-1e91fd2eeb45]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493294, 'reachable_time': 30727, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301334, 'error': None, 'target': 'ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:29 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:29.379 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2345ad6b-d676-4546-a17e-6f7405ff5f24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 13:00:29 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:00:29.379 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[0b78fd04-8045-408c-91cd-e458b52973e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:00:29 np0005464891 systemd[1]: run-netns-ovnmeta\x2d2345ad6b\x2dd676\x2d4546\x2da17e\x2d6f7405ff5f24.mount: Deactivated successfully.
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.436 2 INFO nova.compute.manager [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Took 0.51 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.437 2 DEBUG oslo.service.loopingcall [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.437 2 DEBUG nova.compute.manager [-] [instance: caeab115-da31-48fd-af65-2085a2c28333] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.437 2 DEBUG nova.network.neutron [-] [instance: caeab115-da31-48fd-af65-2085a2c28333] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.492 2 DEBUG nova.compute.manager [req-3834da65-37b1-42a1-b710-c63b6e7d7cd0 req-dd4d8a9a-2ca9-466e-9812-55d949831da6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Received event network-vif-unplugged-61aaf003-104a-4194-89f9-18ce4d3dfabb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.493 2 DEBUG oslo_concurrency.lockutils [req-3834da65-37b1-42a1-b710-c63b6e7d7cd0 req-dd4d8a9a-2ca9-466e-9812-55d949831da6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "caeab115-da31-48fd-af65-2085a2c28333-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.493 2 DEBUG oslo_concurrency.lockutils [req-3834da65-37b1-42a1-b710-c63b6e7d7cd0 req-dd4d8a9a-2ca9-466e-9812-55d949831da6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "caeab115-da31-48fd-af65-2085a2c28333-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.493 2 DEBUG oslo_concurrency.lockutils [req-3834da65-37b1-42a1-b710-c63b6e7d7cd0 req-dd4d8a9a-2ca9-466e-9812-55d949831da6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "caeab115-da31-48fd-af65-2085a2c28333-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.494 2 DEBUG nova.compute.manager [req-3834da65-37b1-42a1-b710-c63b6e7d7cd0 req-dd4d8a9a-2ca9-466e-9812-55d949831da6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] No waiting events found dispatching network-vif-unplugged-61aaf003-104a-4194-89f9-18ce4d3dfabb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 13:00:29 np0005464891 nova_compute[259907]: 2025-10-01 17:00:29.494 2 DEBUG nova.compute.manager [req-3834da65-37b1-42a1-b710-c63b6e7d7cd0 req-dd4d8a9a-2ca9-466e-9812-55d949831da6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Received event network-vif-unplugged-61aaf003-104a-4194-89f9-18ce4d3dfabb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 13:00:30 np0005464891 nova_compute[259907]: 2025-10-01 17:00:30.293 2 DEBUG nova.network.neutron [-] [instance: caeab115-da31-48fd-af65-2085a2c28333] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:00:30 np0005464891 nova_compute[259907]: 2025-10-01 17:00:30.316 2 INFO nova.compute.manager [-] [instance: caeab115-da31-48fd-af65-2085a2c28333] Took 0.88 seconds to deallocate network for instance.#033[00m
Oct  1 13:00:30 np0005464891 nova_compute[259907]: 2025-10-01 17:00:30.504 2 INFO nova.compute.manager [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Took 0.19 seconds to detach 1 volumes for instance.#033[00m
Oct  1 13:00:30 np0005464891 nova_compute[259907]: 2025-10-01 17:00:30.627 2 DEBUG oslo_concurrency.lockutils [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:00:30 np0005464891 nova_compute[259907]: 2025-10-01 17:00:30.628 2 DEBUG oslo_concurrency.lockutils [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:00:30 np0005464891 nova_compute[259907]: 2025-10-01 17:00:30.913 2 DEBUG oslo_concurrency.processutils [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:00:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1823: 321 pgs: 321 active+clean; 368 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 655 KiB/s rd, 30 KiB/s wr, 58 op/s
Oct  1 13:00:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:00:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2867719856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:00:31 np0005464891 nova_compute[259907]: 2025-10-01 17:00:31.403 2 DEBUG oslo_concurrency.processutils [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:00:31 np0005464891 nova_compute[259907]: 2025-10-01 17:00:31.409 2 DEBUG nova.compute.provider_tree [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:00:31 np0005464891 nova_compute[259907]: 2025-10-01 17:00:31.426 2 DEBUG nova.scheduler.client.report [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:00:31 np0005464891 nova_compute[259907]: 2025-10-01 17:00:31.453 2 DEBUG oslo_concurrency.lockutils [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:00:31 np0005464891 nova_compute[259907]: 2025-10-01 17:00:31.490 2 INFO nova.scheduler.client.report [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Deleted allocations for instance caeab115-da31-48fd-af65-2085a2c28333#033[00m
Oct  1 13:00:31 np0005464891 nova_compute[259907]: 2025-10-01 17:00:31.558 2 DEBUG oslo_concurrency.lockutils [None req-9af7862d-1407-42d8-a90b-6805b2fa8a40 906d3d29e27b49c1860f5397c6028d96 bb5e44f7928546dfb674d53cd3727027 - - default default] Lock "caeab115-da31-48fd-af65-2085a2c28333" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:00:31 np0005464891 nova_compute[259907]: 2025-10-01 17:00:31.580 2 DEBUG nova.compute.manager [req-9884751c-2214-4cf8-b706-70cd7b7d56a2 req-b9d3ebcf-967e-4cf7-ac5f-5fe508cab00b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Received event network-vif-plugged-61aaf003-104a-4194-89f9-18ce4d3dfabb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:00:31 np0005464891 nova_compute[259907]: 2025-10-01 17:00:31.581 2 DEBUG oslo_concurrency.lockutils [req-9884751c-2214-4cf8-b706-70cd7b7d56a2 req-b9d3ebcf-967e-4cf7-ac5f-5fe508cab00b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "caeab115-da31-48fd-af65-2085a2c28333-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:00:31 np0005464891 nova_compute[259907]: 2025-10-01 17:00:31.581 2 DEBUG oslo_concurrency.lockutils [req-9884751c-2214-4cf8-b706-70cd7b7d56a2 req-b9d3ebcf-967e-4cf7-ac5f-5fe508cab00b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "caeab115-da31-48fd-af65-2085a2c28333-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:00:31 np0005464891 nova_compute[259907]: 2025-10-01 17:00:31.581 2 DEBUG oslo_concurrency.lockutils [req-9884751c-2214-4cf8-b706-70cd7b7d56a2 req-b9d3ebcf-967e-4cf7-ac5f-5fe508cab00b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "caeab115-da31-48fd-af65-2085a2c28333-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:00:31 np0005464891 nova_compute[259907]: 2025-10-01 17:00:31.582 2 DEBUG nova.compute.manager [req-9884751c-2214-4cf8-b706-70cd7b7d56a2 req-b9d3ebcf-967e-4cf7-ac5f-5fe508cab00b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] No waiting events found dispatching network-vif-plugged-61aaf003-104a-4194-89f9-18ce4d3dfabb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 13:00:31 np0005464891 nova_compute[259907]: 2025-10-01 17:00:31.582 2 WARNING nova.compute.manager [req-9884751c-2214-4cf8-b706-70cd7b7d56a2 req-b9d3ebcf-967e-4cf7-ac5f-5fe508cab00b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Received unexpected event network-vif-plugged-61aaf003-104a-4194-89f9-18ce4d3dfabb for instance with vm_state deleted and task_state None.#033[00m
Oct  1 13:00:31 np0005464891 nova_compute[259907]: 2025-10-01 17:00:31.582 2 DEBUG nova.compute.manager [req-9884751c-2214-4cf8-b706-70cd7b7d56a2 req-b9d3ebcf-967e-4cf7-ac5f-5fe508cab00b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: caeab115-da31-48fd-af65-2085a2c28333] Received event network-vif-deleted-61aaf003-104a-4194-89f9-18ce4d3dfabb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:00:31 np0005464891 nova_compute[259907]: 2025-10-01 17:00:31.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:00:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1824: 321 pgs: 321 active+clean; 368 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 752 KiB/s rd, 27 KiB/s wr, 64 op/s
Oct  1 13:00:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:00:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2126433679' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:00:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:00:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2126433679' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:00:34 np0005464891 nova_compute[259907]: 2025-10-01 17:00:34.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:34 np0005464891 nova_compute[259907]: 2025-10-01 17:00:34.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:00:34 np0005464891 nova_compute[259907]: 2025-10-01 17:00:34.806 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:00:34 np0005464891 nova_compute[259907]: 2025-10-01 17:00:34.831 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:00:34 np0005464891 nova_compute[259907]: 2025-10-01 17:00:34.832 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:00:34 np0005464891 nova_compute[259907]: 2025-10-01 17:00:34.832 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:00:34 np0005464891 nova_compute[259907]: 2025-10-01 17:00:34.833 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 13:00:34 np0005464891 nova_compute[259907]: 2025-10-01 17:00:34.833 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:00:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:00:35 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3544342543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:00:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1825: 321 pgs: 321 active+clean; 364 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 666 KiB/s rd, 31 KiB/s wr, 71 op/s
Oct  1 13:00:35 np0005464891 nova_compute[259907]: 2025-10-01 17:00:35.271 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:00:35 np0005464891 nova_compute[259907]: 2025-10-01 17:00:35.346 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 13:00:35 np0005464891 nova_compute[259907]: 2025-10-01 17:00:35.346 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 13:00:35 np0005464891 nova_compute[259907]: 2025-10-01 17:00:35.580 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 13:00:35 np0005464891 nova_compute[259907]: 2025-10-01 17:00:35.581 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4181MB free_disk=59.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 13:00:35 np0005464891 nova_compute[259907]: 2025-10-01 17:00:35.581 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:00:35 np0005464891 nova_compute[259907]: 2025-10-01 17:00:35.582 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:00:35 np0005464891 nova_compute[259907]: 2025-10-01 17:00:35.690 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance 1affd3fe-8ee0-455e-bcef-79fe7bcb283d actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 13:00:35 np0005464891 nova_compute[259907]: 2025-10-01 17:00:35.690 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 13:00:35 np0005464891 nova_compute[259907]: 2025-10-01 17:00:35.691 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 13:00:35 np0005464891 nova_compute[259907]: 2025-10-01 17:00:35.739 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:00:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:00:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4110678286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:00:36 np0005464891 nova_compute[259907]: 2025-10-01 17:00:36.183 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:00:36 np0005464891 nova_compute[259907]: 2025-10-01 17:00:36.190 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:00:36 np0005464891 nova_compute[259907]: 2025-10-01 17:00:36.206 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:00:36 np0005464891 nova_compute[259907]: 2025-10-01 17:00:36.227 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 13:00:36 np0005464891 nova_compute[259907]: 2025-10-01 17:00:36.227 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:00:36 np0005464891 nova_compute[259907]: 2025-10-01 17:00:36.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:36 np0005464891 podman[301403]: 2025-10-01 17:00:36.962225458 +0000 UTC m=+0.057542138 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  1 13:00:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:00:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/467929184' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:00:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:00:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/467929184' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:00:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1826: 321 pgs: 321 active+clean; 364 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 232 KiB/s rd, 18 KiB/s wr, 34 op/s
Oct  1 13:00:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:00:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/560150037' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:00:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:00:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/560150037' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:00:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:00:39 np0005464891 nova_compute[259907]: 2025-10-01 17:00:39.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:39 np0005464891 nova_compute[259907]: 2025-10-01 17:00:39.223 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:00:39 np0005464891 nova_compute[259907]: 2025-10-01 17:00:39.224 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:00:39 np0005464891 nova_compute[259907]: 2025-10-01 17:00:39.224 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 13:00:39 np0005464891 nova_compute[259907]: 2025-10-01 17:00:39.224 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 13:00:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1827: 321 pgs: 321 active+clean; 352 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 235 KiB/s rd, 19 KiB/s wr, 39 op/s
Oct  1 13:00:39 np0005464891 nova_compute[259907]: 2025-10-01 17:00:39.383 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "refresh_cache-1affd3fe-8ee0-455e-bcef-79fe7bcb283d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:00:39 np0005464891 nova_compute[259907]: 2025-10-01 17:00:39.383 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquired lock "refresh_cache-1affd3fe-8ee0-455e-bcef-79fe7bcb283d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:00:39 np0005464891 nova_compute[259907]: 2025-10-01 17:00:39.383 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  1 13:00:39 np0005464891 nova_compute[259907]: 2025-10-01 17:00:39.384 2 DEBUG nova.objects.instance [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1affd3fe-8ee0-455e-bcef-79fe7bcb283d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 13:00:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1828: 321 pgs: 321 active+clean; 352 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 240 KiB/s rd, 19 KiB/s wr, 46 op/s
Oct  1 13:00:41 np0005464891 nova_compute[259907]: 2025-10-01 17:00:41.535 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Updating instance_info_cache with network_info: [{"id": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "address": "fa:16:3e:62:31:95", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee3f438c-5d", "ovs_interfaceid": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:00:41 np0005464891 nova_compute[259907]: 2025-10-01 17:00:41.636 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Releasing lock "refresh_cache-1affd3fe-8ee0-455e-bcef-79fe7bcb283d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:00:41 np0005464891 nova_compute[259907]: 2025-10-01 17:00:41.637 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  1 13:00:41 np0005464891 nova_compute[259907]: 2025-10-01 17:00:41.637 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:00:41 np0005464891 nova_compute[259907]: 2025-10-01 17:00:41.637 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:00:41 np0005464891 nova_compute[259907]: 2025-10-01 17:00:41.638 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 13:00:41 np0005464891 ovn_controller[152409]: 2025-10-01T17:00:41Z|00227|binding|INFO|Releasing lport d971881d-8d8b-44dc-b0b0-4cd0065c0105 from this chassis (sb_readonly=0)
Oct  1 13:00:41 np0005464891 nova_compute[259907]: 2025-10-01 17:00:41.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:41 np0005464891 nova_compute[259907]: 2025-10-01 17:00:41.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:00:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:00:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:00:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:00:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:00:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:00:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:00:42 np0005464891 nova_compute[259907]: 2025-10-01 17:00:42.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:00:42 np0005464891 nova_compute[259907]: 2025-10-01 17:00:42.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:00:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1829: 321 pgs: 321 active+clean; 352 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 117 KiB/s rd, 5.2 KiB/s wr, 33 op/s
Oct  1 13:00:43 np0005464891 nova_compute[259907]: 2025-10-01 17:00:43.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:00:44 np0005464891 nova_compute[259907]: 2025-10-01 17:00:44.163 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759338029.1624665, caeab115-da31-48fd-af65-2085a2c28333 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 13:00:44 np0005464891 nova_compute[259907]: 2025-10-01 17:00:44.163 2 INFO nova.compute.manager [-] [instance: caeab115-da31-48fd-af65-2085a2c28333] VM Stopped (Lifecycle Event)#033[00m
Oct  1 13:00:44 np0005464891 nova_compute[259907]: 2025-10-01 17:00:44.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:44 np0005464891 nova_compute[259907]: 2025-10-01 17:00:44.202 2 DEBUG nova.compute.manager [None req-86e6efaa-ac4e-4a68-9037-3ef7799781e2 - - - - - -] [instance: caeab115-da31-48fd-af65-2085a2c28333] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 13:00:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1830: 321 pgs: 321 active+clean; 352 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 5.6 KiB/s wr, 27 op/s
Oct  1 13:00:45 np0005464891 nova_compute[259907]: 2025-10-01 17:00:45.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:00:45 np0005464891 nova_compute[259907]: 2025-10-01 17:00:45.806 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  1 13:00:46 np0005464891 nova_compute[259907]: 2025-10-01 17:00:46.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1831: 321 pgs: 321 active+clean; 352 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 1.8 KiB/s wr, 12 op/s
Oct  1 13:00:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:00:47 np0005464891 ovn_controller[152409]: 2025-10-01T17:00:47Z|00228|binding|INFO|Releasing lport d971881d-8d8b-44dc-b0b0-4cd0065c0105 from this chassis (sb_readonly=0)
Oct  1 13:00:47 np0005464891 nova_compute[259907]: 2025-10-01 17:00:47.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:47 np0005464891 podman[301422]: 2025-10-01 17:00:47.973302083 +0000 UTC m=+0.092523695 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  1 13:00:49 np0005464891 nova_compute[259907]: 2025-10-01 17:00:49.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1832: 321 pgs: 321 active+clean; 352 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 1.8 KiB/s wr, 12 op/s
Oct  1 13:00:49 np0005464891 nova_compute[259907]: 2025-10-01 17:00:49.894 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:00:49 np0005464891 nova_compute[259907]: 2025-10-01 17:00:49.895 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  1 13:00:49 np0005464891 nova_compute[259907]: 2025-10-01 17:00:49.939 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  1 13:00:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e443 do_prune osdmap full prune enabled
Oct  1 13:00:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e444 e444: 3 total, 3 up, 3 in
Oct  1 13:00:50 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e444: 3 total, 3 up, 3 in
Oct  1 13:00:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1834: 321 pgs: 321 active+clean; 352 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Oct  1 13:00:51 np0005464891 nova_compute[259907]: 2025-10-01 17:00:51.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:00:51 np0005464891 nova_compute[259907]: 2025-10-01 17:00:51.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e444 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:00:53 np0005464891 podman[301449]: 2025-10-01 17:00:53.006122886 +0000 UTC m=+0.115044303 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  1 13:00:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1835: 321 pgs: 321 active+clean; 352 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 2.0 KiB/s wr, 15 op/s
Oct  1 13:00:54 np0005464891 nova_compute[259907]: 2025-10-01 17:00:54.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1836: 321 pgs: 321 active+clean; 352 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 262 KiB/s rd, 1.7 KiB/s wr, 19 op/s
Oct  1 13:00:56 np0005464891 nova_compute[259907]: 2025-10-01 17:00:56.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:57 np0005464891 podman[301469]: 2025-10-01 17:00:57.033602475 +0000 UTC m=+0.147732858 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  1 13:00:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1837: 321 pgs: 321 active+clean; 352 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 262 KiB/s rd, 1.7 KiB/s wr, 19 op/s
Oct  1 13:00:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e444 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:00:59 np0005464891 nova_compute[259907]: 2025-10-01 17:00:59.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:00:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1838: 321 pgs: 321 active+clean; 352 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 272 KiB/s rd, 5.6 KiB/s wr, 32 op/s
Oct  1 13:01:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1839: 321 pgs: 321 active+clean; 352 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 186 KiB/s rd, 6.9 KiB/s wr, 28 op/s
Oct  1 13:01:01 np0005464891 nova_compute[259907]: 2025-10-01 17:01:01.611 2 DEBUG oslo_concurrency.lockutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "2e2fb6e1-ace5-45d4-a1ea-b41c2b903193" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:01:01 np0005464891 nova_compute[259907]: 2025-10-01 17:01:01.612 2 DEBUG oslo_concurrency.lockutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "2e2fb6e1-ace5-45d4-a1ea-b41c2b903193" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:01:01 np0005464891 nova_compute[259907]: 2025-10-01 17:01:01.682 2 DEBUG nova.compute.manager [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 13:01:01 np0005464891 nova_compute[259907]: 2025-10-01 17:01:01.880 2 DEBUG oslo_concurrency.lockutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:01:01 np0005464891 nova_compute[259907]: 2025-10-01 17:01:01.881 2 DEBUG oslo_concurrency.lockutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:01:01 np0005464891 nova_compute[259907]: 2025-10-01 17:01:01.888 2 DEBUG nova.virt.hardware [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 13:01:01 np0005464891 nova_compute[259907]: 2025-10-01 17:01:01.888 2 INFO nova.compute.claims [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 13:01:02 np0005464891 nova_compute[259907]: 2025-10-01 17:01:02.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:02 np0005464891 nova_compute[259907]: 2025-10-01 17:01:02.105 2 DEBUG oslo_concurrency.processutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:01:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:01:02 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/710191137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:01:02 np0005464891 nova_compute[259907]: 2025-10-01 17:01:02.525 2 DEBUG oslo_concurrency.processutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:01:02 np0005464891 nova_compute[259907]: 2025-10-01 17:01:02.531 2 DEBUG nova.compute.provider_tree [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:01:02 np0005464891 nova_compute[259907]: 2025-10-01 17:01:02.652 2 DEBUG nova.scheduler.client.report [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:01:02 np0005464891 nova_compute[259907]: 2025-10-01 17:01:02.836 2 DEBUG oslo_concurrency.lockutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:01:02 np0005464891 nova_compute[259907]: 2025-10-01 17:01:02.837 2 DEBUG nova.compute.manager [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 13:01:02 np0005464891 nova_compute[259907]: 2025-10-01 17:01:02.911 2 DEBUG nova.compute.manager [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 13:01:02 np0005464891 nova_compute[259907]: 2025-10-01 17:01:02.912 2 DEBUG nova.network.neutron [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 13:01:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e444 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:01:03 np0005464891 nova_compute[259907]: 2025-10-01 17:01:03.005 2 INFO nova.virt.libvirt.driver [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 13:01:03 np0005464891 nova_compute[259907]: 2025-10-01 17:01:03.028 2 DEBUG nova.compute.manager [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 13:01:03 np0005464891 nova_compute[259907]: 2025-10-01 17:01:03.084 2 INFO nova.virt.block_device [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Booting with volume edea6e66-22de-4de0-a7d6-ac0cdf5d61ad at /dev/vda#033[00m
Oct  1 13:01:03 np0005464891 nova_compute[259907]: 2025-10-01 17:01:03.207 2 DEBUG os_brick.utils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 13:01:03 np0005464891 nova_compute[259907]: 2025-10-01 17:01:03.209 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:01:03 np0005464891 nova_compute[259907]: 2025-10-01 17:01:03.227 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:01:03 np0005464891 nova_compute[259907]: 2025-10-01 17:01:03.227 741 DEBUG oslo.privsep.daemon [-] privsep: reply[d6134c8a-71bd-4b38-803b-5c03b72e015b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:01:03 np0005464891 nova_compute[259907]: 2025-10-01 17:01:03.229 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:01:03 np0005464891 nova_compute[259907]: 2025-10-01 17:01:03.237 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:01:03 np0005464891 nova_compute[259907]: 2025-10-01 17:01:03.238 741 DEBUG oslo.privsep.daemon [-] privsep: reply[b18e5b14-bea4-48a6-9a57-1e75a9718fd9]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:01:03 np0005464891 nova_compute[259907]: 2025-10-01 17:01:03.239 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:01:03 np0005464891 nova_compute[259907]: 2025-10-01 17:01:03.249 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:01:03 np0005464891 nova_compute[259907]: 2025-10-01 17:01:03.249 741 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b1720c-b785-428c-b62e-d1143e14b56c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:01:03 np0005464891 nova_compute[259907]: 2025-10-01 17:01:03.251 741 DEBUG oslo.privsep.daemon [-] privsep: reply[dde35276-9102-40a3-80a8-e3b0abc86480]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:01:03 np0005464891 nova_compute[259907]: 2025-10-01 17:01:03.251 2 DEBUG oslo_concurrency.processutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:01:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1840: 321 pgs: 321 active+clean; 352 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 160 KiB/s rd, 6.0 KiB/s wr, 24 op/s
Oct  1 13:01:03 np0005464891 nova_compute[259907]: 2025-10-01 17:01:03.283 2 DEBUG oslo_concurrency.processutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:01:03 np0005464891 nova_compute[259907]: 2025-10-01 17:01:03.286 2 DEBUG os_brick.initiator.connectors.lightos [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 13:01:03 np0005464891 nova_compute[259907]: 2025-10-01 17:01:03.287 2 DEBUG os_brick.initiator.connectors.lightos [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 13:01:03 np0005464891 nova_compute[259907]: 2025-10-01 17:01:03.287 2 DEBUG os_brick.initiator.connectors.lightos [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 13:01:03 np0005464891 nova_compute[259907]: 2025-10-01 17:01:03.287 2 DEBUG os_brick.utils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] <== get_connector_properties: return (79ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 13:01:03 np0005464891 nova_compute[259907]: 2025-10-01 17:01:03.288 2 DEBUG nova.virt.block_device [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Updating existing volume attachment record: dc0222ef-41f1-40a5-aa56-618eadda1fd9 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 13:01:03 np0005464891 nova_compute[259907]: 2025-10-01 17:01:03.357 2 DEBUG nova.policy [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1280014cdfb74333ae8d71c78116e646', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8318b65fa88942a99937a0d198a04a9c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 13:01:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:01:03 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3816788144' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:01:04 np0005464891 nova_compute[259907]: 2025-10-01 17:01:04.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:04 np0005464891 nova_compute[259907]: 2025-10-01 17:01:04.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:04.295 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:01:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:04.297 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 13:01:04 np0005464891 nova_compute[259907]: 2025-10-01 17:01:04.386 2 DEBUG nova.network.neutron [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Successfully created port: f17b14b5-e93b-4f50-b43d-2137edec2647 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 13:01:04 np0005464891 nova_compute[259907]: 2025-10-01 17:01:04.498 2 DEBUG nova.compute.manager [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 13:01:04 np0005464891 nova_compute[259907]: 2025-10-01 17:01:04.500 2 DEBUG nova.virt.libvirt.driver [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 13:01:04 np0005464891 nova_compute[259907]: 2025-10-01 17:01:04.500 2 INFO nova.virt.libvirt.driver [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Creating image(s)#033[00m
Oct  1 13:01:04 np0005464891 nova_compute[259907]: 2025-10-01 17:01:04.501 2 DEBUG nova.virt.libvirt.driver [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  1 13:01:04 np0005464891 nova_compute[259907]: 2025-10-01 17:01:04.501 2 DEBUG nova.virt.libvirt.driver [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Ensure instance console log exists: /var/lib/nova/instances/2e2fb6e1-ace5-45d4-a1ea-b41c2b903193/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 13:01:04 np0005464891 nova_compute[259907]: 2025-10-01 17:01:04.501 2 DEBUG oslo_concurrency.lockutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:01:04 np0005464891 nova_compute[259907]: 2025-10-01 17:01:04.502 2 DEBUG oslo_concurrency.lockutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:01:04 np0005464891 nova_compute[259907]: 2025-10-01 17:01:04.502 2 DEBUG oslo_concurrency.lockutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:01:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1841: 321 pgs: 321 active+clean; 352 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 5.3 KiB/s wr, 14 op/s
Oct  1 13:01:05 np0005464891 nova_compute[259907]: 2025-10-01 17:01:05.379 2 DEBUG nova.network.neutron [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Successfully updated port: f17b14b5-e93b-4f50-b43d-2137edec2647 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 13:01:05 np0005464891 nova_compute[259907]: 2025-10-01 17:01:05.396 2 DEBUG oslo_concurrency.lockutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "refresh_cache-2e2fb6e1-ace5-45d4-a1ea-b41c2b903193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:01:05 np0005464891 nova_compute[259907]: 2025-10-01 17:01:05.396 2 DEBUG oslo_concurrency.lockutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquired lock "refresh_cache-2e2fb6e1-ace5-45d4-a1ea-b41c2b903193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:01:05 np0005464891 nova_compute[259907]: 2025-10-01 17:01:05.396 2 DEBUG nova.network.neutron [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 13:01:05 np0005464891 nova_compute[259907]: 2025-10-01 17:01:05.517 2 DEBUG nova.compute.manager [req-a333e0b2-0494-4522-ae0a-86bdce537c79 req-4fbe46cf-c0f8-437e-b3a6-923be1638e52 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Received event network-changed-f17b14b5-e93b-4f50-b43d-2137edec2647 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:01:05 np0005464891 nova_compute[259907]: 2025-10-01 17:01:05.518 2 DEBUG nova.compute.manager [req-a333e0b2-0494-4522-ae0a-86bdce537c79 req-4fbe46cf-c0f8-437e-b3a6-923be1638e52 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Refreshing instance network info cache due to event network-changed-f17b14b5-e93b-4f50-b43d-2137edec2647. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 13:01:05 np0005464891 nova_compute[259907]: 2025-10-01 17:01:05.518 2 DEBUG oslo_concurrency.lockutils [req-a333e0b2-0494-4522-ae0a-86bdce537c79 req-4fbe46cf-c0f8-437e-b3a6-923be1638e52 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-2e2fb6e1-ace5-45d4-a1ea-b41c2b903193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:01:05 np0005464891 nova_compute[259907]: 2025-10-01 17:01:05.575 2 DEBUG nova.network.neutron [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.328 2 DEBUG nova.network.neutron [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Updating instance_info_cache with network_info: [{"id": "f17b14b5-e93b-4f50-b43d-2137edec2647", "address": "fa:16:3e:4b:c1:ec", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17b14b5-e9", "ovs_interfaceid": "f17b14b5-e93b-4f50-b43d-2137edec2647", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.351 2 DEBUG oslo_concurrency.lockutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Releasing lock "refresh_cache-2e2fb6e1-ace5-45d4-a1ea-b41c2b903193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.351 2 DEBUG nova.compute.manager [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Instance network_info: |[{"id": "f17b14b5-e93b-4f50-b43d-2137edec2647", "address": "fa:16:3e:4b:c1:ec", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17b14b5-e9", "ovs_interfaceid": "f17b14b5-e93b-4f50-b43d-2137edec2647", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.352 2 DEBUG oslo_concurrency.lockutils [req-a333e0b2-0494-4522-ae0a-86bdce537c79 req-4fbe46cf-c0f8-437e-b3a6-923be1638e52 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-2e2fb6e1-ace5-45d4-a1ea-b41c2b903193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.352 2 DEBUG nova.network.neutron [req-a333e0b2-0494-4522-ae0a-86bdce537c79 req-4fbe46cf-c0f8-437e-b3a6-923be1638e52 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Refreshing network info cache for port f17b14b5-e93b-4f50-b43d-2137edec2647 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.358 2 DEBUG nova.virt.libvirt.driver [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Start _get_guest_xml network_info=[{"id": "f17b14b5-e93b-4f50-b43d-2137edec2647", "address": "fa:16:3e:4b:c1:ec", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17b14b5-e9", "ovs_interfaceid": "f17b14b5-e93b-4f50-b43d-2137edec2647", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'mount_device': '/dev/vda', 'attachment_id': 'dc0222ef-41f1-40a5-aa56-618eadda1fd9', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-edea6e66-22de-4de0-a7d6-ac0cdf5d61ad', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'edea6e66-22de-4de0-a7d6-ac0cdf5d61ad', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '2e2fb6e1-ace5-45d4-a1ea-b41c2b903193', 'attached_at': '', 'detached_at': '', 'volume_id': 'edea6e66-22de-4de0-a7d6-ac0cdf5d61ad', 'serial': 'edea6e66-22de-4de0-a7d6-ac0cdf5d61ad'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.363 2 WARNING nova.virt.libvirt.driver [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.369 2 DEBUG nova.virt.libvirt.host [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.369 2 DEBUG nova.virt.libvirt.host [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.373 2 DEBUG nova.virt.libvirt.host [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.373 2 DEBUG nova.virt.libvirt.host [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.374 2 DEBUG nova.virt.libvirt.driver [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.374 2 DEBUG nova.virt.hardware [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.375 2 DEBUG nova.virt.hardware [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.375 2 DEBUG nova.virt.hardware [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.375 2 DEBUG nova.virt.hardware [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.376 2 DEBUG nova.virt.hardware [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.376 2 DEBUG nova.virt.hardware [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.376 2 DEBUG nova.virt.hardware [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.377 2 DEBUG nova.virt.hardware [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.377 2 DEBUG nova.virt.hardware [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.377 2 DEBUG nova.virt.hardware [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.377 2 DEBUG nova.virt.hardware [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.409 2 DEBUG nova.storage.rbd_utils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] rbd image 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.415 2 DEBUG oslo_concurrency.processutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:01:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:01:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/12851962' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:01:06 np0005464891 nova_compute[259907]: 2025-10-01 17:01:06.897 2 DEBUG oslo_concurrency.processutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.017 2 DEBUG nova.virt.libvirt.vif [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T17:00:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-232245827',display_name='tempest-TestVolumeBootPattern-server-232245827',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-232245827',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBClGQJy6/uYcm9xjql2YyXBlCsn5OCvS8WdRyMV8X4KRzO9nDb+WS/fT0IKQE/81aKxS5QGeI9sR/4PyJ3PdLDqhc6ZZs5CYH0MiYsApL0NE4Z3MyQ9wrVPiCTRct79ckg==',key_name='tempest-TestVolumeBootPattern-477590145',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8318b65fa88942a99937a0d198a04a9c',ramdisk_id='',reservation_id='r-syvo7n89',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-582136054',owner_user_name='tempest-TestVolumeBootPattern-582136054-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T17:01:03Z,user_data=None,user_id='1280014cdfb74333ae8d71c78116e646',uuid=2e2fb6e1-ace5-45d4-a1ea-b41c2b903193,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f17b14b5-e93b-4f50-b43d-2137edec2647", "address": "fa:16:3e:4b:c1:ec", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17b14b5-e9", "ovs_interfaceid": "f17b14b5-e93b-4f50-b43d-2137edec2647", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.018 2 DEBUG nova.network.os_vif_util [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converting VIF {"id": "f17b14b5-e93b-4f50-b43d-2137edec2647", "address": "fa:16:3e:4b:c1:ec", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17b14b5-e9", "ovs_interfaceid": "f17b14b5-e93b-4f50-b43d-2137edec2647", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.019 2 DEBUG nova.network.os_vif_util [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:c1:ec,bridge_name='br-int',has_traffic_filtering=True,id=f17b14b5-e93b-4f50-b43d-2137edec2647,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf17b14b5-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.021 2 DEBUG nova.objects.instance [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lazy-loading 'pci_devices' on Instance uuid 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.114 2 DEBUG nova.virt.libvirt.driver [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] End _get_guest_xml xml=<domain type="kvm">
Oct  1 13:01:07 np0005464891 nova_compute[259907]:  <uuid>2e2fb6e1-ace5-45d4-a1ea-b41c2b903193</uuid>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:  <name>instance-00000019</name>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <nova:name>tempest-TestVolumeBootPattern-server-232245827</nova:name>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 17:01:06</nova:creationTime>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 13:01:07 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:        <nova:user uuid="1280014cdfb74333ae8d71c78116e646">tempest-TestVolumeBootPattern-582136054-project-member</nova:user>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:        <nova:project uuid="8318b65fa88942a99937a0d198a04a9c">tempest-TestVolumeBootPattern-582136054</nova:project>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:        <nova:port uuid="f17b14b5-e93b-4f50-b43d-2137edec2647">
Oct  1 13:01:07 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <system>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <entry name="serial">2e2fb6e1-ace5-45d4-a1ea-b41c2b903193</entry>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <entry name="uuid">2e2fb6e1-ace5-45d4-a1ea-b41c2b903193</entry>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    </system>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:  <os>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:  </os>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:  <features>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:  </features>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:  </clock>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:  <devices>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/2e2fb6e1-ace5-45d4-a1ea-b41c2b903193_disk.config">
Oct  1 13:01:07 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      </source>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 13:01:07 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      </auth>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    </disk>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="volumes/volume-edea6e66-22de-4de0-a7d6-ac0cdf5d61ad">
Oct  1 13:01:07 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      </source>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 13:01:07 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      </auth>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <serial>edea6e66-22de-4de0-a7d6-ac0cdf5d61ad</serial>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    </disk>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:4b:c1:ec"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <target dev="tapf17b14b5-e9"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    </interface>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/2e2fb6e1-ace5-45d4-a1ea-b41c2b903193/console.log" append="off"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    </serial>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <video>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    </video>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    </rng>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 13:01:07 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 13:01:07 np0005464891 nova_compute[259907]:  </devices>
Oct  1 13:01:07 np0005464891 nova_compute[259907]: </domain>
Oct  1 13:01:07 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.115 2 DEBUG nova.compute.manager [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Preparing to wait for external event network-vif-plugged-f17b14b5-e93b-4f50-b43d-2137edec2647 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.115 2 DEBUG oslo_concurrency.lockutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "2e2fb6e1-ace5-45d4-a1ea-b41c2b903193-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.115 2 DEBUG oslo_concurrency.lockutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "2e2fb6e1-ace5-45d4-a1ea-b41c2b903193-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.116 2 DEBUG oslo_concurrency.lockutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "2e2fb6e1-ace5-45d4-a1ea-b41c2b903193-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.117 2 DEBUG nova.virt.libvirt.vif [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T17:00:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-232245827',display_name='tempest-TestVolumeBootPattern-server-232245827',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-232245827',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBClGQJy6/uYcm9xjql2YyXBlCsn5OCvS8WdRyMV8X4KRzO9nDb+WS/fT0IKQE/81aKxS5QGeI9sR/4PyJ3PdLDqhc6ZZs5CYH0MiYsApL0NE4Z3MyQ9wrVPiCTRct79ckg==',key_name='tempest-TestVolumeBootPattern-477590145',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8318b65fa88942a99937a0d198a04a9c',ramdisk_id='',reservation_id='r-syvo7n89',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-582136054',owner_user_name='tempest-TestVolumeBootPattern-582136054-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T17:01:03Z,user_data=None,user_id='1280014cdfb74333ae8d71c78116e646',uuid=2e2fb6e1-ace5-45d4-a1ea-b41c2b903193,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f17b14b5-e93b-4f50-b43d-2137edec2647", "address": "fa:16:3e:4b:c1:ec", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17b14b5-e9", "ovs_interfaceid": "f17b14b5-e93b-4f50-b43d-2137edec2647", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.117 2 DEBUG nova.network.os_vif_util [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converting VIF {"id": "f17b14b5-e93b-4f50-b43d-2137edec2647", "address": "fa:16:3e:4b:c1:ec", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17b14b5-e9", "ovs_interfaceid": "f17b14b5-e93b-4f50-b43d-2137edec2647", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.118 2 DEBUG nova.network.os_vif_util [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:c1:ec,bridge_name='br-int',has_traffic_filtering=True,id=f17b14b5-e93b-4f50-b43d-2137edec2647,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf17b14b5-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.119 2 DEBUG os_vif [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:c1:ec,bridge_name='br-int',has_traffic_filtering=True,id=f17b14b5-e93b-4f50-b43d-2137edec2647,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf17b14b5-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.120 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.121 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.125 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf17b14b5-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.126 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf17b14b5-e9, col_values=(('external_ids', {'iface-id': 'f17b14b5-e93b-4f50-b43d-2137edec2647', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:c1:ec', 'vm-uuid': '2e2fb6e1-ace5-45d4-a1ea-b41c2b903193'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:07 np0005464891 NetworkManager[44940]: <info>  [1759338067.1296] manager: (tapf17b14b5-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.141 2 INFO os_vif [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:c1:ec,bridge_name='br-int',has_traffic_filtering=True,id=f17b14b5-e93b-4f50-b43d-2137edec2647,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf17b14b5-e9')#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.196 2 DEBUG nova.virt.libvirt.driver [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.197 2 DEBUG nova.virt.libvirt.driver [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.197 2 DEBUG nova.virt.libvirt.driver [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] No VIF found with MAC fa:16:3e:4b:c1:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.197 2 INFO nova.virt.libvirt.driver [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Using config drive#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.216 2 DEBUG nova.storage.rbd_utils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] rbd image 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:01:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1842: 321 pgs: 321 active+clean; 352 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 4.6 KiB/s wr, 10 op/s
Oct  1 13:01:07 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:07.300 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.825 2 INFO nova.virt.libvirt.driver [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Creating config drive at /var/lib/nova/instances/2e2fb6e1-ace5-45d4-a1ea-b41c2b903193/disk.config#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.832 2 DEBUG oslo_concurrency.processutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2e2fb6e1-ace5-45d4-a1ea-b41c2b903193/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgq8yh_2h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.854 2 DEBUG nova.network.neutron [req-a333e0b2-0494-4522-ae0a-86bdce537c79 req-4fbe46cf-c0f8-437e-b3a6-923be1638e52 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Updated VIF entry in instance network info cache for port f17b14b5-e93b-4f50-b43d-2137edec2647. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.855 2 DEBUG nova.network.neutron [req-a333e0b2-0494-4522-ae0a-86bdce537c79 req-4fbe46cf-c0f8-437e-b3a6-923be1638e52 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Updating instance_info_cache with network_info: [{"id": "f17b14b5-e93b-4f50-b43d-2137edec2647", "address": "fa:16:3e:4b:c1:ec", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17b14b5-e9", "ovs_interfaceid": "f17b14b5-e93b-4f50-b43d-2137edec2647", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.880 2 DEBUG oslo_concurrency.lockutils [req-a333e0b2-0494-4522-ae0a-86bdce537c79 req-4fbe46cf-c0f8-437e-b3a6-923be1638e52 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-2e2fb6e1-ace5-45d4-a1ea-b41c2b903193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:01:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e444 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:01:07 np0005464891 podman[301593]: 2025-10-01 17:01:07.951517299 +0000 UTC m=+0.061353912 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.961 2 DEBUG oslo_concurrency.processutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2e2fb6e1-ace5-45d4-a1ea-b41c2b903193/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgq8yh_2h" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.989 2 DEBUG nova.storage.rbd_utils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] rbd image 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:01:07 np0005464891 nova_compute[259907]: 2025-10-01 17:01:07.993 2 DEBUG oslo_concurrency.processutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2e2fb6e1-ace5-45d4-a1ea-b41c2b903193/disk.config 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:01:08 np0005464891 nova_compute[259907]: 2025-10-01 17:01:08.679 2 DEBUG oslo_concurrency.processutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2e2fb6e1-ace5-45d4-a1ea-b41c2b903193/disk.config 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.686s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:01:08 np0005464891 nova_compute[259907]: 2025-10-01 17:01:08.681 2 INFO nova.virt.libvirt.driver [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Deleting local config drive /var/lib/nova/instances/2e2fb6e1-ace5-45d4-a1ea-b41c2b903193/disk.config because it was imported into RBD.#033[00m
Oct  1 13:01:08 np0005464891 kernel: tapf17b14b5-e9: entered promiscuous mode
Oct  1 13:01:08 np0005464891 ovn_controller[152409]: 2025-10-01T17:01:08Z|00229|binding|INFO|Claiming lport f17b14b5-e93b-4f50-b43d-2137edec2647 for this chassis.
Oct  1 13:01:08 np0005464891 ovn_controller[152409]: 2025-10-01T17:01:08Z|00230|binding|INFO|f17b14b5-e93b-4f50-b43d-2137edec2647: Claiming fa:16:3e:4b:c1:ec 10.100.0.8
Oct  1 13:01:08 np0005464891 NetworkManager[44940]: <info>  [1759338068.7528] manager: (tapf17b14b5-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/128)
Oct  1 13:01:08 np0005464891 nova_compute[259907]: 2025-10-01 17:01:08.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:08.760 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:c1:ec 10.100.0.8'], port_security=['fa:16:3e:4b:c1:ec 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '2e2fb6e1-ace5-45d4-a1ea-b41c2b903193', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce1e1062-6685-441b-8278-667224375e38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8318b65fa88942a99937a0d198a04a9c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fda2d0b4-8d53-4a87-93c6-2f62b1be0cd1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4cdcf07-f310-4572-944c-43bd8e74f763, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=f17b14b5-e93b-4f50-b43d-2137edec2647) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:01:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:08.763 162546 INFO neutron.agent.ovn.metadata.agent [-] Port f17b14b5-e93b-4f50-b43d-2137edec2647 in datapath ce1e1062-6685-441b-8278-667224375e38 bound to our chassis#033[00m
Oct  1 13:01:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:08.765 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce1e1062-6685-441b-8278-667224375e38#033[00m
Oct  1 13:01:08 np0005464891 ovn_controller[152409]: 2025-10-01T17:01:08Z|00231|binding|INFO|Setting lport f17b14b5-e93b-4f50-b43d-2137edec2647 ovn-installed in OVS
Oct  1 13:01:08 np0005464891 ovn_controller[152409]: 2025-10-01T17:01:08Z|00232|binding|INFO|Setting lport f17b14b5-e93b-4f50-b43d-2137edec2647 up in Southbound
Oct  1 13:01:08 np0005464891 nova_compute[259907]: 2025-10-01 17:01:08.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:08 np0005464891 nova_compute[259907]: 2025-10-01 17:01:08.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:08.791 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[dfdb1239-d2b4-41e9-afce-830425c7a52b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:01:08 np0005464891 systemd-machined[214891]: New machine qemu-25-instance-00000019.
Oct  1 13:01:08 np0005464891 systemd-udevd[301665]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 13:01:08 np0005464891 systemd[1]: Started Virtual Machine qemu-25-instance-00000019.
Oct  1 13:01:08 np0005464891 NetworkManager[44940]: <info>  [1759338068.8258] device (tapf17b14b5-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 13:01:08 np0005464891 NetworkManager[44940]: <info>  [1759338068.8266] device (tapf17b14b5-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 13:01:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:08.830 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[c54a83ac-4910-45ed-991e-a50dd29cd7e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:01:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:08.834 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[740fb5a2-7e52-4db1-b9a3-5038358e67a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:01:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:08.871 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[47cd7003-a29c-43ea-ad02-47b91883b036]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:01:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:08.893 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[93bf28d9-f5ba-4b43-bcd2-8bfef4c0e686]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce1e1062-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:87:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 846, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 846, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495780, 'reachable_time': 37017, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 7, 'inoctets': 664, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 7, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 664, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 7, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301672, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:01:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:08.917 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[282ceced-ac12-40b1-85af-1e6423ba9fcc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce1e1062-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495792, 'tstamp': 495792}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301676, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce1e1062-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495795, 'tstamp': 495795}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301676, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:01:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:08.920 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce1e1062-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:01:08 np0005464891 nova_compute[259907]: 2025-10-01 17:01:08.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:08 np0005464891 nova_compute[259907]: 2025-10-01 17:01:08.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:08.924 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce1e1062-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:01:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:08.925 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 13:01:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:08.925 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce1e1062-60, col_values=(('external_ids', {'iface-id': 'd971881d-8d8b-44dc-b0b0-4cd0065c0105'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:01:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:08.926 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.096 2 DEBUG nova.compute.manager [req-c59e1a3f-3f47-4ab7-9a76-c2d5a617289a req-e6dc3dd9-437e-4c1f-b7fa-43d9b07f3f75 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Received event network-vif-plugged-f17b14b5-e93b-4f50-b43d-2137edec2647 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.096 2 DEBUG oslo_concurrency.lockutils [req-c59e1a3f-3f47-4ab7-9a76-c2d5a617289a req-e6dc3dd9-437e-4c1f-b7fa-43d9b07f3f75 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "2e2fb6e1-ace5-45d4-a1ea-b41c2b903193-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.097 2 DEBUG oslo_concurrency.lockutils [req-c59e1a3f-3f47-4ab7-9a76-c2d5a617289a req-e6dc3dd9-437e-4c1f-b7fa-43d9b07f3f75 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "2e2fb6e1-ace5-45d4-a1ea-b41c2b903193-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.097 2 DEBUG oslo_concurrency.lockutils [req-c59e1a3f-3f47-4ab7-9a76-c2d5a617289a req-e6dc3dd9-437e-4c1f-b7fa-43d9b07f3f75 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "2e2fb6e1-ace5-45d4-a1ea-b41c2b903193-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.097 2 DEBUG nova.compute.manager [req-c59e1a3f-3f47-4ab7-9a76-c2d5a617289a req-e6dc3dd9-437e-4c1f-b7fa-43d9b07f3f75 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Processing event network-vif-plugged-f17b14b5-e93b-4f50-b43d-2137edec2647 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 13:01:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1843: 321 pgs: 321 active+clean; 352 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 4.6 KiB/s wr, 10 op/s
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.716 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759338069.715766, 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.716 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] VM Started (Lifecycle Event)#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.718 2 DEBUG nova.compute.manager [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.721 2 DEBUG nova.virt.libvirt.driver [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.725 2 INFO nova.virt.libvirt.driver [-] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Instance spawned successfully.#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.725 2 DEBUG nova.virt.libvirt.driver [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.739 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.742 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.755 2 DEBUG nova.virt.libvirt.driver [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.755 2 DEBUG nova.virt.libvirt.driver [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.756 2 DEBUG nova.virt.libvirt.driver [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.756 2 DEBUG nova.virt.libvirt.driver [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.756 2 DEBUG nova.virt.libvirt.driver [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.757 2 DEBUG nova.virt.libvirt.driver [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.769 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.770 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759338069.7159188, 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.770 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] VM Paused (Lifecycle Event)#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.805 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.809 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759338069.720559, 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.810 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] VM Resumed (Lifecycle Event)#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.845 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.849 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.861 2 INFO nova.compute.manager [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Took 5.36 seconds to spawn the instance on the hypervisor.#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.862 2 DEBUG nova.compute.manager [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.884 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.946 2 INFO nova.compute.manager [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Took 8.09 seconds to build instance.#033[00m
Oct  1 13:01:09 np0005464891 nova_compute[259907]: 2025-10-01 17:01:09.974 2 DEBUG oslo_concurrency.lockutils [None req-fd888241-0e83-41a5-bef4-faf52ad9d2c4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "2e2fb6e1-ace5-45d4-a1ea-b41c2b903193" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:01:11 np0005464891 nova_compute[259907]: 2025-10-01 17:01:11.164 2 DEBUG nova.compute.manager [req-a13019bd-3762-409b-8b79-eea888ba8885 req-db75ae54-5434-47ae-ba18-80a45c18184b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Received event network-vif-plugged-f17b14b5-e93b-4f50-b43d-2137edec2647 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:01:11 np0005464891 nova_compute[259907]: 2025-10-01 17:01:11.165 2 DEBUG oslo_concurrency.lockutils [req-a13019bd-3762-409b-8b79-eea888ba8885 req-db75ae54-5434-47ae-ba18-80a45c18184b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "2e2fb6e1-ace5-45d4-a1ea-b41c2b903193-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:01:11 np0005464891 nova_compute[259907]: 2025-10-01 17:01:11.165 2 DEBUG oslo_concurrency.lockutils [req-a13019bd-3762-409b-8b79-eea888ba8885 req-db75ae54-5434-47ae-ba18-80a45c18184b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "2e2fb6e1-ace5-45d4-a1ea-b41c2b903193-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:01:11 np0005464891 nova_compute[259907]: 2025-10-01 17:01:11.165 2 DEBUG oslo_concurrency.lockutils [req-a13019bd-3762-409b-8b79-eea888ba8885 req-db75ae54-5434-47ae-ba18-80a45c18184b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "2e2fb6e1-ace5-45d4-a1ea-b41c2b903193-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:01:11 np0005464891 nova_compute[259907]: 2025-10-01 17:01:11.165 2 DEBUG nova.compute.manager [req-a13019bd-3762-409b-8b79-eea888ba8885 req-db75ae54-5434-47ae-ba18-80a45c18184b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] No waiting events found dispatching network-vif-plugged-f17b14b5-e93b-4f50-b43d-2137edec2647 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 13:01:11 np0005464891 nova_compute[259907]: 2025-10-01 17:01:11.166 2 WARNING nova.compute.manager [req-a13019bd-3762-409b-8b79-eea888ba8885 req-db75ae54-5434-47ae-ba18-80a45c18184b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Received unexpected event network-vif-plugged-f17b14b5-e93b-4f50-b43d-2137edec2647 for instance with vm_state active and task_state None.#033[00m
Oct  1 13:01:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1844: 321 pgs: 321 active+clean; 352 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 13 KiB/s wr, 5 op/s
Oct  1 13:01:12 np0005464891 nova_compute[259907]: 2025-10-01 17:01:12.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:01:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:01:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:01:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:01:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:01:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:01:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_17:01:12
Oct  1 13:01:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 13:01:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 13:01:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'volumes', 'backups', 'images', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control']
Oct  1 13:01:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 13:01:12 np0005464891 nova_compute[259907]: 2025-10-01 17:01:12.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:12.464 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:01:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:12.464 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:01:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:12.464 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:01:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 13:01:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:01:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 13:01:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:01:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:01:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:01:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:01:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:01:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:01:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:01:12 np0005464891 nova_compute[259907]: 2025-10-01 17:01:12.655 2 DEBUG nova.compute.manager [req-a54fa9fa-3c52-476d-b9ef-2aec1741ad66 req-7422ede9-6d81-4ba5-8764-3921571de4df af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Received event network-changed-f17b14b5-e93b-4f50-b43d-2137edec2647 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:01:12 np0005464891 nova_compute[259907]: 2025-10-01 17:01:12.655 2 DEBUG nova.compute.manager [req-a54fa9fa-3c52-476d-b9ef-2aec1741ad66 req-7422ede9-6d81-4ba5-8764-3921571de4df af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Refreshing instance network info cache due to event network-changed-f17b14b5-e93b-4f50-b43d-2137edec2647. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 13:01:12 np0005464891 nova_compute[259907]: 2025-10-01 17:01:12.655 2 DEBUG oslo_concurrency.lockutils [req-a54fa9fa-3c52-476d-b9ef-2aec1741ad66 req-7422ede9-6d81-4ba5-8764-3921571de4df af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-2e2fb6e1-ace5-45d4-a1ea-b41c2b903193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:01:12 np0005464891 nova_compute[259907]: 2025-10-01 17:01:12.655 2 DEBUG oslo_concurrency.lockutils [req-a54fa9fa-3c52-476d-b9ef-2aec1741ad66 req-7422ede9-6d81-4ba5-8764-3921571de4df af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-2e2fb6e1-ace5-45d4-a1ea-b41c2b903193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:01:12 np0005464891 nova_compute[259907]: 2025-10-01 17:01:12.656 2 DEBUG nova.network.neutron [req-a54fa9fa-3c52-476d-b9ef-2aec1741ad66 req-7422ede9-6d81-4ba5-8764-3921571de4df af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Refreshing network info cache for port f17b14b5-e93b-4f50-b43d-2137edec2647 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 13:01:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e444 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:01:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1845: 321 pgs: 321 active+clean; 352 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 487 KiB/s rd, 12 KiB/s wr, 29 op/s
Oct  1 13:01:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1846: 321 pgs: 321 active+clean; 352 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct  1 13:01:15 np0005464891 nova_compute[259907]: 2025-10-01 17:01:15.340 2 DEBUG nova.network.neutron [req-a54fa9fa-3c52-476d-b9ef-2aec1741ad66 req-7422ede9-6d81-4ba5-8764-3921571de4df af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Updated VIF entry in instance network info cache for port f17b14b5-e93b-4f50-b43d-2137edec2647. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 13:01:15 np0005464891 nova_compute[259907]: 2025-10-01 17:01:15.340 2 DEBUG nova.network.neutron [req-a54fa9fa-3c52-476d-b9ef-2aec1741ad66 req-7422ede9-6d81-4ba5-8764-3921571de4df af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Updating instance_info_cache with network_info: [{"id": "f17b14b5-e93b-4f50-b43d-2137edec2647", "address": "fa:16:3e:4b:c1:ec", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17b14b5-e9", "ovs_interfaceid": "f17b14b5-e93b-4f50-b43d-2137edec2647", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:01:15 np0005464891 nova_compute[259907]: 2025-10-01 17:01:15.366 2 DEBUG oslo_concurrency.lockutils [req-a54fa9fa-3c52-476d-b9ef-2aec1741ad66 req-7422ede9-6d81-4ba5-8764-3921571de4df af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-2e2fb6e1-ace5-45d4-a1ea-b41c2b903193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:01:17 np0005464891 nova_compute[259907]: 2025-10-01 17:01:17.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:17 np0005464891 nova_compute[259907]: 2025-10-01 17:01:17.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1847: 321 pgs: 321 active+clean; 352 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct  1 13:01:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e444 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:01:19 np0005464891 podman[301720]: 2025-10-01 17:01:19.018674312 +0000 UTC m=+0.132655555 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  1 13:01:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1848: 321 pgs: 321 active+clean; 352 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct  1 13:01:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1849: 321 pgs: 321 active+clean; 352 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct  1 13:01:21 np0005464891 ovn_controller[152409]: 2025-10-01T17:01:21Z|00056|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.9 does not match offer 10.100.0.8
Oct  1 13:01:21 np0005464891 ovn_controller[152409]: 2025-10-01T17:01:21Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:4b:c1:ec 10.100.0.8
Oct  1 13:01:22 np0005464891 nova_compute[259907]: 2025-10-01 17:01:22.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:22 np0005464891 nova_compute[259907]: 2025-10-01 17:01:22.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 13:01:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:01:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 13:01:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:01:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 4.896484502181415e-06 of space, bias 1.0, pg target 0.0014689453506544247 quantized to 32 (current 32)
Oct  1 13:01:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:01:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003653922071368106 of space, bias 1.0, pg target 1.0961766214104318 quantized to 32 (current 32)
Oct  1 13:01:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:01:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.9013621638340822e-05 quantized to 32 (current 32)
Oct  1 13:01:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:01:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.19918670028325844 quantized to 32 (current 32)
Oct  1 13:01:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:01:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct  1 13:01:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:01:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:01:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:01:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct  1 13:01:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:01:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct  1 13:01:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:01:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:01:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:01:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct  1 13:01:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e444 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:01:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1850: 321 pgs: 321 active+clean; 361 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 325 KiB/s wr, 90 op/s
Oct  1 13:01:23 np0005464891 podman[301744]: 2025-10-01 17:01:23.958929809 +0000 UTC m=+0.071267994 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct  1 13:01:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1851: 321 pgs: 321 active+clean; 366 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 506 KiB/s wr, 100 op/s
Oct  1 13:01:26 np0005464891 ovn_controller[152409]: 2025-10-01T17:01:26Z|00058|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.9 does not match offer 10.100.0.8
Oct  1 13:01:26 np0005464891 ovn_controller[152409]: 2025-10-01T17:01:26Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:4b:c1:ec 10.100.0.8
Oct  1 13:01:26 np0005464891 ovn_controller[152409]: 2025-10-01T17:01:26Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4b:c1:ec 10.100.0.8
Oct  1 13:01:26 np0005464891 ovn_controller[152409]: 2025-10-01T17:01:26Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4b:c1:ec 10.100.0.8
Oct  1 13:01:27 np0005464891 nova_compute[259907]: 2025-10-01 17:01:27.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:27 np0005464891 nova_compute[259907]: 2025-10-01 17:01:27.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:01:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:01:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 13:01:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:01:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 13:01:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1852: 321 pgs: 321 active+clean; 366 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 506 KiB/s wr, 53 op/s
Oct  1 13:01:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:01:27 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 7d908dd1-df23-45fb-aee1-f6eb51077eb2 does not exist
Oct  1 13:01:27 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 2fa159fc-3148-4123-b13f-a919818ab42f does not exist
Oct  1 13:01:27 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev dca2dbcd-b3c4-49b8-aa6a-0424dc986bed does not exist
Oct  1 13:01:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 13:01:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 13:01:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 13:01:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:01:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:01:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:01:27 np0005464891 podman[301921]: 2025-10-01 17:01:27.498397209 +0000 UTC m=+0.075496340 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  1 13:01:27 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:01:27 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:01:27 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:01:28 np0005464891 podman[302056]: 2025-10-01 17:01:27.986951164 +0000 UTC m=+0.027045212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:01:28 np0005464891 podman[302056]: 2025-10-01 17:01:28.160387476 +0000 UTC m=+0.200481474 container create 86ae810fc7417eb06d42d4dc0afd534fc1cfdfeb9f9c6834d76a4566d339eef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:01:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e444 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:01:28 np0005464891 systemd[1]: Started libpod-conmon-86ae810fc7417eb06d42d4dc0afd534fc1cfdfeb9f9c6834d76a4566d339eef9.scope.
Oct  1 13:01:28 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:01:28 np0005464891 podman[302056]: 2025-10-01 17:01:28.472342762 +0000 UTC m=+0.512436860 container init 86ae810fc7417eb06d42d4dc0afd534fc1cfdfeb9f9c6834d76a4566d339eef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_visvesvaraya, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:01:28 np0005464891 podman[302056]: 2025-10-01 17:01:28.482126731 +0000 UTC m=+0.522220739 container start 86ae810fc7417eb06d42d4dc0afd534fc1cfdfeb9f9c6834d76a4566d339eef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  1 13:01:28 np0005464891 awesome_visvesvaraya[302073]: 167 167
Oct  1 13:01:28 np0005464891 systemd[1]: libpod-86ae810fc7417eb06d42d4dc0afd534fc1cfdfeb9f9c6834d76a4566d339eef9.scope: Deactivated successfully.
Oct  1 13:01:28 np0005464891 podman[302056]: 2025-10-01 17:01:28.523861743 +0000 UTC m=+0.563955831 container attach 86ae810fc7417eb06d42d4dc0afd534fc1cfdfeb9f9c6834d76a4566d339eef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 13:01:28 np0005464891 podman[302056]: 2025-10-01 17:01:28.524861751 +0000 UTC m=+0.564955799 container died 86ae810fc7417eb06d42d4dc0afd534fc1cfdfeb9f9c6834d76a4566d339eef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_visvesvaraya, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct  1 13:01:28 np0005464891 systemd[1]: var-lib-containers-storage-overlay-c1d5162e5fc4bfd76fb4970103cf16261c5844b01f6be38569a3a9b2b8f190ff-merged.mount: Deactivated successfully.
Oct  1 13:01:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1853: 321 pgs: 321 active+clean; 366 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 517 KiB/s wr, 55 op/s
Oct  1 13:01:29 np0005464891 podman[302056]: 2025-10-01 17:01:29.367631441 +0000 UTC m=+1.407725479 container remove 86ae810fc7417eb06d42d4dc0afd534fc1cfdfeb9f9c6834d76a4566d339eef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_visvesvaraya, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:01:29 np0005464891 systemd[1]: libpod-conmon-86ae810fc7417eb06d42d4dc0afd534fc1cfdfeb9f9c6834d76a4566d339eef9.scope: Deactivated successfully.
Oct  1 13:01:29 np0005464891 podman[302099]: 2025-10-01 17:01:29.590602879 +0000 UTC m=+0.024762820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:01:29 np0005464891 podman[302099]: 2025-10-01 17:01:29.69214091 +0000 UTC m=+0.126300841 container create 2144854970fab5b4b5f39d5f4e8d1bdeb6eac4d8f1152384c8dcdefdc11165e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_nash, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:01:29 np0005464891 systemd[1]: Started libpod-conmon-2144854970fab5b4b5f39d5f4e8d1bdeb6eac4d8f1152384c8dcdefdc11165e1.scope.
Oct  1 13:01:29 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:01:29 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879b3590486c2e882f32009b3e056d00af98ab5034d617f803f96748bfbaa121/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:01:29 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879b3590486c2e882f32009b3e056d00af98ab5034d617f803f96748bfbaa121/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:01:29 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879b3590486c2e882f32009b3e056d00af98ab5034d617f803f96748bfbaa121/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:01:29 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879b3590486c2e882f32009b3e056d00af98ab5034d617f803f96748bfbaa121/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:01:29 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879b3590486c2e882f32009b3e056d00af98ab5034d617f803f96748bfbaa121/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 13:01:29 np0005464891 podman[302099]: 2025-10-01 17:01:29.879130223 +0000 UTC m=+0.313290124 container init 2144854970fab5b4b5f39d5f4e8d1bdeb6eac4d8f1152384c8dcdefdc11165e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 13:01:29 np0005464891 podman[302099]: 2025-10-01 17:01:29.888088519 +0000 UTC m=+0.322248410 container start 2144854970fab5b4b5f39d5f4e8d1bdeb6eac4d8f1152384c8dcdefdc11165e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 13:01:29 np0005464891 podman[302099]: 2025-10-01 17:01:29.950737656 +0000 UTC m=+0.384897577 container attach 2144854970fab5b4b5f39d5f4e8d1bdeb6eac4d8f1152384c8dcdefdc11165e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_nash, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 13:01:30 np0005464891 serene_nash[302115]: --> passed data devices: 0 physical, 3 LVM
Oct  1 13:01:30 np0005464891 serene_nash[302115]: --> relative data size: 1.0
Oct  1 13:01:30 np0005464891 serene_nash[302115]: --> All data devices are unavailable
Oct  1 13:01:31 np0005464891 systemd[1]: libpod-2144854970fab5b4b5f39d5f4e8d1bdeb6eac4d8f1152384c8dcdefdc11165e1.scope: Deactivated successfully.
Oct  1 13:01:31 np0005464891 systemd[1]: libpod-2144854970fab5b4b5f39d5f4e8d1bdeb6eac4d8f1152384c8dcdefdc11165e1.scope: Consumed 1.060s CPU time.
Oct  1 13:01:31 np0005464891 podman[302099]: 2025-10-01 17:01:31.008528875 +0000 UTC m=+1.442688786 container died 2144854970fab5b4b5f39d5f4e8d1bdeb6eac4d8f1152384c8dcdefdc11165e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_nash, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  1 13:01:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1854: 321 pgs: 321 active+clean; 370 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 587 KiB/s wr, 56 op/s
Oct  1 13:01:31 np0005464891 systemd[1]: var-lib-containers-storage-overlay-879b3590486c2e882f32009b3e056d00af98ab5034d617f803f96748bfbaa121-merged.mount: Deactivated successfully.
Oct  1 13:01:31 np0005464891 podman[302099]: 2025-10-01 17:01:31.780197687 +0000 UTC m=+2.214357608 container remove 2144854970fab5b4b5f39d5f4e8d1bdeb6eac4d8f1152384c8dcdefdc11165e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_nash, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 13:01:31 np0005464891 systemd[1]: libpod-conmon-2144854970fab5b4b5f39d5f4e8d1bdeb6eac4d8f1152384c8dcdefdc11165e1.scope: Deactivated successfully.
Oct  1 13:01:32 np0005464891 nova_compute[259907]: 2025-10-01 17:01:32.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:32 np0005464891 nova_compute[259907]: 2025-10-01 17:01:32.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:32 np0005464891 podman[302298]: 2025-10-01 17:01:32.511931514 +0000 UTC m=+0.025515290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:01:32 np0005464891 podman[302298]: 2025-10-01 17:01:32.690792864 +0000 UTC m=+0.204376610 container create 9713607eb984670a35e9c5767138a40698da81fb6264c9c291e2623a227064d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goldberg, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 13:01:32 np0005464891 systemd[1]: Started libpod-conmon-9713607eb984670a35e9c5767138a40698da81fb6264c9c291e2623a227064d4.scope.
Oct  1 13:01:32 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:01:33 np0005464891 podman[302298]: 2025-10-01 17:01:33.070201468 +0000 UTC m=+0.583785274 container init 9713607eb984670a35e9c5767138a40698da81fb6264c9c291e2623a227064d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  1 13:01:33 np0005464891 podman[302298]: 2025-10-01 17:01:33.077441667 +0000 UTC m=+0.591025413 container start 9713607eb984670a35e9c5767138a40698da81fb6264c9c291e2623a227064d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:01:33 np0005464891 happy_goldberg[302314]: 167 167
Oct  1 13:01:33 np0005464891 systemd[1]: libpod-9713607eb984670a35e9c5767138a40698da81fb6264c9c291e2623a227064d4.scope: Deactivated successfully.
Oct  1 13:01:33 np0005464891 podman[302298]: 2025-10-01 17:01:33.214706358 +0000 UTC m=+0.728290114 container attach 9713607eb984670a35e9c5767138a40698da81fb6264c9c291e2623a227064d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goldberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:01:33 np0005464891 podman[302298]: 2025-10-01 17:01:33.215074438 +0000 UTC m=+0.728658184 container died 9713607eb984670a35e9c5767138a40698da81fb6264c9c291e2623a227064d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goldberg, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:01:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e444 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:01:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1855: 321 pgs: 321 active+clean; 370 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 590 KiB/s wr, 56 op/s
Oct  1 13:01:33 np0005464891 systemd[1]: var-lib-containers-storage-overlay-77c7350596027f47d4b71b4d478314d210e12ec93e495d7730bf99b9ad0bc0be-merged.mount: Deactivated successfully.
Oct  1 13:01:34 np0005464891 podman[302298]: 2025-10-01 17:01:34.139519884 +0000 UTC m=+1.653103630 container remove 9713607eb984670a35e9c5767138a40698da81fb6264c9c291e2623a227064d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:01:34 np0005464891 systemd[1]: libpod-conmon-9713607eb984670a35e9c5767138a40698da81fb6264c9c291e2623a227064d4.scope: Deactivated successfully.
Oct  1 13:01:34 np0005464891 podman[302338]: 2025-10-01 17:01:34.335289318 +0000 UTC m=+0.025865509 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:01:34 np0005464891 podman[302338]: 2025-10-01 17:01:34.466581745 +0000 UTC m=+0.157157926 container create ae45d1f44b5d79f5941f7718ff53b525416931d82209a54f560e19b5a9dba5ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_franklin, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:01:34 np0005464891 systemd[1]: Started libpod-conmon-ae45d1f44b5d79f5941f7718ff53b525416931d82209a54f560e19b5a9dba5ec.scope.
Oct  1 13:01:34 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:01:34 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503d8295931a9c3ba1178ff7d59d1042e8209d182c09a0a20c8c136877e937c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:01:34 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503d8295931a9c3ba1178ff7d59d1042e8209d182c09a0a20c8c136877e937c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:01:34 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503d8295931a9c3ba1178ff7d59d1042e8209d182c09a0a20c8c136877e937c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:01:34 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503d8295931a9c3ba1178ff7d59d1042e8209d182c09a0a20c8c136877e937c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:01:34 np0005464891 podman[302338]: 2025-10-01 17:01:34.781008539 +0000 UTC m=+0.471584770 container init ae45d1f44b5d79f5941f7718ff53b525416931d82209a54f560e19b5a9dba5ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_franklin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:01:34 np0005464891 podman[302338]: 2025-10-01 17:01:34.788075393 +0000 UTC m=+0.478651554 container start ae45d1f44b5d79f5941f7718ff53b525416931d82209a54f560e19b5a9dba5ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_franklin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:01:34 np0005464891 podman[302338]: 2025-10-01 17:01:34.813405277 +0000 UTC m=+0.503981468 container attach ae45d1f44b5d79f5941f7718ff53b525416931d82209a54f560e19b5a9dba5ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_franklin, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 13:01:35 np0005464891 nova_compute[259907]: 2025-10-01 17:01:35.034 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:01:35 np0005464891 nova_compute[259907]: 2025-10-01 17:01:35.035 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:01:35 np0005464891 nova_compute[259907]: 2025-10-01 17:01:35.228 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:01:35 np0005464891 nova_compute[259907]: 2025-10-01 17:01:35.228 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:01:35 np0005464891 nova_compute[259907]: 2025-10-01 17:01:35.228 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:01:35 np0005464891 nova_compute[259907]: 2025-10-01 17:01:35.228 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 13:01:35 np0005464891 nova_compute[259907]: 2025-10-01 17:01:35.228 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:01:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1856: 321 pgs: 321 active+clean; 370 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 700 KiB/s rd, 266 KiB/s wr, 37 op/s
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]: {
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:    "0": [
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:        {
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "devices": [
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "/dev/loop3"
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            ],
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "lv_name": "ceph_lv0",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "lv_size": "21470642176",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "name": "ceph_lv0",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "tags": {
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.cluster_name": "ceph",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.crush_device_class": "",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.encrypted": "0",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.osd_id": "0",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.type": "block",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.vdo": "0"
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            },
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "type": "block",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "vg_name": "ceph_vg0"
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:        }
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:    ],
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:    "1": [
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:        {
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "devices": [
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "/dev/loop4"
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            ],
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "lv_name": "ceph_lv1",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "lv_size": "21470642176",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "name": "ceph_lv1",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "tags": {
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.cluster_name": "ceph",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.crush_device_class": "",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.encrypted": "0",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.osd_id": "1",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.type": "block",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.vdo": "0"
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            },
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "type": "block",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "vg_name": "ceph_vg1"
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:        }
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:    ],
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:    "2": [
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:        {
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "devices": [
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "/dev/loop5"
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            ],
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "lv_name": "ceph_lv2",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "lv_size": "21470642176",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "name": "ceph_lv2",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "tags": {
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.cluster_name": "ceph",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.crush_device_class": "",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.encrypted": "0",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.osd_id": "2",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.type": "block",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:                "ceph.vdo": "0"
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            },
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "type": "block",
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:            "vg_name": "ceph_vg2"
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:        }
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]:    ]
Oct  1 13:01:35 np0005464891 priceless_franklin[302354]: }
Oct  1 13:01:35 np0005464891 systemd[1]: libpod-ae45d1f44b5d79f5941f7718ff53b525416931d82209a54f560e19b5a9dba5ec.scope: Deactivated successfully.
Oct  1 13:01:35 np0005464891 podman[302338]: 2025-10-01 17:01:35.560572886 +0000 UTC m=+1.251149087 container died ae45d1f44b5d79f5941f7718ff53b525416931d82209a54f560e19b5a9dba5ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_franklin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:01:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:01:35 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1569077520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:01:35 np0005464891 nova_compute[259907]: 2025-10-01 17:01:35.666 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:01:36 np0005464891 nova_compute[259907]: 2025-10-01 17:01:36.043 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 13:01:36 np0005464891 nova_compute[259907]: 2025-10-01 17:01:36.043 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 13:01:36 np0005464891 systemd[1]: var-lib-containers-storage-overlay-503d8295931a9c3ba1178ff7d59d1042e8209d182c09a0a20c8c136877e937c9-merged.mount: Deactivated successfully.
Oct  1 13:01:36 np0005464891 nova_compute[259907]: 2025-10-01 17:01:36.049 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 13:01:36 np0005464891 nova_compute[259907]: 2025-10-01 17:01:36.050 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 13:01:36 np0005464891 nova_compute[259907]: 2025-10-01 17:01:36.289 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 13:01:36 np0005464891 nova_compute[259907]: 2025-10-01 17:01:36.291 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3942MB free_disk=59.98798751831055GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 13:01:36 np0005464891 nova_compute[259907]: 2025-10-01 17:01:36.291 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:01:36 np0005464891 nova_compute[259907]: 2025-10-01 17:01:36.291 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:01:36 np0005464891 nova_compute[259907]: 2025-10-01 17:01:36.650 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance 1affd3fe-8ee0-455e-bcef-79fe7bcb283d actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 13:01:36 np0005464891 nova_compute[259907]: 2025-10-01 17:01:36.651 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 13:01:36 np0005464891 nova_compute[259907]: 2025-10-01 17:01:36.651 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 13:01:36 np0005464891 nova_compute[259907]: 2025-10-01 17:01:36.652 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 13:01:36 np0005464891 nova_compute[259907]: 2025-10-01 17:01:36.737 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:01:37 np0005464891 nova_compute[259907]: 2025-10-01 17:01:37.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:37 np0005464891 nova_compute[259907]: 2025-10-01 17:01:37.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1857: 321 pgs: 321 active+clean; 370 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 85 KiB/s wr, 3 op/s
Oct  1 13:01:37 np0005464891 podman[302338]: 2025-10-01 17:01:37.429304583 +0000 UTC m=+3.119880774 container remove ae45d1f44b5d79f5941f7718ff53b525416931d82209a54f560e19b5a9dba5ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  1 13:01:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:01:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/137272659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:01:37 np0005464891 systemd[1]: libpod-conmon-ae45d1f44b5d79f5941f7718ff53b525416931d82209a54f560e19b5a9dba5ec.scope: Deactivated successfully.
Oct  1 13:01:37 np0005464891 nova_compute[259907]: 2025-10-01 17:01:37.471 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.734s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:01:37 np0005464891 nova_compute[259907]: 2025-10-01 17:01:37.494 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:01:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:01:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3886406325' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:01:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:01:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3886406325' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:01:37 np0005464891 nova_compute[259907]: 2025-10-01 17:01:37.665 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:01:38 np0005464891 podman[302558]: 2025-10-01 17:01:38.068581138 +0000 UTC m=+0.020321588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:01:38 np0005464891 podman[302558]: 2025-10-01 17:01:38.220532761 +0000 UTC m=+0.172273201 container create d8152ba2eea3f64ac157f038562ed39058acc73a0b1f3166a21c5d8f16482b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mclaren, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:01:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e444 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:01:38 np0005464891 systemd[1]: Started libpod-conmon-d8152ba2eea3f64ac157f038562ed39058acc73a0b1f3166a21c5d8f16482b85.scope.
Oct  1 13:01:38 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:01:38 np0005464891 podman[302558]: 2025-10-01 17:01:38.369576934 +0000 UTC m=+0.321317374 container init d8152ba2eea3f64ac157f038562ed39058acc73a0b1f3166a21c5d8f16482b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mclaren, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  1 13:01:38 np0005464891 podman[302572]: 2025-10-01 17:01:38.378593961 +0000 UTC m=+0.110292623 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  1 13:01:38 np0005464891 podman[302558]: 2025-10-01 17:01:38.38255495 +0000 UTC m=+0.334295390 container start d8152ba2eea3f64ac157f038562ed39058acc73a0b1f3166a21c5d8f16482b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mclaren, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:01:38 np0005464891 sweet_mclaren[302575]: 167 167
Oct  1 13:01:38 np0005464891 systemd[1]: libpod-d8152ba2eea3f64ac157f038562ed39058acc73a0b1f3166a21c5d8f16482b85.scope: Deactivated successfully.
Oct  1 13:01:38 np0005464891 podman[302558]: 2025-10-01 17:01:38.517755764 +0000 UTC m=+0.469496214 container attach d8152ba2eea3f64ac157f038562ed39058acc73a0b1f3166a21c5d8f16482b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mclaren, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct  1 13:01:38 np0005464891 podman[302558]: 2025-10-01 17:01:38.518793532 +0000 UTC m=+0.470533982 container died d8152ba2eea3f64ac157f038562ed39058acc73a0b1f3166a21c5d8f16482b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:01:38 np0005464891 systemd[1]: var-lib-containers-storage-overlay-6019d224a7e2c772539886d410b49c3c4a10bb9acd4ad556cbf05ec1e5d38b29-merged.mount: Deactivated successfully.
Oct  1 13:01:38 np0005464891 nova_compute[259907]: 2025-10-01 17:01:38.978 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 13:01:38 np0005464891 nova_compute[259907]: 2025-10-01 17:01:38.979 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:01:39 np0005464891 podman[302558]: 2025-10-01 17:01:39.120065715 +0000 UTC m=+1.071806155 container remove d8152ba2eea3f64ac157f038562ed39058acc73a0b1f3166a21c5d8f16482b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 13:01:39 np0005464891 systemd[1]: libpod-conmon-d8152ba2eea3f64ac157f038562ed39058acc73a0b1f3166a21c5d8f16482b85.scope: Deactivated successfully.
Oct  1 13:01:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1858: 321 pgs: 321 active+clean; 374 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 104 KiB/s rd, 127 KiB/s wr, 5 op/s
Oct  1 13:01:39 np0005464891 podman[302617]: 2025-10-01 17:01:39.330482939 +0000 UTC m=+0.040449649 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:01:39 np0005464891 podman[302617]: 2025-10-01 17:01:39.503654424 +0000 UTC m=+0.213621034 container create 7648c060d9aba706d27dcb1d603b93bb871bc19364c4f41361080525591d4531 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hermann, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:01:39 np0005464891 systemd[1]: Started libpod-conmon-7648c060d9aba706d27dcb1d603b93bb871bc19364c4f41361080525591d4531.scope.
Oct  1 13:01:39 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:01:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7750637be17a6128aba11d159289a7643aeaedd9eb2381b80abc8e7129b898/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:01:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7750637be17a6128aba11d159289a7643aeaedd9eb2381b80abc8e7129b898/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:01:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7750637be17a6128aba11d159289a7643aeaedd9eb2381b80abc8e7129b898/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:01:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7750637be17a6128aba11d159289a7643aeaedd9eb2381b80abc8e7129b898/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:01:39 np0005464891 podman[302617]: 2025-10-01 17:01:39.688878598 +0000 UTC m=+0.398845218 container init 7648c060d9aba706d27dcb1d603b93bb871bc19364c4f41361080525591d4531 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hermann, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 13:01:39 np0005464891 podman[302617]: 2025-10-01 17:01:39.697468603 +0000 UTC m=+0.407435203 container start 7648c060d9aba706d27dcb1d603b93bb871bc19364c4f41361080525591d4531 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hermann, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:01:39 np0005464891 podman[302617]: 2025-10-01 17:01:39.766215937 +0000 UTC m=+0.476182547 container attach 7648c060d9aba706d27dcb1d603b93bb871bc19364c4f41361080525591d4531 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:01:40 np0005464891 condescending_hermann[302634]: {
Oct  1 13:01:40 np0005464891 condescending_hermann[302634]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 13:01:40 np0005464891 condescending_hermann[302634]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:01:40 np0005464891 condescending_hermann[302634]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 13:01:40 np0005464891 condescending_hermann[302634]:        "osd_id": 2,
Oct  1 13:01:40 np0005464891 condescending_hermann[302634]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:01:40 np0005464891 condescending_hermann[302634]:        "type": "bluestore"
Oct  1 13:01:40 np0005464891 condescending_hermann[302634]:    },
Oct  1 13:01:40 np0005464891 condescending_hermann[302634]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 13:01:40 np0005464891 condescending_hermann[302634]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:01:40 np0005464891 condescending_hermann[302634]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 13:01:40 np0005464891 condescending_hermann[302634]:        "osd_id": 0,
Oct  1 13:01:40 np0005464891 condescending_hermann[302634]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:01:40 np0005464891 condescending_hermann[302634]:        "type": "bluestore"
Oct  1 13:01:40 np0005464891 condescending_hermann[302634]:    },
Oct  1 13:01:40 np0005464891 condescending_hermann[302634]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 13:01:40 np0005464891 condescending_hermann[302634]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:01:40 np0005464891 condescending_hermann[302634]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 13:01:40 np0005464891 condescending_hermann[302634]:        "osd_id": 1,
Oct  1 13:01:40 np0005464891 condescending_hermann[302634]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:01:40 np0005464891 condescending_hermann[302634]:        "type": "bluestore"
Oct  1 13:01:40 np0005464891 condescending_hermann[302634]:    }
Oct  1 13:01:40 np0005464891 condescending_hermann[302634]: }
Oct  1 13:01:40 np0005464891 systemd[1]: libpod-7648c060d9aba706d27dcb1d603b93bb871bc19364c4f41361080525591d4531.scope: Deactivated successfully.
Oct  1 13:01:40 np0005464891 podman[302617]: 2025-10-01 17:01:40.708507002 +0000 UTC m=+1.418473612 container died 7648c060d9aba706d27dcb1d603b93bb871bc19364c4f41361080525591d4531 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hermann, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 13:01:40 np0005464891 systemd[1]: libpod-7648c060d9aba706d27dcb1d603b93bb871bc19364c4f41361080525591d4531.scope: Consumed 1.013s CPU time.
Oct  1 13:01:40 np0005464891 systemd[1]: var-lib-containers-storage-overlay-5e7750637be17a6128aba11d159289a7643aeaedd9eb2381b80abc8e7129b898-merged.mount: Deactivated successfully.
Oct  1 13:01:41 np0005464891 podman[302617]: 2025-10-01 17:01:41.181922742 +0000 UTC m=+1.891889382 container remove 7648c060d9aba706d27dcb1d603b93bb871bc19364c4f41361080525591d4531 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 13:01:41 np0005464891 systemd[1]: libpod-conmon-7648c060d9aba706d27dcb1d603b93bb871bc19364c4f41361080525591d4531.scope: Deactivated successfully.
Oct  1 13:01:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 13:01:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:01:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 13:01:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:01:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1859: 321 pgs: 321 active+clean; 374 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 121 KiB/s wr, 5 op/s
Oct  1 13:01:41 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev a0478cc3-1594-4845-8582-c1fe81a33557 does not exist
Oct  1 13:01:41 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 0f6db1b8-7ac0-4f00-870a-015da93aab8d does not exist
Oct  1 13:01:41 np0005464891 nova_compute[259907]: 2025-10-01 17:01:41.745 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:01:41 np0005464891 nova_compute[259907]: 2025-10-01 17:01:41.746 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:01:41 np0005464891 nova_compute[259907]: 2025-10-01 17:01:41.856 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:01:41 np0005464891 nova_compute[259907]: 2025-10-01 17:01:41.857 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 13:01:41 np0005464891 nova_compute[259907]: 2025-10-01 17:01:41.857 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 13:01:42 np0005464891 nova_compute[259907]: 2025-10-01 17:01:42.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:01:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:01:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:01:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:01:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:01:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:01:42 np0005464891 nova_compute[259907]: 2025-10-01 17:01:42.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:42 np0005464891 ovn_controller[152409]: 2025-10-01T17:01:42Z|00233|memory_trim|INFO|Detected inactivity (last active 30023 ms ago): trimming memory
Oct  1 13:01:42 np0005464891 nova_compute[259907]: 2025-10-01 17:01:42.378 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "refresh_cache-1affd3fe-8ee0-455e-bcef-79fe7bcb283d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:01:42 np0005464891 nova_compute[259907]: 2025-10-01 17:01:42.378 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquired lock "refresh_cache-1affd3fe-8ee0-455e-bcef-79fe7bcb283d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:01:42 np0005464891 nova_compute[259907]: 2025-10-01 17:01:42.378 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  1 13:01:42 np0005464891 nova_compute[259907]: 2025-10-01 17:01:42.378 2 DEBUG nova.objects.instance [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1affd3fe-8ee0-455e-bcef-79fe7bcb283d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 13:01:42 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:01:42 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:01:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e444 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:01:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1860: 321 pgs: 321 active+clean; 374 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 52 KiB/s wr, 5 op/s
Oct  1 13:01:43 np0005464891 nova_compute[259907]: 2025-10-01 17:01:43.534 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Updating instance_info_cache with network_info: [{"id": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "address": "fa:16:3e:62:31:95", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee3f438c-5d", "ovs_interfaceid": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:01:43 np0005464891 nova_compute[259907]: 2025-10-01 17:01:43.584 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Releasing lock "refresh_cache-1affd3fe-8ee0-455e-bcef-79fe7bcb283d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:01:43 np0005464891 nova_compute[259907]: 2025-10-01 17:01:43.584 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  1 13:01:43 np0005464891 nova_compute[259907]: 2025-10-01 17:01:43.584 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:01:43 np0005464891 nova_compute[259907]: 2025-10-01 17:01:43.585 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:01:43 np0005464891 nova_compute[259907]: 2025-10-01 17:01:43.585 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:01:43 np0005464891 nova_compute[259907]: 2025-10-01 17:01:43.585 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 13:01:43 np0005464891 nova_compute[259907]: 2025-10-01 17:01:43.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:01:44 np0005464891 nova_compute[259907]: 2025-10-01 17:01:44.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:01:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1861: 321 pgs: 321 active+clean; 374 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 112 KiB/s rd, 48 KiB/s wr, 7 op/s
Oct  1 13:01:46 np0005464891 nova_compute[259907]: 2025-10-01 17:01:46.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:46 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:46.432 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:01:46 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:46.435 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 13:01:46 np0005464891 nova_compute[259907]: 2025-10-01 17:01:46.678 2 DEBUG nova.compute.manager [req-6c58dbf3-c26b-4465-acaa-ac265d7d1d63 req-39fce037-d1f0-42b5-89a7-65a8b5621fe0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Received event network-changed-f17b14b5-e93b-4f50-b43d-2137edec2647 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:01:46 np0005464891 nova_compute[259907]: 2025-10-01 17:01:46.678 2 DEBUG nova.compute.manager [req-6c58dbf3-c26b-4465-acaa-ac265d7d1d63 req-39fce037-d1f0-42b5-89a7-65a8b5621fe0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Refreshing instance network info cache due to event network-changed-f17b14b5-e93b-4f50-b43d-2137edec2647. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 13:01:46 np0005464891 nova_compute[259907]: 2025-10-01 17:01:46.679 2 DEBUG oslo_concurrency.lockutils [req-6c58dbf3-c26b-4465-acaa-ac265d7d1d63 req-39fce037-d1f0-42b5-89a7-65a8b5621fe0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-2e2fb6e1-ace5-45d4-a1ea-b41c2b903193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:01:46 np0005464891 nova_compute[259907]: 2025-10-01 17:01:46.679 2 DEBUG oslo_concurrency.lockutils [req-6c58dbf3-c26b-4465-acaa-ac265d7d1d63 req-39fce037-d1f0-42b5-89a7-65a8b5621fe0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-2e2fb6e1-ace5-45d4-a1ea-b41c2b903193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:01:46 np0005464891 nova_compute[259907]: 2025-10-01 17:01:46.679 2 DEBUG nova.network.neutron [req-6c58dbf3-c26b-4465-acaa-ac265d7d1d63 req-39fce037-d1f0-42b5-89a7-65a8b5621fe0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Refreshing network info cache for port f17b14b5-e93b-4f50-b43d-2137edec2647 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 13:01:46 np0005464891 nova_compute[259907]: 2025-10-01 17:01:46.889 2 DEBUG oslo_concurrency.lockutils [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "2e2fb6e1-ace5-45d4-a1ea-b41c2b903193" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:01:46 np0005464891 nova_compute[259907]: 2025-10-01 17:01:46.890 2 DEBUG oslo_concurrency.lockutils [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "2e2fb6e1-ace5-45d4-a1ea-b41c2b903193" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:01:46 np0005464891 nova_compute[259907]: 2025-10-01 17:01:46.890 2 DEBUG oslo_concurrency.lockutils [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "2e2fb6e1-ace5-45d4-a1ea-b41c2b903193-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:01:46 np0005464891 nova_compute[259907]: 2025-10-01 17:01:46.890 2 DEBUG oslo_concurrency.lockutils [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "2e2fb6e1-ace5-45d4-a1ea-b41c2b903193-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:01:46 np0005464891 nova_compute[259907]: 2025-10-01 17:01:46.891 2 DEBUG oslo_concurrency.lockutils [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "2e2fb6e1-ace5-45d4-a1ea-b41c2b903193-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:01:46 np0005464891 nova_compute[259907]: 2025-10-01 17:01:46.893 2 INFO nova.compute.manager [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Terminating instance#033[00m
Oct  1 13:01:46 np0005464891 nova_compute[259907]: 2025-10-01 17:01:46.894 2 DEBUG nova.compute.manager [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 13:01:47 np0005464891 nova_compute[259907]: 2025-10-01 17:01:47.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:47 np0005464891 kernel: tapf17b14b5-e9 (unregistering): left promiscuous mode
Oct  1 13:01:47 np0005464891 NetworkManager[44940]: <info>  [1759338107.0756] device (tapf17b14b5-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 13:01:47 np0005464891 ovn_controller[152409]: 2025-10-01T17:01:47Z|00234|binding|INFO|Releasing lport f17b14b5-e93b-4f50-b43d-2137edec2647 from this chassis (sb_readonly=0)
Oct  1 13:01:47 np0005464891 ovn_controller[152409]: 2025-10-01T17:01:47Z|00235|binding|INFO|Setting lport f17b14b5-e93b-4f50-b43d-2137edec2647 down in Southbound
Oct  1 13:01:47 np0005464891 nova_compute[259907]: 2025-10-01 17:01:47.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:47 np0005464891 ovn_controller[152409]: 2025-10-01T17:01:47Z|00236|binding|INFO|Removing iface tapf17b14b5-e9 ovn-installed in OVS
Oct  1 13:01:47 np0005464891 nova_compute[259907]: 2025-10-01 17:01:47.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:47.096 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:c1:ec 10.100.0.8'], port_security=['fa:16:3e:4b:c1:ec 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '2e2fb6e1-ace5-45d4-a1ea-b41c2b903193', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce1e1062-6685-441b-8278-667224375e38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8318b65fa88942a99937a0d198a04a9c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fda2d0b4-8d53-4a87-93c6-2f62b1be0cd1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4cdcf07-f310-4572-944c-43bd8e74f763, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=f17b14b5-e93b-4f50-b43d-2137edec2647) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:01:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:47.098 162546 INFO neutron.agent.ovn.metadata.agent [-] Port f17b14b5-e93b-4f50-b43d-2137edec2647 in datapath ce1e1062-6685-441b-8278-667224375e38 unbound from our chassis#033[00m
Oct  1 13:01:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:47.101 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce1e1062-6685-441b-8278-667224375e38#033[00m
Oct  1 13:01:47 np0005464891 nova_compute[259907]: 2025-10-01 17:01:47.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:47.122 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[4150d8c7-509f-4e1c-afab-12bcae48e3f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:01:47 np0005464891 nova_compute[259907]: 2025-10-01 17:01:47.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:47 np0005464891 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Deactivated successfully.
Oct  1 13:01:47 np0005464891 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Consumed 14.337s CPU time.
Oct  1 13:01:47 np0005464891 systemd-machined[214891]: Machine qemu-25-instance-00000019 terminated.
Oct  1 13:01:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:47.161 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[88795509-9596-4149-a06c-6e1ef9aec0f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:01:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:47.165 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[1204ac7f-f855-4866-9c8d-a9f515912422]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:01:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:47.193 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[02a08213-6b1e-40b0-85dc-38eb523f673d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:01:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:47.212 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[7796770f-bdc3-4e53-bcdd-9adb082ce081]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce1e1062-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:87:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 930, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 930, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495780, 'reachable_time': 37017, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 7, 'inoctets': 664, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 7, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 664, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 7, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302742, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:01:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:47.230 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[b71535c5-79ed-4bcd-b629-d6a83709d581]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce1e1062-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495792, 'tstamp': 495792}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302743, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce1e1062-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495795, 'tstamp': 495795}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302743, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:01:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:47.233 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce1e1062-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:01:47 np0005464891 nova_compute[259907]: 2025-10-01 17:01:47.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:47.242 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce1e1062-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:01:47 np0005464891 nova_compute[259907]: 2025-10-01 17:01:47.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:47.242 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 13:01:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:47.243 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce1e1062-60, col_values=(('external_ids', {'iface-id': 'd971881d-8d8b-44dc-b0b0-4cd0065c0105'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:01:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:47.243 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 13:01:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1862: 321 pgs: 321 active+clean; 374 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 112 KiB/s rd, 48 KiB/s wr, 7 op/s
Oct  1 13:01:47 np0005464891 nova_compute[259907]: 2025-10-01 17:01:47.341 2 INFO nova.virt.libvirt.driver [-] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Instance destroyed successfully.#033[00m
Oct  1 13:01:47 np0005464891 nova_compute[259907]: 2025-10-01 17:01:47.341 2 DEBUG nova.objects.instance [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lazy-loading 'resources' on Instance uuid 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 13:01:47 np0005464891 nova_compute[259907]: 2025-10-01 17:01:47.509 2 DEBUG nova.virt.libvirt.vif [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T17:00:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-232245827',display_name='tempest-TestVolumeBootPattern-server-232245827',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-232245827',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBClGQJy6/uYcm9xjql2YyXBlCsn5OCvS8WdRyMV8X4KRzO9nDb+WS/fT0IKQE/81aKxS5QGeI9sR/4PyJ3PdLDqhc6ZZs5CYH0MiYsApL0NE4Z3MyQ9wrVPiCTRct79ckg==',key_name='tempest-TestVolumeBootPattern-477590145',keypairs=<?>,launch_index=0,launched_at=2025-10-01T17:01:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8318b65fa88942a99937a0d198a04a9c',ramdisk_id='',reservation_id='r-syvo7n89',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-582136054',owner_user_name='tempest-TestVolumeBootPattern-582136054-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T17:01:09Z,user_data=None,user_id='1280014cdfb74333ae8d71c78116e646',uuid=2e2fb6e1-ace5-45d4-a1ea-b41c2b903193,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f17b14b5-e93b-4f50-b43d-2137edec2647", "address": "fa:16:3e:4b:c1:ec", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17b14b5-e9", "ovs_interfaceid": "f17b14b5-e93b-4f50-b43d-2137edec2647", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 13:01:47 np0005464891 nova_compute[259907]: 2025-10-01 17:01:47.510 2 DEBUG nova.network.os_vif_util [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converting VIF {"id": "f17b14b5-e93b-4f50-b43d-2137edec2647", "address": "fa:16:3e:4b:c1:ec", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17b14b5-e9", "ovs_interfaceid": "f17b14b5-e93b-4f50-b43d-2137edec2647", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 13:01:47 np0005464891 nova_compute[259907]: 2025-10-01 17:01:47.511 2 DEBUG nova.network.os_vif_util [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4b:c1:ec,bridge_name='br-int',has_traffic_filtering=True,id=f17b14b5-e93b-4f50-b43d-2137edec2647,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf17b14b5-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 13:01:47 np0005464891 nova_compute[259907]: 2025-10-01 17:01:47.511 2 DEBUG os_vif [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4b:c1:ec,bridge_name='br-int',has_traffic_filtering=True,id=f17b14b5-e93b-4f50-b43d-2137edec2647,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf17b14b5-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 13:01:47 np0005464891 nova_compute[259907]: 2025-10-01 17:01:47.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:47 np0005464891 nova_compute[259907]: 2025-10-01 17:01:47.513 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf17b14b5-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:01:47 np0005464891 nova_compute[259907]: 2025-10-01 17:01:47.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:47 np0005464891 nova_compute[259907]: 2025-10-01 17:01:47.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 13:01:47 np0005464891 nova_compute[259907]: 2025-10-01 17:01:47.520 2 INFO os_vif [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4b:c1:ec,bridge_name='br-int',has_traffic_filtering=True,id=f17b14b5-e93b-4f50-b43d-2137edec2647,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf17b14b5-e9')#033[00m
Oct  1 13:01:47 np0005464891 rsyslogd[1011]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 13:01:47 np0005464891 rsyslogd[1011]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 13:01:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e444 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:01:48 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:01:48.437 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:01:48 np0005464891 nova_compute[259907]: 2025-10-01 17:01:48.778 2 DEBUG nova.compute.manager [req-3ff4ac2b-42a9-4e18-82fe-cf46ceebcd06 req-dc387507-71aa-477e-b47f-ed105de87d5a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Received event network-vif-unplugged-f17b14b5-e93b-4f50-b43d-2137edec2647 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:01:48 np0005464891 nova_compute[259907]: 2025-10-01 17:01:48.778 2 DEBUG oslo_concurrency.lockutils [req-3ff4ac2b-42a9-4e18-82fe-cf46ceebcd06 req-dc387507-71aa-477e-b47f-ed105de87d5a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "2e2fb6e1-ace5-45d4-a1ea-b41c2b903193-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:01:48 np0005464891 nova_compute[259907]: 2025-10-01 17:01:48.779 2 DEBUG oslo_concurrency.lockutils [req-3ff4ac2b-42a9-4e18-82fe-cf46ceebcd06 req-dc387507-71aa-477e-b47f-ed105de87d5a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "2e2fb6e1-ace5-45d4-a1ea-b41c2b903193-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:01:48 np0005464891 nova_compute[259907]: 2025-10-01 17:01:48.779 2 DEBUG oslo_concurrency.lockutils [req-3ff4ac2b-42a9-4e18-82fe-cf46ceebcd06 req-dc387507-71aa-477e-b47f-ed105de87d5a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "2e2fb6e1-ace5-45d4-a1ea-b41c2b903193-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:01:48 np0005464891 nova_compute[259907]: 2025-10-01 17:01:48.779 2 DEBUG nova.compute.manager [req-3ff4ac2b-42a9-4e18-82fe-cf46ceebcd06 req-dc387507-71aa-477e-b47f-ed105de87d5a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] No waiting events found dispatching network-vif-unplugged-f17b14b5-e93b-4f50-b43d-2137edec2647 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 13:01:48 np0005464891 nova_compute[259907]: 2025-10-01 17:01:48.779 2 DEBUG nova.compute.manager [req-3ff4ac2b-42a9-4e18-82fe-cf46ceebcd06 req-dc387507-71aa-477e-b47f-ed105de87d5a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Received event network-vif-unplugged-f17b14b5-e93b-4f50-b43d-2137edec2647 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 13:01:48 np0005464891 nova_compute[259907]: 2025-10-01 17:01:48.779 2 DEBUG nova.compute.manager [req-3ff4ac2b-42a9-4e18-82fe-cf46ceebcd06 req-dc387507-71aa-477e-b47f-ed105de87d5a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Received event network-vif-plugged-f17b14b5-e93b-4f50-b43d-2137edec2647 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:01:48 np0005464891 nova_compute[259907]: 2025-10-01 17:01:48.780 2 DEBUG oslo_concurrency.lockutils [req-3ff4ac2b-42a9-4e18-82fe-cf46ceebcd06 req-dc387507-71aa-477e-b47f-ed105de87d5a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "2e2fb6e1-ace5-45d4-a1ea-b41c2b903193-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:01:48 np0005464891 nova_compute[259907]: 2025-10-01 17:01:48.780 2 DEBUG oslo_concurrency.lockutils [req-3ff4ac2b-42a9-4e18-82fe-cf46ceebcd06 req-dc387507-71aa-477e-b47f-ed105de87d5a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "2e2fb6e1-ace5-45d4-a1ea-b41c2b903193-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:01:48 np0005464891 nova_compute[259907]: 2025-10-01 17:01:48.780 2 DEBUG oslo_concurrency.lockutils [req-3ff4ac2b-42a9-4e18-82fe-cf46ceebcd06 req-dc387507-71aa-477e-b47f-ed105de87d5a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "2e2fb6e1-ace5-45d4-a1ea-b41c2b903193-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:01:48 np0005464891 nova_compute[259907]: 2025-10-01 17:01:48.780 2 DEBUG nova.compute.manager [req-3ff4ac2b-42a9-4e18-82fe-cf46ceebcd06 req-dc387507-71aa-477e-b47f-ed105de87d5a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] No waiting events found dispatching network-vif-plugged-f17b14b5-e93b-4f50-b43d-2137edec2647 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 13:01:48 np0005464891 nova_compute[259907]: 2025-10-01 17:01:48.780 2 WARNING nova.compute.manager [req-3ff4ac2b-42a9-4e18-82fe-cf46ceebcd06 req-dc387507-71aa-477e-b47f-ed105de87d5a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Received unexpected event network-vif-plugged-f17b14b5-e93b-4f50-b43d-2137edec2647 for instance with vm_state active and task_state deleting.#033[00m
Oct  1 13:01:48 np0005464891 nova_compute[259907]: 2025-10-01 17:01:48.783 2 DEBUG nova.network.neutron [req-6c58dbf3-c26b-4465-acaa-ac265d7d1d63 req-39fce037-d1f0-42b5-89a7-65a8b5621fe0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Updated VIF entry in instance network info cache for port f17b14b5-e93b-4f50-b43d-2137edec2647. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 13:01:48 np0005464891 nova_compute[259907]: 2025-10-01 17:01:48.784 2 DEBUG nova.network.neutron [req-6c58dbf3-c26b-4465-acaa-ac265d7d1d63 req-39fce037-d1f0-42b5-89a7-65a8b5621fe0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Updating instance_info_cache with network_info: [{"id": "f17b14b5-e93b-4f50-b43d-2137edec2647", "address": "fa:16:3e:4b:c1:ec", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17b14b5-e9", "ovs_interfaceid": "f17b14b5-e93b-4f50-b43d-2137edec2647", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:01:48 np0005464891 nova_compute[259907]: 2025-10-01 17:01:48.879 2 DEBUG oslo_concurrency.lockutils [req-6c58dbf3-c26b-4465-acaa-ac265d7d1d63 req-39fce037-d1f0-42b5-89a7-65a8b5621fe0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-2e2fb6e1-ace5-45d4-a1ea-b41c2b903193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:01:48 np0005464891 nova_compute[259907]: 2025-10-01 17:01:48.971 2 INFO nova.virt.libvirt.driver [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Deleting instance files /var/lib/nova/instances/2e2fb6e1-ace5-45d4-a1ea-b41c2b903193_del#033[00m
Oct  1 13:01:48 np0005464891 nova_compute[259907]: 2025-10-01 17:01:48.972 2 INFO nova.virt.libvirt.driver [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Deletion of /var/lib/nova/instances/2e2fb6e1-ace5-45d4-a1ea-b41c2b903193_del complete#033[00m
Oct  1 13:01:49 np0005464891 nova_compute[259907]: 2025-10-01 17:01:49.062 2 INFO nova.compute.manager [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Took 2.17 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 13:01:49 np0005464891 nova_compute[259907]: 2025-10-01 17:01:49.064 2 DEBUG oslo.service.loopingcall [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 13:01:49 np0005464891 nova_compute[259907]: 2025-10-01 17:01:49.064 2 DEBUG nova.compute.manager [-] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 13:01:49 np0005464891 nova_compute[259907]: 2025-10-01 17:01:49.065 2 DEBUG nova.network.neutron [-] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 13:01:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1863: 321 pgs: 321 active+clean; 374 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 248 KiB/s rd, 48 KiB/s wr, 10 op/s
Oct  1 13:01:49 np0005464891 podman[302776]: 2025-10-01 17:01:49.974612871 +0000 UTC m=+0.085066722 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  1 13:01:50 np0005464891 nova_compute[259907]: 2025-10-01 17:01:50.424 2 DEBUG nova.network.neutron [-] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:01:50 np0005464891 nova_compute[259907]: 2025-10-01 17:01:50.615 2 INFO nova.compute.manager [-] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Took 1.55 seconds to deallocate network for instance.#033[00m
Oct  1 13:01:50 np0005464891 nova_compute[259907]: 2025-10-01 17:01:50.878 2 DEBUG nova.compute.manager [req-0a1c5821-f6f3-4db6-b9ce-851b2187cb79 req-fd9eb150-39d6-43e6-b1e0-dc82259c5827 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Received event network-vif-deleted-f17b14b5-e93b-4f50-b43d-2137edec2647 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:01:50 np0005464891 nova_compute[259907]: 2025-10-01 17:01:50.882 2 INFO nova.compute.manager [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Took 0.27 seconds to detach 1 volumes for instance.#033[00m
Oct  1 13:01:51 np0005464891 nova_compute[259907]: 2025-10-01 17:01:51.071 2 DEBUG oslo_concurrency.lockutils [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:01:51 np0005464891 nova_compute[259907]: 2025-10-01 17:01:51.072 2 DEBUG oslo_concurrency.lockutils [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:01:51 np0005464891 nova_compute[259907]: 2025-10-01 17:01:51.130 2 DEBUG oslo_concurrency.processutils [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:01:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1864: 321 pgs: 321 active+clean; 373 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 218 KiB/s rd, 6.7 KiB/s wr, 17 op/s
Oct  1 13:01:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:01:51 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3693838072' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:01:51 np0005464891 nova_compute[259907]: 2025-10-01 17:01:51.643 2 DEBUG oslo_concurrency.processutils [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:01:51 np0005464891 nova_compute[259907]: 2025-10-01 17:01:51.649 2 DEBUG nova.compute.provider_tree [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:01:51 np0005464891 nova_compute[259907]: 2025-10-01 17:01:51.738 2 DEBUG nova.scheduler.client.report [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:01:51 np0005464891 nova_compute[259907]: 2025-10-01 17:01:51.793 2 DEBUG oslo_concurrency.lockutils [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:01:51 np0005464891 nova_compute[259907]: 2025-10-01 17:01:51.899 2 INFO nova.scheduler.client.report [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Deleted allocations for instance 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193#033[00m
Oct  1 13:01:52 np0005464891 nova_compute[259907]: 2025-10-01 17:01:52.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:52 np0005464891 nova_compute[259907]: 2025-10-01 17:01:52.221 2 DEBUG oslo_concurrency.lockutils [None req-1cbf54dc-06c0-4856-8bb8-48e99587de79 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "2e2fb6e1-ace5-45d4-a1ea-b41c2b903193" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:01:52 np0005464891 nova_compute[259907]: 2025-10-01 17:01:52.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e444 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:01:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1865: 321 pgs: 321 active+clean; 373 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 224 KiB/s rd, 1.6 KiB/s wr, 23 op/s
Oct  1 13:01:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:01:54 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/378932199' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:01:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:01:54 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/378932199' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:01:54 np0005464891 podman[302825]: 2025-10-01 17:01:54.955526822 +0000 UTC m=+0.057970669 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  1 13:01:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1866: 321 pgs: 321 active+clean; 364 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 225 KiB/s rd, 852 B/s wr, 24 op/s
Oct  1 13:01:55 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct  1 13:01:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e444 do_prune osdmap full prune enabled
Oct  1 13:01:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e445 e445: 3 total, 3 up, 3 in
Oct  1 13:01:55 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e445: 3 total, 3 up, 3 in
Oct  1 13:01:57 np0005464891 nova_compute[259907]: 2025-10-01 17:01:57.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1868: 321 pgs: 321 active+clean; 364 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 181 KiB/s rd, 1023 B/s wr, 25 op/s
Oct  1 13:01:57 np0005464891 nova_compute[259907]: 2025-10-01 17:01:57.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:01:57 np0005464891 podman[302845]: 2025-10-01 17:01:57.945318481 +0000 UTC m=+0.060487888 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  1 13:01:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e445 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:01:58 np0005464891 nova_compute[259907]: 2025-10-01 17:01:58.766 2 DEBUG nova.compute.manager [req-be33312b-7e8a-418c-a7a0-4faa050fc590 req-1fc6ed5a-1b4e-4e28-aead-f6c1e5d77c23 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Received event network-changed-ee3f438c-5db5-4c88-b0c0-51835235bc99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:01:58 np0005464891 nova_compute[259907]: 2025-10-01 17:01:58.766 2 DEBUG nova.compute.manager [req-be33312b-7e8a-418c-a7a0-4faa050fc590 req-1fc6ed5a-1b4e-4e28-aead-f6c1e5d77c23 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Refreshing instance network info cache due to event network-changed-ee3f438c-5db5-4c88-b0c0-51835235bc99. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 13:01:58 np0005464891 nova_compute[259907]: 2025-10-01 17:01:58.766 2 DEBUG oslo_concurrency.lockutils [req-be33312b-7e8a-418c-a7a0-4faa050fc590 req-1fc6ed5a-1b4e-4e28-aead-f6c1e5d77c23 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-1affd3fe-8ee0-455e-bcef-79fe7bcb283d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:01:58 np0005464891 nova_compute[259907]: 2025-10-01 17:01:58.766 2 DEBUG oslo_concurrency.lockutils [req-be33312b-7e8a-418c-a7a0-4faa050fc590 req-1fc6ed5a-1b4e-4e28-aead-f6c1e5d77c23 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-1affd3fe-8ee0-455e-bcef-79fe7bcb283d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:01:58 np0005464891 nova_compute[259907]: 2025-10-01 17:01:58.766 2 DEBUG nova.network.neutron [req-be33312b-7e8a-418c-a7a0-4faa050fc590 req-1fc6ed5a-1b4e-4e28-aead-f6c1e5d77c23 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Refreshing network info cache for port ee3f438c-5db5-4c88-b0c0-51835235bc99 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 13:01:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1869: 321 pgs: 321 active+clean; 352 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.2 KiB/s wr, 48 op/s
Oct  1 13:01:59 np0005464891 nova_compute[259907]: 2025-10-01 17:01:59.504 2 DEBUG oslo_concurrency.lockutils [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "1affd3fe-8ee0-455e-bcef-79fe7bcb283d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:01:59 np0005464891 nova_compute[259907]: 2025-10-01 17:01:59.505 2 DEBUG oslo_concurrency.lockutils [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "1affd3fe-8ee0-455e-bcef-79fe7bcb283d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:01:59 np0005464891 nova_compute[259907]: 2025-10-01 17:01:59.505 2 DEBUG oslo_concurrency.lockutils [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "1affd3fe-8ee0-455e-bcef-79fe7bcb283d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:01:59 np0005464891 nova_compute[259907]: 2025-10-01 17:01:59.505 2 DEBUG oslo_concurrency.lockutils [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "1affd3fe-8ee0-455e-bcef-79fe7bcb283d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:01:59 np0005464891 nova_compute[259907]: 2025-10-01 17:01:59.506 2 DEBUG oslo_concurrency.lockutils [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "1affd3fe-8ee0-455e-bcef-79fe7bcb283d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:01:59 np0005464891 nova_compute[259907]: 2025-10-01 17:01:59.507 2 INFO nova.compute.manager [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Terminating instance#033[00m
Oct  1 13:01:59 np0005464891 nova_compute[259907]: 2025-10-01 17:01:59.508 2 DEBUG nova.compute.manager [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 13:02:00 np0005464891 kernel: tapee3f438c-5d (unregistering): left promiscuous mode
Oct  1 13:02:00 np0005464891 NetworkManager[44940]: <info>  [1759338120.5851] device (tapee3f438c-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 13:02:00 np0005464891 ovn_controller[152409]: 2025-10-01T17:02:00Z|00237|binding|INFO|Releasing lport ee3f438c-5db5-4c88-b0c0-51835235bc99 from this chassis (sb_readonly=0)
Oct  1 13:02:00 np0005464891 ovn_controller[152409]: 2025-10-01T17:02:00Z|00238|binding|INFO|Setting lport ee3f438c-5db5-4c88-b0c0-51835235bc99 down in Southbound
Oct  1 13:02:00 np0005464891 ovn_controller[152409]: 2025-10-01T17:02:00Z|00239|binding|INFO|Removing iface tapee3f438c-5d ovn-installed in OVS
Oct  1 13:02:00 np0005464891 nova_compute[259907]: 2025-10-01 17:02:00.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:00 np0005464891 nova_compute[259907]: 2025-10-01 17:02:00.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:00 np0005464891 nova_compute[259907]: 2025-10-01 17:02:00.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:02:00.628 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:31:95 10.100.0.9'], port_security=['fa:16:3e:62:31:95 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1affd3fe-8ee0-455e-bcef-79fe7bcb283d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce1e1062-6685-441b-8278-667224375e38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8318b65fa88942a99937a0d198a04a9c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fda2d0b4-8d53-4a87-93c6-2f62b1be0cd1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4cdcf07-f310-4572-944c-43bd8e74f763, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=ee3f438c-5db5-4c88-b0c0-51835235bc99) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:02:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:02:00.631 162546 INFO neutron.agent.ovn.metadata.agent [-] Port ee3f438c-5db5-4c88-b0c0-51835235bc99 in datapath ce1e1062-6685-441b-8278-667224375e38 unbound from our chassis#033[00m
Oct  1 13:02:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:02:00.632 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce1e1062-6685-441b-8278-667224375e38, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 13:02:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:02:00.633 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[e47370fe-b2b8-427d-9515-6a8e8b251d5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:02:00 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:02:00.634 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce1e1062-6685-441b-8278-667224375e38 namespace which is not needed anymore#033[00m
Oct  1 13:02:00 np0005464891 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Deactivated successfully.
Oct  1 13:02:00 np0005464891 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Consumed 17.742s CPU time.
Oct  1 13:02:00 np0005464891 systemd-machined[214891]: Machine qemu-24-instance-00000018 terminated.
Oct  1 13:02:00 np0005464891 nova_compute[259907]: 2025-10-01 17:02:00.742 2 INFO nova.virt.libvirt.driver [-] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Instance destroyed successfully.#033[00m
Oct  1 13:02:00 np0005464891 nova_compute[259907]: 2025-10-01 17:02:00.744 2 DEBUG nova.objects.instance [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lazy-loading 'resources' on Instance uuid 1affd3fe-8ee0-455e-bcef-79fe7bcb283d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 13:02:00 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[299942]: [NOTICE]   (299946) : haproxy version is 2.8.14-c23fe91
Oct  1 13:02:00 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[299942]: [NOTICE]   (299946) : path to executable is /usr/sbin/haproxy
Oct  1 13:02:00 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[299942]: [WARNING]  (299946) : Exiting Master process...
Oct  1 13:02:00 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[299942]: [ALERT]    (299946) : Current worker (299948) exited with code 143 (Terminated)
Oct  1 13:02:00 np0005464891 neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38[299942]: [WARNING]  (299946) : All workers exited. Exiting... (0)
Oct  1 13:02:00 np0005464891 systemd[1]: libpod-1188e28848f76a76b937c58d829079c068e0c5aab34b2d6e0b1ac1f9589a9303.scope: Deactivated successfully.
Oct  1 13:02:00 np0005464891 podman[302890]: 2025-10-01 17:02:00.799488127 +0000 UTC m=+0.074112652 container died 1188e28848f76a76b937c58d829079c068e0c5aab34b2d6e0b1ac1f9589a9303 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  1 13:02:00 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1188e28848f76a76b937c58d829079c068e0c5aab34b2d6e0b1ac1f9589a9303-userdata-shm.mount: Deactivated successfully.
Oct  1 13:02:00 np0005464891 systemd[1]: var-lib-containers-storage-overlay-cae930b0a08bfc282e4641ba26fb07dbb846f9a470c55d4ccf78c2975089063f-merged.mount: Deactivated successfully.
Oct  1 13:02:01 np0005464891 nova_compute[259907]: 2025-10-01 17:02:01.017 2 DEBUG nova.virt.libvirt.vif [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T17:00:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1755453090',display_name='tempest-TestVolumeBootPattern-server-1755453090',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1755453090',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBClGQJy6/uYcm9xjql2YyXBlCsn5OCvS8WdRyMV8X4KRzO9nDb+WS/fT0IKQE/81aKxS5QGeI9sR/4PyJ3PdLDqhc6ZZs5CYH0MiYsApL0NE4Z3MyQ9wrVPiCTRct79ckg==',key_name='tempest-TestVolumeBootPattern-477590145',keypairs=<?>,launch_index=0,launched_at=2025-10-01T17:00:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8318b65fa88942a99937a0d198a04a9c',ramdisk_id='',reservation_id='r-555c3v3p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-582136054',owner_user_name='tempest-TestVolumeBootPattern-582136054-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T17:00:11Z,user_data=None,user_id='1280014cdfb74333ae8d71c78116e646',uuid=1affd3fe-8ee0-455e-bcef-79fe7bcb283d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "address": "fa:16:3e:62:31:95", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee3f438c-5d", "ovs_interfaceid": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 13:02:01 np0005464891 nova_compute[259907]: 2025-10-01 17:02:01.017 2 DEBUG nova.network.os_vif_util [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converting VIF {"id": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "address": "fa:16:3e:62:31:95", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee3f438c-5d", "ovs_interfaceid": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 13:02:01 np0005464891 nova_compute[259907]: 2025-10-01 17:02:01.018 2 DEBUG nova.network.os_vif_util [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:62:31:95,bridge_name='br-int',has_traffic_filtering=True,id=ee3f438c-5db5-4c88-b0c0-51835235bc99,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee3f438c-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 13:02:01 np0005464891 nova_compute[259907]: 2025-10-01 17:02:01.018 2 DEBUG os_vif [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:62:31:95,bridge_name='br-int',has_traffic_filtering=True,id=ee3f438c-5db5-4c88-b0c0-51835235bc99,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee3f438c-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 13:02:01 np0005464891 nova_compute[259907]: 2025-10-01 17:02:01.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:01 np0005464891 nova_compute[259907]: 2025-10-01 17:02:01.020 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee3f438c-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:02:01 np0005464891 nova_compute[259907]: 2025-10-01 17:02:01.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:01 np0005464891 nova_compute[259907]: 2025-10-01 17:02:01.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:01 np0005464891 nova_compute[259907]: 2025-10-01 17:02:01.026 2 INFO os_vif [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:62:31:95,bridge_name='br-int',has_traffic_filtering=True,id=ee3f438c-5db5-4c88-b0c0-51835235bc99,network=Network(ce1e1062-6685-441b-8278-667224375e38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee3f438c-5d')#033[00m
Oct  1 13:02:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1870: 321 pgs: 321 active+clean; 352 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.3 KiB/s wr, 39 op/s
Oct  1 13:02:01 np0005464891 podman[302890]: 2025-10-01 17:02:01.412775499 +0000 UTC m=+0.687400014 container cleanup 1188e28848f76a76b937c58d829079c068e0c5aab34b2d6e0b1ac1f9589a9303 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  1 13:02:01 np0005464891 nova_compute[259907]: 2025-10-01 17:02:01.415 2 DEBUG nova.network.neutron [req-be33312b-7e8a-418c-a7a0-4faa050fc590 req-1fc6ed5a-1b4e-4e28-aead-f6c1e5d77c23 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Updated VIF entry in instance network info cache for port ee3f438c-5db5-4c88-b0c0-51835235bc99. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 13:02:01 np0005464891 nova_compute[259907]: 2025-10-01 17:02:01.416 2 DEBUG nova.network.neutron [req-be33312b-7e8a-418c-a7a0-4faa050fc590 req-1fc6ed5a-1b4e-4e28-aead-f6c1e5d77c23 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Updating instance_info_cache with network_info: [{"id": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "address": "fa:16:3e:62:31:95", "network": {"id": "ce1e1062-6685-441b-8278-667224375e38", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2015114123-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318b65fa88942a99937a0d198a04a9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee3f438c-5d", "ovs_interfaceid": "ee3f438c-5db5-4c88-b0c0-51835235bc99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:02:01 np0005464891 systemd[1]: libpod-conmon-1188e28848f76a76b937c58d829079c068e0c5aab34b2d6e0b1ac1f9589a9303.scope: Deactivated successfully.
Oct  1 13:02:01 np0005464891 nova_compute[259907]: 2025-10-01 17:02:01.800 2 DEBUG nova.compute.manager [req-cba4629f-dc7f-4307-b00b-2540de7823af req-2d3aab33-97c5-44ca-8547-9837a83b16d3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Received event network-vif-unplugged-ee3f438c-5db5-4c88-b0c0-51835235bc99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:02:01 np0005464891 nova_compute[259907]: 2025-10-01 17:02:01.801 2 DEBUG oslo_concurrency.lockutils [req-cba4629f-dc7f-4307-b00b-2540de7823af req-2d3aab33-97c5-44ca-8547-9837a83b16d3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "1affd3fe-8ee0-455e-bcef-79fe7bcb283d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:02:01 np0005464891 nova_compute[259907]: 2025-10-01 17:02:01.801 2 DEBUG oslo_concurrency.lockutils [req-cba4629f-dc7f-4307-b00b-2540de7823af req-2d3aab33-97c5-44ca-8547-9837a83b16d3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "1affd3fe-8ee0-455e-bcef-79fe7bcb283d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:02:01 np0005464891 nova_compute[259907]: 2025-10-01 17:02:01.802 2 DEBUG oslo_concurrency.lockutils [req-cba4629f-dc7f-4307-b00b-2540de7823af req-2d3aab33-97c5-44ca-8547-9837a83b16d3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "1affd3fe-8ee0-455e-bcef-79fe7bcb283d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:02:01 np0005464891 nova_compute[259907]: 2025-10-01 17:02:01.802 2 DEBUG nova.compute.manager [req-cba4629f-dc7f-4307-b00b-2540de7823af req-2d3aab33-97c5-44ca-8547-9837a83b16d3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] No waiting events found dispatching network-vif-unplugged-ee3f438c-5db5-4c88-b0c0-51835235bc99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 13:02:01 np0005464891 nova_compute[259907]: 2025-10-01 17:02:01.803 2 DEBUG nova.compute.manager [req-cba4629f-dc7f-4307-b00b-2540de7823af req-2d3aab33-97c5-44ca-8547-9837a83b16d3 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Received event network-vif-unplugged-ee3f438c-5db5-4c88-b0c0-51835235bc99 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 13:02:01 np0005464891 nova_compute[259907]: 2025-10-01 17:02:01.888 2 DEBUG oslo_concurrency.lockutils [req-be33312b-7e8a-418c-a7a0-4faa050fc590 req-1fc6ed5a-1b4e-4e28-aead-f6c1e5d77c23 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-1affd3fe-8ee0-455e-bcef-79fe7bcb283d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:02:02 np0005464891 nova_compute[259907]: 2025-10-01 17:02:02.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:02 np0005464891 podman[302948]: 2025-10-01 17:02:02.068732639 +0000 UTC m=+0.630831133 container remove 1188e28848f76a76b937c58d829079c068e0c5aab34b2d6e0b1ac1f9589a9303 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 13:02:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:02:02.074 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[283fd31a-0f95-48c5-b433-c53fb657fc0c]: (4, ('Wed Oct  1 05:02:00 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38 (1188e28848f76a76b937c58d829079c068e0c5aab34b2d6e0b1ac1f9589a9303)\n1188e28848f76a76b937c58d829079c068e0c5aab34b2d6e0b1ac1f9589a9303\nWed Oct  1 05:02:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ce1e1062-6685-441b-8278-667224375e38 (1188e28848f76a76b937c58d829079c068e0c5aab34b2d6e0b1ac1f9589a9303)\n1188e28848f76a76b937c58d829079c068e0c5aab34b2d6e0b1ac1f9589a9303\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:02:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:02:02.076 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[ba5a331d-bb2f-4b0e-b608-965f129ca25c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:02:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:02:02.077 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce1e1062-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:02:02 np0005464891 kernel: tapce1e1062-60: left promiscuous mode
Oct  1 13:02:02 np0005464891 nova_compute[259907]: 2025-10-01 17:02:02.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:02 np0005464891 nova_compute[259907]: 2025-10-01 17:02:02.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:02:02.095 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[a4dbef97-1d76-414a-8db8-a3a3daf71bf3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:02:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:02:02.122 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[3937b332-e787-4ed1-a00a-0367f809b4f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:02:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:02:02.123 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[248d19e5-7eff-46d1-851f-52af81c468cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:02:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:02:02.138 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[62fcf8b6-764c-4346-906f-7ae2918a8ae3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495774, 'reachable_time': 18543, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302963, 'error': None, 'target': 'ovnmeta-ce1e1062-6685-441b-8278-667224375e38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:02:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:02:02.140 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce1e1062-6685-441b-8278-667224375e38 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 13:02:02 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:02:02.141 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[7793e000-7e6a-4d78-a747-1d753904f087]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:02:02 np0005464891 systemd[1]: run-netns-ovnmeta\x2dce1e1062\x2d6685\x2d441b\x2d8278\x2d667224375e38.mount: Deactivated successfully.
Oct  1 13:02:02 np0005464891 nova_compute[259907]: 2025-10-01 17:02:02.339 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759338107.3387694, 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 13:02:02 np0005464891 nova_compute[259907]: 2025-10-01 17:02:02.340 2 INFO nova.compute.manager [-] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] VM Stopped (Lifecycle Event)#033[00m
Oct  1 13:02:02 np0005464891 nova_compute[259907]: 2025-10-01 17:02:02.541 2 DEBUG nova.compute.manager [None req-728bea6c-ace8-47d0-8dad-54f88bb11683 - - - - - -] [instance: 2e2fb6e1-ace5-45d4-a1ea-b41c2b903193] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 13:02:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1871: 321 pgs: 321 active+clean; 352 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 37 op/s
Oct  1 13:02:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e445 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:02:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e445 do_prune osdmap full prune enabled
Oct  1 13:02:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e446 e446: 3 total, 3 up, 3 in
Oct  1 13:02:03 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e446: 3 total, 3 up, 3 in
Oct  1 13:02:04 np0005464891 nova_compute[259907]: 2025-10-01 17:02:04.002 2 DEBUG nova.compute.manager [req-42a7d143-b1e9-4668-b2c6-2e4486d648e1 req-edb4780f-9e08-43d7-b94f-c454f2d54204 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Received event network-vif-plugged-ee3f438c-5db5-4c88-b0c0-51835235bc99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:02:04 np0005464891 nova_compute[259907]: 2025-10-01 17:02:04.003 2 DEBUG oslo_concurrency.lockutils [req-42a7d143-b1e9-4668-b2c6-2e4486d648e1 req-edb4780f-9e08-43d7-b94f-c454f2d54204 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "1affd3fe-8ee0-455e-bcef-79fe7bcb283d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:02:04 np0005464891 nova_compute[259907]: 2025-10-01 17:02:04.003 2 DEBUG oslo_concurrency.lockutils [req-42a7d143-b1e9-4668-b2c6-2e4486d648e1 req-edb4780f-9e08-43d7-b94f-c454f2d54204 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "1affd3fe-8ee0-455e-bcef-79fe7bcb283d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:02:04 np0005464891 nova_compute[259907]: 2025-10-01 17:02:04.003 2 DEBUG oslo_concurrency.lockutils [req-42a7d143-b1e9-4668-b2c6-2e4486d648e1 req-edb4780f-9e08-43d7-b94f-c454f2d54204 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "1affd3fe-8ee0-455e-bcef-79fe7bcb283d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:02:04 np0005464891 nova_compute[259907]: 2025-10-01 17:02:04.003 2 DEBUG nova.compute.manager [req-42a7d143-b1e9-4668-b2c6-2e4486d648e1 req-edb4780f-9e08-43d7-b94f-c454f2d54204 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] No waiting events found dispatching network-vif-plugged-ee3f438c-5db5-4c88-b0c0-51835235bc99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 13:02:04 np0005464891 nova_compute[259907]: 2025-10-01 17:02:04.003 2 WARNING nova.compute.manager [req-42a7d143-b1e9-4668-b2c6-2e4486d648e1 req-edb4780f-9e08-43d7-b94f-c454f2d54204 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Received unexpected event network-vif-plugged-ee3f438c-5db5-4c88-b0c0-51835235bc99 for instance with vm_state active and task_state deleting.#033[00m
Oct  1 13:02:04 np0005464891 nova_compute[259907]: 2025-10-01 17:02:04.703 2 INFO nova.virt.libvirt.driver [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Deleting instance files /var/lib/nova/instances/1affd3fe-8ee0-455e-bcef-79fe7bcb283d_del#033[00m
Oct  1 13:02:04 np0005464891 nova_compute[259907]: 2025-10-01 17:02:04.704 2 INFO nova.virt.libvirt.driver [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Deletion of /var/lib/nova/instances/1affd3fe-8ee0-455e-bcef-79fe7bcb283d_del complete#033[00m
Oct  1 13:02:04 np0005464891 nova_compute[259907]: 2025-10-01 17:02:04.782 2 INFO nova.compute.manager [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Took 5.27 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 13:02:04 np0005464891 nova_compute[259907]: 2025-10-01 17:02:04.783 2 DEBUG oslo.service.loopingcall [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 13:02:04 np0005464891 nova_compute[259907]: 2025-10-01 17:02:04.783 2 DEBUG nova.compute.manager [-] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 13:02:04 np0005464891 nova_compute[259907]: 2025-10-01 17:02:04.783 2 DEBUG nova.network.neutron [-] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 13:02:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1873: 321 pgs: 321 active+clean; 352 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.4 KiB/s wr, 39 op/s
Oct  1 13:02:06 np0005464891 nova_compute[259907]: 2025-10-01 17:02:06.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:06 np0005464891 nova_compute[259907]: 2025-10-01 17:02:06.026 2 DEBUG nova.network.neutron [-] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:02:06 np0005464891 nova_compute[259907]: 2025-10-01 17:02:06.157 2 DEBUG nova.compute.manager [req-0570385e-fa22-45aa-813d-5afc57a2a0ad req-56abe6f1-38f4-4201-9394-ca092078fb1e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Received event network-vif-deleted-ee3f438c-5db5-4c88-b0c0-51835235bc99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:02:06 np0005464891 nova_compute[259907]: 2025-10-01 17:02:06.158 2 INFO nova.compute.manager [req-0570385e-fa22-45aa-813d-5afc57a2a0ad req-56abe6f1-38f4-4201-9394-ca092078fb1e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Neutron deleted interface ee3f438c-5db5-4c88-b0c0-51835235bc99; detaching it from the instance and deleting it from the info cache#033[00m
Oct  1 13:02:06 np0005464891 nova_compute[259907]: 2025-10-01 17:02:06.158 2 DEBUG nova.network.neutron [req-0570385e-fa22-45aa-813d-5afc57a2a0ad req-56abe6f1-38f4-4201-9394-ca092078fb1e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:02:06 np0005464891 nova_compute[259907]: 2025-10-01 17:02:06.160 2 INFO nova.compute.manager [-] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Took 1.38 seconds to deallocate network for instance.#033[00m
Oct  1 13:02:06 np0005464891 nova_compute[259907]: 2025-10-01 17:02:06.192 2 DEBUG nova.compute.manager [req-0570385e-fa22-45aa-813d-5afc57a2a0ad req-56abe6f1-38f4-4201-9394-ca092078fb1e af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Detach interface failed, port_id=ee3f438c-5db5-4c88-b0c0-51835235bc99, reason: Instance 1affd3fe-8ee0-455e-bcef-79fe7bcb283d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  1 13:02:06 np0005464891 nova_compute[259907]: 2025-10-01 17:02:06.366 2 INFO nova.compute.manager [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Took 0.20 seconds to detach 1 volumes for instance.#033[00m
Oct  1 13:02:06 np0005464891 nova_compute[259907]: 2025-10-01 17:02:06.488 2 DEBUG oslo_concurrency.lockutils [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:02:06 np0005464891 nova_compute[259907]: 2025-10-01 17:02:06.489 2 DEBUG oslo_concurrency.lockutils [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:02:06 np0005464891 nova_compute[259907]: 2025-10-01 17:02:06.543 2 DEBUG oslo_concurrency.processutils [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:02:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:02:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2740564394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:02:06 np0005464891 nova_compute[259907]: 2025-10-01 17:02:06.938 2 DEBUG oslo_concurrency.processutils [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:02:06 np0005464891 nova_compute[259907]: 2025-10-01 17:02:06.946 2 DEBUG nova.compute.provider_tree [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:02:06 np0005464891 nova_compute[259907]: 2025-10-01 17:02:06.976 2 DEBUG nova.scheduler.client.report [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:02:07 np0005464891 nova_compute[259907]: 2025-10-01 17:02:07.012 2 DEBUG oslo_concurrency.lockutils [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:02:07 np0005464891 nova_compute[259907]: 2025-10-01 17:02:07.040 2 INFO nova.scheduler.client.report [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Deleted allocations for instance 1affd3fe-8ee0-455e-bcef-79fe7bcb283d#033[00m
Oct  1 13:02:07 np0005464891 nova_compute[259907]: 2025-10-01 17:02:07.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:07 np0005464891 nova_compute[259907]: 2025-10-01 17:02:07.132 2 DEBUG oslo_concurrency.lockutils [None req-db3d07e3-3381-4e55-9607-e3b18a56d8f4 1280014cdfb74333ae8d71c78116e646 8318b65fa88942a99937a0d198a04a9c - - default default] Lock "1affd3fe-8ee0-455e-bcef-79fe7bcb283d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:02:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1874: 321 pgs: 321 active+clean; 352 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.4 KiB/s wr, 37 op/s
Oct  1 13:02:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e446 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:02:08 np0005464891 podman[302989]: 2025-10-01 17:02:08.946163137 +0000 UTC m=+0.055645165 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 13:02:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1875: 321 pgs: 321 active+clean; 352 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct  1 13:02:11 np0005464891 nova_compute[259907]: 2025-10-01 17:02:11.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1876: 321 pgs: 321 active+clean; 352 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 716 B/s wr, 15 op/s
Oct  1 13:02:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:02:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:02:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:02:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:02:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:02:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:02:12 np0005464891 nova_compute[259907]: 2025-10-01 17:02:12.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_17:02:12
Oct  1 13:02:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 13:02:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 13:02:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'volumes', 'default.rgw.control', '.rgw.root']
Oct  1 13:02:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 13:02:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:02:12.464 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:02:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:02:12.465 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:02:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:02:12.465 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:02:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 13:02:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:02:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 13:02:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:02:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:02:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:02:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:02:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:02:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:02:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:02:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:02:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2463423896' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:02:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:02:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2463423896' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:02:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1877: 321 pgs: 321 active+clean; 310 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 614 B/s wr, 22 op/s
Oct  1 13:02:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e446 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:02:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1878: 321 pgs: 321 active+clean; 271 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.1 KiB/s wr, 24 op/s
Oct  1 13:02:15 np0005464891 nova_compute[259907]: 2025-10-01 17:02:15.740 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759338120.7392187, 1affd3fe-8ee0-455e-bcef-79fe7bcb283d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 13:02:15 np0005464891 nova_compute[259907]: 2025-10-01 17:02:15.741 2 INFO nova.compute.manager [-] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] VM Stopped (Lifecycle Event)#033[00m
Oct  1 13:02:15 np0005464891 nova_compute[259907]: 2025-10-01 17:02:15.785 2 DEBUG nova.compute.manager [None req-0fbebd96-2856-4e5a-a806-813663cf9133 - - - - - -] [instance: 1affd3fe-8ee0-455e-bcef-79fe7bcb283d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 13:02:16 np0005464891 nova_compute[259907]: 2025-10-01 17:02:16.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:17 np0005464891 nova_compute[259907]: 2025-10-01 17:02:17.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:17 np0005464891 nova_compute[259907]: 2025-10-01 17:02:17.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:17 np0005464891 nova_compute[259907]: 2025-10-01 17:02:17.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1879: 321 pgs: 321 active+clean; 271 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.1 KiB/s wr, 22 op/s
Oct  1 13:02:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e446 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:02:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1880: 321 pgs: 321 active+clean; 271 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.1 KiB/s wr, 22 op/s
Oct  1 13:02:20 np0005464891 podman[303018]: 2025-10-01 17:02:20.996739403 +0000 UTC m=+0.111421603 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Oct  1 13:02:21 np0005464891 nova_compute[259907]: 2025-10-01 17:02:21.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1881: 321 pgs: 321 active+clean; 271 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 597 B/s wr, 16 op/s
Oct  1 13:02:22 np0005464891 nova_compute[259907]: 2025-10-01 17:02:22.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 13:02:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:02:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 13:02:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:02:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 13:02:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:02:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894458247867422 of space, bias 1.0, pg target 0.8683374743602266 quantized to 32 (current 32)
Oct  1 13:02:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:02:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 13:02:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:02:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct  1 13:02:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:02:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 13:02:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:02:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:02:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:02:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 13:02:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:02:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 13:02:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:02:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:02:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:02:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 13:02:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1882: 321 pgs: 321 active+clean; 271 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 597 B/s wr, 16 op/s
Oct  1 13:02:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e446 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:02:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1883: 321 pgs: 321 active+clean; 271 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 597 B/s wr, 4 op/s
Oct  1 13:02:25 np0005464891 podman[303046]: 2025-10-01 17:02:25.954852129 +0000 UTC m=+0.070222815 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  1 13:02:26 np0005464891 nova_compute[259907]: 2025-10-01 17:02:26.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:27 np0005464891 nova_compute[259907]: 2025-10-01 17:02:27.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1884: 321 pgs: 321 active+clean; 271 MiB data, 577 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:02:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e446 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:02:28 np0005464891 podman[303070]: 2025-10-01 17:02:28.97566358 +0000 UTC m=+0.082082601 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible)
Oct  1 13:02:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1885: 321 pgs: 321 active+clean; 271 MiB data, 577 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:02:31 np0005464891 nova_compute[259907]: 2025-10-01 17:02:31.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1886: 321 pgs: 321 active+clean; 271 MiB data, 577 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:02:32 np0005464891 nova_compute[259907]: 2025-10-01 17:02:32.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1887: 321 pgs: 321 active+clean; 271 MiB data, 577 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:02:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e446 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:02:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1888: 321 pgs: 321 active+clean; 271 MiB data, 577 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:02:35 np0005464891 nova_compute[259907]: 2025-10-01 17:02:35.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:02:35 np0005464891 nova_compute[259907]: 2025-10-01 17:02:35.864 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:02:35 np0005464891 nova_compute[259907]: 2025-10-01 17:02:35.865 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:02:35 np0005464891 nova_compute[259907]: 2025-10-01 17:02:35.865 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:02:35 np0005464891 nova_compute[259907]: 2025-10-01 17:02:35.865 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 13:02:35 np0005464891 nova_compute[259907]: 2025-10-01 17:02:35.866 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:02:36 np0005464891 nova_compute[259907]: 2025-10-01 17:02:36.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:02:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4231639307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:02:36 np0005464891 nova_compute[259907]: 2025-10-01 17:02:36.307 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:02:36 np0005464891 nova_compute[259907]: 2025-10-01 17:02:36.465 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 13:02:36 np0005464891 nova_compute[259907]: 2025-10-01 17:02:36.466 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4416MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 13:02:36 np0005464891 nova_compute[259907]: 2025-10-01 17:02:36.466 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:02:36 np0005464891 nova_compute[259907]: 2025-10-01 17:02:36.466 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:02:36 np0005464891 nova_compute[259907]: 2025-10-01 17:02:36.686 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 13:02:36 np0005464891 nova_compute[259907]: 2025-10-01 17:02:36.687 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 13:02:36 np0005464891 nova_compute[259907]: 2025-10-01 17:02:36.701 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Refreshing inventories for resource provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  1 13:02:36 np0005464891 nova_compute[259907]: 2025-10-01 17:02:36.718 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Updating ProviderTree inventory for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  1 13:02:36 np0005464891 nova_compute[259907]: 2025-10-01 17:02:36.719 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Updating inventory in ProviderTree for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  1 13:02:36 np0005464891 nova_compute[259907]: 2025-10-01 17:02:36.732 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Refreshing aggregate associations for resource provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  1 13:02:36 np0005464891 nova_compute[259907]: 2025-10-01 17:02:36.751 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Refreshing trait associations for resource provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8, traits: HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSSE3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_AVX2,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  1 13:02:36 np0005464891 nova_compute[259907]: 2025-10-01 17:02:36.767 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:02:37 np0005464891 nova_compute[259907]: 2025-10-01 17:02:37.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:02:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1943031165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:02:37 np0005464891 nova_compute[259907]: 2025-10-01 17:02:37.222 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:02:37 np0005464891 nova_compute[259907]: 2025-10-01 17:02:37.230 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:02:37 np0005464891 nova_compute[259907]: 2025-10-01 17:02:37.245 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:02:37 np0005464891 nova_compute[259907]: 2025-10-01 17:02:37.267 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 13:02:37 np0005464891 nova_compute[259907]: 2025-10-01 17:02:37.267 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:02:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1889: 321 pgs: 321 active+clean; 271 MiB data, 577 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:02:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:02:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/622580446' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:02:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:02:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/622580446' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:02:38 np0005464891 nova_compute[259907]: 2025-10-01 17:02:38.268 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:02:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e446 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:02:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1890: 321 pgs: 321 active+clean; 271 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct  1 13:02:39 np0005464891 nova_compute[259907]: 2025-10-01 17:02:39.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:02:39 np0005464891 nova_compute[259907]: 2025-10-01 17:02:39.804 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 13:02:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:02:39 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3227531751' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:02:39 np0005464891 podman[303140]: 2025-10-01 17:02:39.979239043 +0000 UTC m=+0.097216095 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  1 13:02:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e446 do_prune osdmap full prune enabled
Oct  1 13:02:40 np0005464891 nova_compute[259907]: 2025-10-01 17:02:40.799 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:02:40 np0005464891 nova_compute[259907]: 2025-10-01 17:02:40.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:02:40 np0005464891 nova_compute[259907]: 2025-10-01 17:02:40.804 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 13:02:40 np0005464891 nova_compute[259907]: 2025-10-01 17:02:40.861 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 13:02:40 np0005464891 nova_compute[259907]: 2025-10-01 17:02:40.861 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:02:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e447 e447: 3 total, 3 up, 3 in
Oct  1 13:02:41 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e447: 3 total, 3 up, 3 in
Oct  1 13:02:41 np0005464891 nova_compute[259907]: 2025-10-01 17:02:41.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1892: 321 pgs: 321 active+clean; 271 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 409 B/s wr, 1 op/s
Oct  1 13:02:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e447 do_prune osdmap full prune enabled
Oct  1 13:02:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e448 e448: 3 total, 3 up, 3 in
Oct  1 13:02:41 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e448: 3 total, 3 up, 3 in
Oct  1 13:02:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:02:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:02:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:02:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:02:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:02:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:02:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:02:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:02:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 13:02:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:02:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 13:02:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:02:42 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 9f93fe58-b44e-4ea1-924b-f71eed08a9ba does not exist
Oct  1 13:02:42 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 5c30d8f1-dbd7-49d6-bad8-c98616d04a3a does not exist
Oct  1 13:02:42 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev e36662fd-e5a9-440c-9280-37de94a2eab5 does not exist
Oct  1 13:02:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 13:02:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 13:02:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 13:02:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:02:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:02:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:02:42 np0005464891 nova_compute[259907]: 2025-10-01 17:02:42.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:42 np0005464891 podman[303431]: 2025-10-01 17:02:42.731872055 +0000 UTC m=+0.025053037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:02:42 np0005464891 podman[303431]: 2025-10-01 17:02:42.899981471 +0000 UTC m=+0.193162423 container create e5c7fdbe7105c3500072f0f9d48f9ef4faa4c236d911ec91809526fc7db90f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:02:42 np0005464891 systemd[1]: Started libpod-conmon-e5c7fdbe7105c3500072f0f9d48f9ef4faa4c236d911ec91809526fc7db90f5b.scope.
Oct  1 13:02:42 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:02:42 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:02:42 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:02:43 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:02:43 np0005464891 podman[303431]: 2025-10-01 17:02:43.126392024 +0000 UTC m=+0.419573006 container init e5c7fdbe7105c3500072f0f9d48f9ef4faa4c236d911ec91809526fc7db90f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:02:43 np0005464891 podman[303431]: 2025-10-01 17:02:43.132587523 +0000 UTC m=+0.425768475 container start e5c7fdbe7105c3500072f0f9d48f9ef4faa4c236d911ec91809526fc7db90f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brattain, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 13:02:43 np0005464891 charming_brattain[303447]: 167 167
Oct  1 13:02:43 np0005464891 systemd[1]: libpod-e5c7fdbe7105c3500072f0f9d48f9ef4faa4c236d911ec91809526fc7db90f5b.scope: Deactivated successfully.
Oct  1 13:02:43 np0005464891 conmon[303447]: conmon e5c7fdbe7105c3500072 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e5c7fdbe7105c3500072f0f9d48f9ef4faa4c236d911ec91809526fc7db90f5b.scope/container/memory.events
Oct  1 13:02:43 np0005464891 podman[303431]: 2025-10-01 17:02:43.201761189 +0000 UTC m=+0.494942141 container attach e5c7fdbe7105c3500072f0f9d48f9ef4faa4c236d911ec91809526fc7db90f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brattain, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 13:02:43 np0005464891 podman[303431]: 2025-10-01 17:02:43.202152579 +0000 UTC m=+0.495333521 container died e5c7fdbe7105c3500072f0f9d48f9ef4faa4c236d911ec91809526fc7db90f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brattain, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  1 13:02:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1894: 321 pgs: 321 active+clean; 271 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 4 op/s
Oct  1 13:02:43 np0005464891 systemd[1]: var-lib-containers-storage-overlay-80a228fdf103340503b476a944bff548347cb2c5e75a74c0ceac1c3459fcc43c-merged.mount: Deactivated successfully.
Oct  1 13:02:43 np0005464891 podman[303431]: 2025-10-01 17:02:43.440856889 +0000 UTC m=+0.734037841 container remove e5c7fdbe7105c3500072f0f9d48f9ef4faa4c236d911ec91809526fc7db90f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brattain, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:02:43 np0005464891 systemd[1]: libpod-conmon-e5c7fdbe7105c3500072f0f9d48f9ef4faa4c236d911ec91809526fc7db90f5b.scope: Deactivated successfully.
Oct  1 13:02:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:02:43 np0005464891 podman[303470]: 2025-10-01 17:02:43.635306297 +0000 UTC m=+0.085443493 container create abe745d9cb3eab208d31270c50ad2228c158563bbe67fdf22baf21079fc9fdfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bhabha, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:02:43 np0005464891 podman[303470]: 2025-10-01 17:02:43.573230786 +0000 UTC m=+0.023367992 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:02:43 np0005464891 systemd[1]: Started libpod-conmon-abe745d9cb3eab208d31270c50ad2228c158563bbe67fdf22baf21079fc9fdfe.scope.
Oct  1 13:02:43 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:02:43 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a4a55fd602761ad9bb06ef7659c3f25b8788bcee149f834ede82a4fc82011b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:02:43 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a4a55fd602761ad9bb06ef7659c3f25b8788bcee149f834ede82a4fc82011b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:02:43 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a4a55fd602761ad9bb06ef7659c3f25b8788bcee149f834ede82a4fc82011b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:02:43 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a4a55fd602761ad9bb06ef7659c3f25b8788bcee149f834ede82a4fc82011b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:02:43 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a4a55fd602761ad9bb06ef7659c3f25b8788bcee149f834ede82a4fc82011b4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 13:02:43 np0005464891 podman[303470]: 2025-10-01 17:02:43.795797343 +0000 UTC m=+0.245934539 container init abe745d9cb3eab208d31270c50ad2228c158563bbe67fdf22baf21079fc9fdfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:02:43 np0005464891 podman[303470]: 2025-10-01 17:02:43.803049542 +0000 UTC m=+0.253186718 container start abe745d9cb3eab208d31270c50ad2228c158563bbe67fdf22baf21079fc9fdfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct  1 13:02:43 np0005464891 nova_compute[259907]: 2025-10-01 17:02:43.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:02:43 np0005464891 podman[303470]: 2025-10-01 17:02:43.819179193 +0000 UTC m=+0.269316359 container attach abe745d9cb3eab208d31270c50ad2228c158563bbe67fdf22baf21079fc9fdfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bhabha, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:02:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:02:44 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3269895668' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:02:44 np0005464891 competent_bhabha[303488]: --> passed data devices: 0 physical, 3 LVM
Oct  1 13:02:44 np0005464891 competent_bhabha[303488]: --> relative data size: 1.0
Oct  1 13:02:44 np0005464891 competent_bhabha[303488]: --> All data devices are unavailable
Oct  1 13:02:44 np0005464891 systemd[1]: libpod-abe745d9cb3eab208d31270c50ad2228c158563bbe67fdf22baf21079fc9fdfe.scope: Deactivated successfully.
Oct  1 13:02:44 np0005464891 podman[303470]: 2025-10-01 17:02:44.79799122 +0000 UTC m=+1.248128396 container died abe745d9cb3eab208d31270c50ad2228c158563bbe67fdf22baf21079fc9fdfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bhabha, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 13:02:44 np0005464891 nova_compute[259907]: 2025-10-01 17:02:44.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:02:44 np0005464891 nova_compute[259907]: 2025-10-01 17:02:44.806 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:02:44 np0005464891 systemd[1]: var-lib-containers-storage-overlay-1a4a55fd602761ad9bb06ef7659c3f25b8788bcee149f834ede82a4fc82011b4-merged.mount: Deactivated successfully.
Oct  1 13:02:45 np0005464891 podman[303470]: 2025-10-01 17:02:45.023901509 +0000 UTC m=+1.474038685 container remove abe745d9cb3eab208d31270c50ad2228c158563bbe67fdf22baf21079fc9fdfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:02:45 np0005464891 systemd[1]: libpod-conmon-abe745d9cb3eab208d31270c50ad2228c158563bbe67fdf22baf21079fc9fdfe.scope: Deactivated successfully.
Oct  1 13:02:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1895: 321 pgs: 321 active+clean; 271 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 KiB/s wr, 27 op/s
Oct  1 13:02:45 np0005464891 podman[303670]: 2025-10-01 17:02:45.661142398 +0000 UTC m=+0.030839766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:02:45 np0005464891 podman[303670]: 2025-10-01 17:02:45.767531772 +0000 UTC m=+0.137229080 container create ddf12e00802cb283490a4dd6440b60cc57b189d8cbac414407b9f95caf6983cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chebyshev, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 13:02:45 np0005464891 systemd[1]: Started libpod-conmon-ddf12e00802cb283490a4dd6440b60cc57b189d8cbac414407b9f95caf6983cf.scope.
Oct  1 13:02:45 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:02:46 np0005464891 podman[303670]: 2025-10-01 17:02:46.012978436 +0000 UTC m=+0.382675814 container init ddf12e00802cb283490a4dd6440b60cc57b189d8cbac414407b9f95caf6983cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:02:46 np0005464891 podman[303670]: 2025-10-01 17:02:46.021960223 +0000 UTC m=+0.391657521 container start ddf12e00802cb283490a4dd6440b60cc57b189d8cbac414407b9f95caf6983cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chebyshev, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  1 13:02:46 np0005464891 magical_chebyshev[303686]: 167 167
Oct  1 13:02:46 np0005464891 systemd[1]: libpod-ddf12e00802cb283490a4dd6440b60cc57b189d8cbac414407b9f95caf6983cf.scope: Deactivated successfully.
Oct  1 13:02:46 np0005464891 nova_compute[259907]: 2025-10-01 17:02:46.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:46 np0005464891 podman[303670]: 2025-10-01 17:02:46.132751888 +0000 UTC m=+0.502449206 container attach ddf12e00802cb283490a4dd6440b60cc57b189d8cbac414407b9f95caf6983cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 13:02:46 np0005464891 podman[303670]: 2025-10-01 17:02:46.133529629 +0000 UTC m=+0.503226917 container died ddf12e00802cb283490a4dd6440b60cc57b189d8cbac414407b9f95caf6983cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  1 13:02:46 np0005464891 systemd[1]: var-lib-containers-storage-overlay-6057ca03d7656d777468ddcbe9e83e27160ddca43f848b018b5ea16484a6fc2b-merged.mount: Deactivated successfully.
Oct  1 13:02:46 np0005464891 podman[303670]: 2025-10-01 17:02:46.427062291 +0000 UTC m=+0.796759569 container remove ddf12e00802cb283490a4dd6440b60cc57b189d8cbac414407b9f95caf6983cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  1 13:02:46 np0005464891 systemd[1]: libpod-conmon-ddf12e00802cb283490a4dd6440b60cc57b189d8cbac414407b9f95caf6983cf.scope: Deactivated successfully.
Oct  1 13:02:46 np0005464891 podman[303712]: 2025-10-01 17:02:46.573707999 +0000 UTC m=+0.023984578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:02:46 np0005464891 podman[303712]: 2025-10-01 17:02:46.804349838 +0000 UTC m=+0.254626397 container create ff0bdf116064305f122b14d644bfbbe4da344b52c8b3d3063c5cb6145cf5e440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_carson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 13:02:46 np0005464891 systemd[1]: Started libpod-conmon-ff0bdf116064305f122b14d644bfbbe4da344b52c8b3d3063c5cb6145cf5e440.scope.
Oct  1 13:02:47 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:02:47 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8f5452d403531d7c1daa6af29472808b5bc7994a74cf7397abe459ef7ef094/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:02:47 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8f5452d403531d7c1daa6af29472808b5bc7994a74cf7397abe459ef7ef094/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:02:47 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8f5452d403531d7c1daa6af29472808b5bc7994a74cf7397abe459ef7ef094/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:02:47 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8f5452d403531d7c1daa6af29472808b5bc7994a74cf7397abe459ef7ef094/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:02:47 np0005464891 podman[303712]: 2025-10-01 17:02:47.123991595 +0000 UTC m=+0.574268204 container init ff0bdf116064305f122b14d644bfbbe4da344b52c8b3d3063c5cb6145cf5e440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_carson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Oct  1 13:02:47 np0005464891 podman[303712]: 2025-10-01 17:02:47.135373677 +0000 UTC m=+0.585650276 container start ff0bdf116064305f122b14d644bfbbe4da344b52c8b3d3063c5cb6145cf5e440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_carson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:02:47 np0005464891 nova_compute[259907]: 2025-10-01 17:02:47.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:47 np0005464891 podman[303712]: 2025-10-01 17:02:47.218436543 +0000 UTC m=+0.668713122 container attach ff0bdf116064305f122b14d644bfbbe4da344b52c8b3d3063c5cb6145cf5e440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_carson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:02:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1896: 321 pgs: 321 active+clean; 271 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.9 KiB/s wr, 27 op/s
Oct  1 13:02:47 np0005464891 epic_carson[303729]: {
Oct  1 13:02:47 np0005464891 epic_carson[303729]:    "0": [
Oct  1 13:02:47 np0005464891 epic_carson[303729]:        {
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "devices": [
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "/dev/loop3"
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            ],
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "lv_name": "ceph_lv0",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "lv_size": "21470642176",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "name": "ceph_lv0",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "tags": {
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.cluster_name": "ceph",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.crush_device_class": "",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.encrypted": "0",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.osd_id": "0",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.type": "block",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.vdo": "0"
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            },
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "type": "block",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "vg_name": "ceph_vg0"
Oct  1 13:02:47 np0005464891 epic_carson[303729]:        }
Oct  1 13:02:47 np0005464891 epic_carson[303729]:    ],
Oct  1 13:02:47 np0005464891 epic_carson[303729]:    "1": [
Oct  1 13:02:47 np0005464891 epic_carson[303729]:        {
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "devices": [
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "/dev/loop4"
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            ],
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "lv_name": "ceph_lv1",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "lv_size": "21470642176",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "name": "ceph_lv1",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "tags": {
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.cluster_name": "ceph",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.crush_device_class": "",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.encrypted": "0",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.osd_id": "1",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.type": "block",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.vdo": "0"
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            },
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "type": "block",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "vg_name": "ceph_vg1"
Oct  1 13:02:47 np0005464891 epic_carson[303729]:        }
Oct  1 13:02:47 np0005464891 epic_carson[303729]:    ],
Oct  1 13:02:47 np0005464891 epic_carson[303729]:    "2": [
Oct  1 13:02:47 np0005464891 epic_carson[303729]:        {
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "devices": [
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "/dev/loop5"
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            ],
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "lv_name": "ceph_lv2",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "lv_size": "21470642176",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "name": "ceph_lv2",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "tags": {
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.cluster_name": "ceph",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.crush_device_class": "",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.encrypted": "0",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.osd_id": "2",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.type": "block",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:                "ceph.vdo": "0"
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            },
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "type": "block",
Oct  1 13:02:47 np0005464891 epic_carson[303729]:            "vg_name": "ceph_vg2"
Oct  1 13:02:47 np0005464891 epic_carson[303729]:        }
Oct  1 13:02:47 np0005464891 epic_carson[303729]:    ]
Oct  1 13:02:47 np0005464891 epic_carson[303729]: }
Oct  1 13:02:48 np0005464891 podman[303712]: 2025-10-01 17:02:48.013433793 +0000 UTC m=+1.463710392 container died ff0bdf116064305f122b14d644bfbbe4da344b52c8b3d3063c5cb6145cf5e440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:02:48 np0005464891 systemd[1]: libpod-ff0bdf116064305f122b14d644bfbbe4da344b52c8b3d3063c5cb6145cf5e440.scope: Deactivated successfully.
Oct  1 13:02:48 np0005464891 systemd[1]: var-lib-containers-storage-overlay-6e8f5452d403531d7c1daa6af29472808b5bc7994a74cf7397abe459ef7ef094-merged.mount: Deactivated successfully.
Oct  1 13:02:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:02:48 np0005464891 podman[303712]: 2025-10-01 17:02:48.654414953 +0000 UTC m=+2.104691522 container remove ff0bdf116064305f122b14d644bfbbe4da344b52c8b3d3063c5cb6145cf5e440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct  1 13:02:48 np0005464891 systemd[1]: libpod-conmon-ff0bdf116064305f122b14d644bfbbe4da344b52c8b3d3063c5cb6145cf5e440.scope: Deactivated successfully.
Oct  1 13:02:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1897: 321 pgs: 321 active+clean; 395 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 15 MiB/s wr, 29 op/s
Oct  1 13:02:49 np0005464891 podman[303892]: 2025-10-01 17:02:49.328005668 +0000 UTC m=+0.019579987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:02:49 np0005464891 podman[303892]: 2025-10-01 17:02:49.556131468 +0000 UTC m=+0.247705807 container create cad8a8fbd47a39fa2b197a850c27d6d737d828c9c51e174ed5fc5b6ca8d66318 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_kare, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  1 13:02:49 np0005464891 systemd[1]: Started libpod-conmon-cad8a8fbd47a39fa2b197a850c27d6d737d828c9c51e174ed5fc5b6ca8d66318.scope.
Oct  1 13:02:49 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:02:49 np0005464891 podman[303892]: 2025-10-01 17:02:49.843513651 +0000 UTC m=+0.535088020 container init cad8a8fbd47a39fa2b197a850c27d6d737d828c9c51e174ed5fc5b6ca8d66318 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:02:49 np0005464891 podman[303892]: 2025-10-01 17:02:49.860341742 +0000 UTC m=+0.551916081 container start cad8a8fbd47a39fa2b197a850c27d6d737d828c9c51e174ed5fc5b6ca8d66318 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:02:49 np0005464891 wizardly_kare[303908]: 167 167
Oct  1 13:02:49 np0005464891 systemd[1]: libpod-cad8a8fbd47a39fa2b197a850c27d6d737d828c9c51e174ed5fc5b6ca8d66318.scope: Deactivated successfully.
Oct  1 13:02:50 np0005464891 podman[303892]: 2025-10-01 17:02:50.097285103 +0000 UTC m=+0.788859442 container attach cad8a8fbd47a39fa2b197a850c27d6d737d828c9c51e174ed5fc5b6ca8d66318 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:02:50 np0005464891 podman[303892]: 2025-10-01 17:02:50.098290472 +0000 UTC m=+0.789864791 container died cad8a8fbd47a39fa2b197a850c27d6d737d828c9c51e174ed5fc5b6ca8d66318 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_kare, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:02:50 np0005464891 systemd[1]: var-lib-containers-storage-overlay-09a76a2511ce7f32315fa909f85e6f290d8a6685cbd9db0c85526ad9d2cdbe45-merged.mount: Deactivated successfully.
Oct  1 13:02:50 np0005464891 podman[303892]: 2025-10-01 17:02:50.892419838 +0000 UTC m=+1.583994157 container remove cad8a8fbd47a39fa2b197a850c27d6d737d828c9c51e174ed5fc5b6ca8d66318 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:02:50 np0005464891 systemd[1]: libpod-conmon-cad8a8fbd47a39fa2b197a850c27d6d737d828c9c51e174ed5fc5b6ca8d66318.scope: Deactivated successfully.
Oct  1 13:02:51 np0005464891 nova_compute[259907]: 2025-10-01 17:02:51.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:51 np0005464891 podman[303933]: 2025-10-01 17:02:51.048642728 +0000 UTC m=+0.023023751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:02:51 np0005464891 podman[303933]: 2025-10-01 17:02:51.185917049 +0000 UTC m=+0.160298092 container create e2beef4c2c149329828209136a0dee3fedb9c75bbc181a9cae4bb5096c90d07a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 13:02:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1898: 321 pgs: 321 active+clean; 483 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 21 MiB/s wr, 95 op/s
Oct  1 13:02:51 np0005464891 systemd[1]: Started libpod-conmon-e2beef4c2c149329828209136a0dee3fedb9c75bbc181a9cae4bb5096c90d07a.scope.
Oct  1 13:02:51 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:02:51 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e308c0c1788f0980f56ad706a7a721fa2153af6d05aebe4cfe9a88027b658d63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:02:51 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e308c0c1788f0980f56ad706a7a721fa2153af6d05aebe4cfe9a88027b658d63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:02:51 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e308c0c1788f0980f56ad706a7a721fa2153af6d05aebe4cfe9a88027b658d63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:02:51 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e308c0c1788f0980f56ad706a7a721fa2153af6d05aebe4cfe9a88027b658d63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:02:51 np0005464891 podman[303933]: 2025-10-01 17:02:51.717868113 +0000 UTC m=+0.692249226 container init e2beef4c2c149329828209136a0dee3fedb9c75bbc181a9cae4bb5096c90d07a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cartwright, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  1 13:02:51 np0005464891 podman[303933]: 2025-10-01 17:02:51.733431839 +0000 UTC m=+0.707812892 container start e2beef4c2c149329828209136a0dee3fedb9c75bbc181a9cae4bb5096c90d07a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cartwright, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:02:51 np0005464891 podman[303933]: 2025-10-01 17:02:51.8542673 +0000 UTC m=+0.828648313 container attach e2beef4c2c149329828209136a0dee3fedb9c75bbc181a9cae4bb5096c90d07a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:02:51 np0005464891 podman[303948]: 2025-10-01 17:02:51.877600149 +0000 UTC m=+0.634312370 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  1 13:02:52 np0005464891 nova_compute[259907]: 2025-10-01 17:02:52.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:52 np0005464891 relaxed_cartwright[303970]: {
Oct  1 13:02:52 np0005464891 relaxed_cartwright[303970]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 13:02:52 np0005464891 relaxed_cartwright[303970]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:02:52 np0005464891 relaxed_cartwright[303970]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 13:02:52 np0005464891 relaxed_cartwright[303970]:        "osd_id": 2,
Oct  1 13:02:52 np0005464891 relaxed_cartwright[303970]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:02:52 np0005464891 relaxed_cartwright[303970]:        "type": "bluestore"
Oct  1 13:02:52 np0005464891 relaxed_cartwright[303970]:    },
Oct  1 13:02:52 np0005464891 relaxed_cartwright[303970]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 13:02:52 np0005464891 relaxed_cartwright[303970]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:02:52 np0005464891 relaxed_cartwright[303970]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 13:02:52 np0005464891 relaxed_cartwright[303970]:        "osd_id": 0,
Oct  1 13:02:52 np0005464891 relaxed_cartwright[303970]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:02:52 np0005464891 relaxed_cartwright[303970]:        "type": "bluestore"
Oct  1 13:02:52 np0005464891 relaxed_cartwright[303970]:    },
Oct  1 13:02:52 np0005464891 relaxed_cartwright[303970]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 13:02:52 np0005464891 relaxed_cartwright[303970]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:02:52 np0005464891 relaxed_cartwright[303970]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 13:02:52 np0005464891 relaxed_cartwright[303970]:        "osd_id": 1,
Oct  1 13:02:52 np0005464891 relaxed_cartwright[303970]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:02:52 np0005464891 relaxed_cartwright[303970]:        "type": "bluestore"
Oct  1 13:02:52 np0005464891 relaxed_cartwright[303970]:    }
Oct  1 13:02:52 np0005464891 relaxed_cartwright[303970]: }
Oct  1 13:02:52 np0005464891 systemd[1]: libpod-e2beef4c2c149329828209136a0dee3fedb9c75bbc181a9cae4bb5096c90d07a.scope: Deactivated successfully.
Oct  1 13:02:52 np0005464891 podman[303933]: 2025-10-01 17:02:52.86933506 +0000 UTC m=+1.843716103 container died e2beef4c2c149329828209136a0dee3fedb9c75bbc181a9cae4bb5096c90d07a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 13:02:52 np0005464891 systemd[1]: libpod-e2beef4c2c149329828209136a0dee3fedb9c75bbc181a9cae4bb5096c90d07a.scope: Consumed 1.106s CPU time.
Oct  1 13:02:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1899: 321 pgs: 321 active+clean; 603 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 29 MiB/s wr, 86 op/s
Oct  1 13:02:53 np0005464891 systemd[1]: var-lib-containers-storage-overlay-e308c0c1788f0980f56ad706a7a721fa2153af6d05aebe4cfe9a88027b658d63-merged.mount: Deactivated successfully.
Oct  1 13:02:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:02:54 np0005464891 podman[303933]: 2025-10-01 17:02:54.245978375 +0000 UTC m=+3.220359398 container remove e2beef4c2c149329828209136a0dee3fedb9c75bbc181a9cae4bb5096c90d07a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cartwright, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  1 13:02:54 np0005464891 systemd[1]: libpod-conmon-e2beef4c2c149329828209136a0dee3fedb9c75bbc181a9cae4bb5096c90d07a.scope: Deactivated successfully.
Oct  1 13:02:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 13:02:54 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:02:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 13:02:55 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:02:55 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 5dfe322b-b582-4767-a061-8c07631861c9 does not exist
Oct  1 13:02:55 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 1dd1c13e-a55e-4ed8-8d86-9c894bb13140 does not exist
Oct  1 13:02:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1900: 321 pgs: 321 active+clean; 659 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 32 MiB/s wr, 102 op/s
Oct  1 13:02:55 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:02:55 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:02:56 np0005464891 nova_compute[259907]: 2025-10-01 17:02:56.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:56 np0005464891 podman[304074]: 2025-10-01 17:02:56.956738152 +0000 UTC m=+0.067464630 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  1 13:02:57 np0005464891 nova_compute[259907]: 2025-10-01 17:02:57.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:02:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1901: 321 pgs: 321 active+clean; 659 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 32 MiB/s wr, 86 op/s
Oct  1 13:02:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:02:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1902: 321 pgs: 321 active+clean; 783 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 43 MiB/s wr, 89 op/s
Oct  1 13:02:59 np0005464891 podman[304095]: 2025-10-01 17:02:59.961025647 +0000 UTC m=+0.068892628 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  1 13:03:01 np0005464891 nova_compute[259907]: 2025-10-01 17:03:01.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:03:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1903: 321 pgs: 321 active+clean; 907 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 43 MiB/s wr, 142 op/s
Oct  1 13:03:02 np0005464891 nova_compute[259907]: 2025-10-01 17:03:02.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:03:02 np0005464891 ovn_controller[152409]: 2025-10-01T17:03:02Z|00240|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct  1 13:03:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1904: 321 pgs: 321 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 58 MiB/s wr, 89 op/s
Oct  1 13:03:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:03:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e448 do_prune osdmap full prune enabled
Oct  1 13:03:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e449 e449: 3 total, 3 up, 3 in
Oct  1 13:03:03 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e449: 3 total, 3 up, 3 in
Oct  1 13:03:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1906: 321 pgs: 321 active+clean; 1.3 GiB data, 1.6 GiB used, 58 GiB / 60 GiB avail; 86 KiB/s rd, 64 MiB/s wr, 149 op/s
Oct  1 13:03:06 np0005464891 nova_compute[259907]: 2025-10-01 17:03:06.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:03:07 np0005464891 nova_compute[259907]: 2025-10-01 17:03:07.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:03:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1907: 321 pgs: 321 active+clean; 1.3 GiB data, 1.6 GiB used, 58 GiB / 60 GiB avail; 86 KiB/s rd, 64 MiB/s wr, 149 op/s
Oct  1 13:03:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e449 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:03:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:03:08 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/181450535' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:03:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1908: 321 pgs: 321 active+clean; 1.3 GiB data, 1.6 GiB used, 58 GiB / 60 GiB avail; 89 KiB/s rd, 51 MiB/s wr, 151 op/s
Oct  1 13:03:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e449 do_prune osdmap full prune enabled
Oct  1 13:03:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e450 e450: 3 total, 3 up, 3 in
Oct  1 13:03:09 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e450: 3 total, 3 up, 3 in
Oct  1 13:03:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e450 do_prune osdmap full prune enabled
Oct  1 13:03:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e451 e451: 3 total, 3 up, 3 in
Oct  1 13:03:10 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e451: 3 total, 3 up, 3 in
Oct  1 13:03:10 np0005464891 podman[304116]: 2025-10-01 17:03:10.959200867 +0000 UTC m=+0.068445766 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 13:03:11 np0005464891 nova_compute[259907]: 2025-10-01 17:03:11.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:03:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1911: 321 pgs: 321 active+clean; 1.3 GiB data, 1.6 GiB used, 58 GiB / 60 GiB avail; 71 KiB/s rd, 15 MiB/s wr, 112 op/s
Oct  1 13:03:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_17:03:12
Oct  1 13:03:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 13:03:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 13:03:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'backups', 'images', '.rgw.root', 'vms', 'volumes']
Oct  1 13:03:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 13:03:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:03:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:03:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:03:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:03:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:03:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:03:12 np0005464891 nova_compute[259907]: 2025-10-01 17:03:12.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:03:12 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:03:12 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/759416279' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:03:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:03:12.465 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:03:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:03:12.466 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:03:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:03:12.466 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:03:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 13:03:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:03:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 13:03:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:03:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:03:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:03:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:03:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:03:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:03:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:03:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1912: 321 pgs: 321 active+clean; 1.3 GiB data, 1.6 GiB used, 58 GiB / 60 GiB avail; 27 KiB/s rd, 1.7 KiB/s wr, 37 op/s
Oct  1 13:03:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e451 do_prune osdmap full prune enabled
Oct  1 13:03:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e452 e452: 3 total, 3 up, 3 in
Oct  1 13:03:13 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e452: 3 total, 3 up, 3 in
Oct  1 13:03:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e452 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:03:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e452 do_prune osdmap full prune enabled
Oct  1 13:03:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e453 e453: 3 total, 3 up, 3 in
Oct  1 13:03:13 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e453: 3 total, 3 up, 3 in
Oct  1 13:03:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e453 do_prune osdmap full prune enabled
Oct  1 13:03:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e454 e454: 3 total, 3 up, 3 in
Oct  1 13:03:14 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e454: 3 total, 3 up, 3 in
Oct  1 13:03:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1916: 321 pgs: 321 active+clean; 1.3 GiB data, 1.6 GiB used, 58 GiB / 60 GiB avail; 41 KiB/s rd, 2.7 KiB/s wr, 54 op/s
Oct  1 13:03:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:03:15 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/448138370' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:03:16 np0005464891 nova_compute[259907]: 2025-10-01 17:03:16.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:03:17 np0005464891 nova_compute[259907]: 2025-10-01 17:03:17.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:03:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1917: 321 pgs: 321 active+clean; 1.3 GiB data, 1.6 GiB used, 58 GiB / 60 GiB avail; 33 KiB/s rd, 2.2 KiB/s wr, 44 op/s
Oct  1 13:03:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e454 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:03:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1918: 321 pgs: 321 active+clean; 1.4 GiB data, 1.7 GiB used, 58 GiB / 60 GiB avail; 32 KiB/s rd, 15 MiB/s wr, 45 op/s
Oct  1 13:03:21 np0005464891 nova_compute[259907]: 2025-10-01 17:03:21.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:03:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1919: 321 pgs: 321 active+clean; 1.5 GiB data, 1.8 GiB used, 58 GiB / 60 GiB avail; 94 KiB/s rd, 29 MiB/s wr, 147 op/s
Oct  1 13:03:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 13:03:22 np0005464891 nova_compute[259907]: 2025-10-01 17:03:22.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:03:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:03:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 13:03:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:03:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 13:03:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:03:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.02347183898222309 of space, bias 1.0, pg target 7.041551694666927 quantized to 32 (current 32)
Oct  1 13:03:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:03:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013042454173992315 quantized to 32 (current 32)
Oct  1 13:03:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:03:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1951896427524907 quantized to 32 (current 32)
Oct  1 13:03:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:03:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0005962264765253629 quantized to 16 (current 32)
Oct  1 13:03:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:03:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:03:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:03:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.452830956567037e-05 quantized to 32 (current 32)
Oct  1 13:03:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:03:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006334906313081982 quantized to 32 (current 32)
Oct  1 13:03:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:03:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:03:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:03:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00014905661913134074 quantized to 32 (current 32)
Oct  1 13:03:23 np0005464891 podman[304138]: 2025-10-01 17:03:23.015282263 +0000 UTC m=+0.119514215 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct  1 13:03:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1920: 321 pgs: 321 active+clean; 1.7 GiB data, 2.0 GiB used, 58 GiB / 60 GiB avail; 76 KiB/s rd, 48 MiB/s wr, 125 op/s
Oct  1 13:03:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e454 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:03:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1921: 321 pgs: 321 active+clean; 1.8 GiB data, 2.2 GiB used, 58 GiB / 60 GiB avail; 103 KiB/s rd, 56 MiB/s wr, 170 op/s
Oct  1 13:03:25 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 13:03:25 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 8411 writes, 38K keys, 8411 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 8410 writes, 8410 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1696 writes, 7640 keys, 1696 commit groups, 1.0 writes per commit group, ingest: 10.28 MB, 0.02 MB/s#012Interval WAL: 1695 writes, 1695 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     27.8      1.53              0.15        21    0.073       0      0       0.0       0.0#012  L6      1/0    9.97 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   3.8     52.3     43.5      3.72              0.63        20    0.186    106K    11K       0.0       0.0#012 Sum      1/0    9.97 MB   0.0      0.2     0.0      0.1       0.2      0.1       0.0   4.8     37.0     38.9      5.25              0.78        41    0.128    106K    11K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.6     30.0     30.6      1.90              0.20        10    0.190     35K   3130       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0     52.3     43.5      3.72              0.63        20    0.186    106K    11K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     27.8      1.53              0.15        20    0.076       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.6      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.042, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.20 GB write, 0.07 MB/s write, 0.19 GB read, 0.06 MB/s read, 5.2 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55bddc5951f0#2 capacity: 304.00 MB usage: 22.23 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000328 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1506,21.36 MB,7.02675%) FilterBlock(42,311.80 KB,0.100161%) IndexBlock(42,576.02 KB,0.185038%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  1 13:03:26 np0005464891 nova_compute[259907]: 2025-10-01 17:03:26.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:03:27 np0005464891 nova_compute[259907]: 2025-10-01 17:03:27.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:03:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1922: 321 pgs: 321 active+clean; 1.8 GiB data, 2.2 GiB used, 58 GiB / 60 GiB avail; 84 KiB/s rd, 49 MiB/s wr, 142 op/s
Oct  1 13:03:27 np0005464891 podman[304164]: 2025-10-01 17:03:27.943909422 +0000 UTC m=+0.059929432 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  1 13:03:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e454 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:03:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1923: 321 pgs: 321 active+clean; 2.0 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 84 KiB/s rd, 64 MiB/s wr, 146 op/s
Oct  1 13:03:30 np0005464891 podman[304184]: 2025-10-01 17:03:30.955751496 +0000 UTC m=+0.064153337 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  1 13:03:31 np0005464891 nova_compute[259907]: 2025-10-01 17:03:31.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:03:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1924: 321 pgs: 321 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 102 KiB/s rd, 64 MiB/s wr, 176 op/s
Oct  1 13:03:32 np0005464891 nova_compute[259907]: 2025-10-01 17:03:32.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:03:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1925: 321 pgs: 321 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 57 KiB/s rd, 60 MiB/s wr, 105 op/s
Oct  1 13:03:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e454 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:03:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1926: 321 pgs: 321 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 68 KiB/s rd, 45 MiB/s wr, 119 op/s
Oct  1 13:03:36 np0005464891 nova_compute[259907]: 2025-10-01 17:03:36.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:03:36 np0005464891 nova_compute[259907]: 2025-10-01 17:03:36.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:03:36 np0005464891 nova_compute[259907]: 2025-10-01 17:03:36.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:03:36 np0005464891 nova_compute[259907]: 2025-10-01 17:03:36.885 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:03:36 np0005464891 nova_compute[259907]: 2025-10-01 17:03:36.886 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:03:36 np0005464891 nova_compute[259907]: 2025-10-01 17:03:36.886 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:03:36 np0005464891 nova_compute[259907]: 2025-10-01 17:03:36.886 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 13:03:36 np0005464891 nova_compute[259907]: 2025-10-01 17:03:36.887 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:03:37 np0005464891 nova_compute[259907]: 2025-10-01 17:03:37.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:03:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1927: 321 pgs: 321 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 38 KiB/s rd, 34 MiB/s wr, 68 op/s
Oct  1 13:03:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:03:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/943260932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:03:37 np0005464891 nova_compute[259907]: 2025-10-01 17:03:37.370 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:03:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:03:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1754347990' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:03:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:03:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1754347990' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:03:37 np0005464891 nova_compute[259907]: 2025-10-01 17:03:37.590 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 13:03:37 np0005464891 nova_compute[259907]: 2025-10-01 17:03:37.591 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4396MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 13:03:37 np0005464891 nova_compute[259907]: 2025-10-01 17:03:37.591 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:03:37 np0005464891 nova_compute[259907]: 2025-10-01 17:03:37.592 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:03:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e454 do_prune osdmap full prune enabled
Oct  1 13:03:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e455 e455: 3 total, 3 up, 3 in
Oct  1 13:03:38 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e455: 3 total, 3 up, 3 in
Oct  1 13:03:38 np0005464891 nova_compute[259907]: 2025-10-01 17:03:38.073 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 13:03:38 np0005464891 nova_compute[259907]: 2025-10-01 17:03:38.074 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 13:03:38 np0005464891 nova_compute[259907]: 2025-10-01 17:03:38.118 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:03:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:03:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/970041349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:03:38 np0005464891 nova_compute[259907]: 2025-10-01 17:03:38.553 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:03:38 np0005464891 nova_compute[259907]: 2025-10-01 17:03:38.561 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:03:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e455 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:03:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1929: 321 pgs: 321 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 46 KiB/s rd, 25 MiB/s wr, 79 op/s
Oct  1 13:03:39 np0005464891 nova_compute[259907]: 2025-10-01 17:03:39.697 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:03:39 np0005464891 nova_compute[259907]: 2025-10-01 17:03:39.825 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 13:03:39 np0005464891 nova_compute[259907]: 2025-10-01 17:03:39.825 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:03:41 np0005464891 nova_compute[259907]: 2025-10-01 17:03:41.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:03:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1930: 321 pgs: 321 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 21 KiB/s rd, 17 MiB/s wr, 35 op/s
Oct  1 13:03:41 np0005464891 podman[304248]: 2025-10-01 17:03:41.942736702 +0000 UTC m=+0.055154341 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  1 13:03:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:03:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:03:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:03:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:03:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:03:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:03:42 np0005464891 nova_compute[259907]: 2025-10-01 17:03:42.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:03:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e455 do_prune osdmap full prune enabled
Oct  1 13:03:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e456 e456: 3 total, 3 up, 3 in
Oct  1 13:03:42 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e456: 3 total, 3 up, 3 in
Oct  1 13:03:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1932: 321 pgs: 321 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 10 KiB/s rd, 2.5 MiB/s wr, 15 op/s
Oct  1 13:03:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e456 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:03:43 np0005464891 nova_compute[259907]: 2025-10-01 17:03:43.821 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 13:03:43 np0005464891 nova_compute[259907]: 2025-10-01 17:03:43.822 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 13:03:43 np0005464891 nova_compute[259907]: 2025-10-01 17:03:43.892 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 13:03:43 np0005464891 nova_compute[259907]: 2025-10-01 17:03:43.892 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  1 13:03:43 np0005464891 nova_compute[259907]: 2025-10-01 17:03:43.892 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  1 13:03:43 np0005464891 nova_compute[259907]: 2025-10-01 17:03:43.994 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  1 13:03:43 np0005464891 nova_compute[259907]: 2025-10-01 17:03:43.995 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 13:03:43 np0005464891 nova_compute[259907]: 2025-10-01 17:03:43.995 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 13:03:43 np0005464891 nova_compute[259907]: 2025-10-01 17:03:43.996 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  1 13:03:44 np0005464891 nova_compute[259907]: 2025-10-01 17:03:44.806 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 13:03:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1933: 321 pgs: 321 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 20 KiB/s rd, 2.5 MiB/s wr, 27 op/s
Oct  1 13:03:45 np0005464891 nova_compute[259907]: 2025-10-01 17:03:45.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 13:03:46 np0005464891 nova_compute[259907]: 2025-10-01 17:03:46.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:03:46 np0005464891 nova_compute[259907]: 2025-10-01 17:03:46.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 13:03:47 np0005464891 nova_compute[259907]: 2025-10-01 17:03:47.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:03:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1934: 321 pgs: 321 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 16 KiB/s rd, 863 B/s wr, 21 op/s
Oct  1 13:03:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e456 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:03:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e456 do_prune osdmap full prune enabled
Oct  1 13:03:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e457 e457: 3 total, 3 up, 3 in
Oct  1 13:03:48 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e457: 3 total, 3 up, 3 in
Oct  1 13:03:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1936: 321 pgs: 321 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 13 KiB/s rd, 511 B/s wr, 18 op/s
Oct  1 13:03:51 np0005464891 nova_compute[259907]: 2025-10-01 17:03:51.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:03:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1937: 321 pgs: 321 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 19 KiB/s rd, 832 B/s wr, 25 op/s
Oct  1 13:03:51 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Oct  1 13:03:51 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:03:51.641406) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 13:03:51 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Oct  1 13:03:51 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338231641504, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 2137, "num_deletes": 253, "total_data_size": 3457086, "memory_usage": 3513432, "flush_reason": "Manual Compaction"}
Oct  1 13:03:51 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Oct  1 13:03:51 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338231777709, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 3399382, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36782, "largest_seqno": 38918, "table_properties": {"data_size": 3389425, "index_size": 6387, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20052, "raw_average_key_size": 20, "raw_value_size": 3369710, "raw_average_value_size": 3441, "num_data_blocks": 281, "num_entries": 979, "num_filter_entries": 979, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759338017, "oldest_key_time": 1759338017, "file_creation_time": 1759338231, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Oct  1 13:03:51 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 136364 microseconds, and 8973 cpu microseconds.
Oct  1 13:03:51 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 13:03:51 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:03:51.777781) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 3399382 bytes OK
Oct  1 13:03:51 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:03:51.777800) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Oct  1 13:03:51 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:03:51.892606) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Oct  1 13:03:51 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:03:51.892646) EVENT_LOG_v1 {"time_micros": 1759338231892636, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 13:03:51 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:03:51.892670) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 13:03:51 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3448066, prev total WAL file size 3448066, number of live WAL files 2.
Oct  1 13:03:51 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:03:51 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:03:51.894080) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Oct  1 13:03:51 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 13:03:51 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(3319KB)], [77(10209KB)]
Oct  1 13:03:51 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338231894150, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 13853740, "oldest_snapshot_seqno": -1}
Oct  1 13:03:52 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 7047 keys, 12086057 bytes, temperature: kUnknown
Oct  1 13:03:52 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338232094832, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 12086057, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12032142, "index_size": 35179, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17669, "raw_key_size": 177636, "raw_average_key_size": 25, "raw_value_size": 11898991, "raw_average_value_size": 1688, "num_data_blocks": 1405, "num_entries": 7047, "num_filter_entries": 7047, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759338231, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct  1 13:03:52 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 13:03:52 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:03:52.095124) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12086057 bytes
Oct  1 13:03:52 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:03:52.169596) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 69.0 rd, 60.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.0 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(7.6) write-amplify(3.6) OK, records in: 7568, records dropped: 521 output_compression: NoCompression
Oct  1 13:03:52 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:03:52.169634) EVENT_LOG_v1 {"time_micros": 1759338232169621, "job": 44, "event": "compaction_finished", "compaction_time_micros": 200779, "compaction_time_cpu_micros": 24580, "output_level": 6, "num_output_files": 1, "total_output_size": 12086057, "num_input_records": 7568, "num_output_records": 7047, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 13:03:52 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:03:52 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338232170629, "job": 44, "event": "table_file_deletion", "file_number": 79}
Oct  1 13:03:52 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:03:52 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338232172816, "job": 44, "event": "table_file_deletion", "file_number": 77}
Oct  1 13:03:52 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:03:51.893918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:03:52 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:03:52.172898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:03:52 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:03:52.172902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:03:52 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:03:52.172904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:03:52 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:03:52.172905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:03:52 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:03:52.172907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:03:52 np0005464891 nova_compute[259907]: 2025-10-01 17:03:52.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:03:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1938: 321 pgs: 321 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 16 KiB/s rd, 716 B/s wr, 21 op/s
Oct  1 13:03:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e457 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:03:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e457 do_prune osdmap full prune enabled
Oct  1 13:03:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e458 e458: 3 total, 3 up, 3 in
Oct  1 13:03:53 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e458: 3 total, 3 up, 3 in
Oct  1 13:03:53 np0005464891 podman[304267]: 2025-10-01 17:03:53.95907897 +0000 UTC m=+0.077309379 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct  1 13:03:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1940: 321 pgs: 321 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 578 KiB/s rd, 383 B/s wr, 13 op/s
Oct  1 13:03:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:03:56 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:03:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 13:03:56 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:03:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 13:03:56 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:03:56 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 22942077-a239-467f-aae2-0fd0cf029a86 does not exist
Oct  1 13:03:56 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 1965bd6e-5605-4df2-bee0-6f7d1a769eb8 does not exist
Oct  1 13:03:56 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 84edef4c-6501-48d6-babb-a5f526eb9eba does not exist
Oct  1 13:03:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 13:03:56 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 13:03:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 13:03:56 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:03:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:03:56 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:03:56 np0005464891 nova_compute[259907]: 2025-10-01 17:03:56.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:03:56 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:03:56 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:03:56 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:03:56 np0005464891 podman[304567]: 2025-10-01 17:03:56.796122435 +0000 UTC m=+0.117912281 container create 597ede8dd402275ccf30ca5014c8f5ecff530ac07698b0820923af8d16190722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 13:03:56 np0005464891 podman[304567]: 2025-10-01 17:03:56.705117082 +0000 UTC m=+0.026906908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:03:56 np0005464891 systemd[1]: Started libpod-conmon-597ede8dd402275ccf30ca5014c8f5ecff530ac07698b0820923af8d16190722.scope.
Oct  1 13:03:56 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:03:57 np0005464891 podman[304567]: 2025-10-01 17:03:57.013382017 +0000 UTC m=+0.335171893 container init 597ede8dd402275ccf30ca5014c8f5ecff530ac07698b0820923af8d16190722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gould, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:03:57 np0005464891 podman[304567]: 2025-10-01 17:03:57.020467732 +0000 UTC m=+0.342257538 container start 597ede8dd402275ccf30ca5014c8f5ecff530ac07698b0820923af8d16190722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gould, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  1 13:03:57 np0005464891 systemd[1]: libpod-597ede8dd402275ccf30ca5014c8f5ecff530ac07698b0820923af8d16190722.scope: Deactivated successfully.
Oct  1 13:03:57 np0005464891 great_gould[304584]: 167 167
Oct  1 13:03:57 np0005464891 conmon[304584]: conmon 597ede8dd402275ccf30 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-597ede8dd402275ccf30ca5014c8f5ecff530ac07698b0820923af8d16190722.scope/container/memory.events
Oct  1 13:03:57 np0005464891 podman[304567]: 2025-10-01 17:03:57.060737025 +0000 UTC m=+0.382526861 container attach 597ede8dd402275ccf30ca5014c8f5ecff530ac07698b0820923af8d16190722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gould, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:03:57 np0005464891 podman[304567]: 2025-10-01 17:03:57.061338281 +0000 UTC m=+0.383128147 container died 597ede8dd402275ccf30ca5014c8f5ecff530ac07698b0820923af8d16190722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:03:57 np0005464891 systemd[1]: var-lib-containers-storage-overlay-b3fdbc049232aef2ebc0febbf6ceeddd797f5c6270a720fa0cefa7449a4b699e-merged.mount: Deactivated successfully.
Oct  1 13:03:57 np0005464891 nova_compute[259907]: 2025-10-01 17:03:57.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:03:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1941: 321 pgs: 321 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 547 KiB/s rd, 364 B/s wr, 9 op/s
Oct  1 13:03:57 np0005464891 podman[304567]: 2025-10-01 17:03:57.782880299 +0000 UTC m=+1.104670145 container remove 597ede8dd402275ccf30ca5014c8f5ecff530ac07698b0820923af8d16190722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gould, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:03:57 np0005464891 systemd[1]: libpod-conmon-597ede8dd402275ccf30ca5014c8f5ecff530ac07698b0820923af8d16190722.scope: Deactivated successfully.
Oct  1 13:03:58 np0005464891 podman[304609]: 2025-10-01 17:03:58.015105101 +0000 UTC m=+0.110796987 container create d67f8b0794b9011d955bab235f9a4bbe28b7087518f691684d91e449c1a5d8f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:03:58 np0005464891 podman[304609]: 2025-10-01 17:03:57.932822446 +0000 UTC m=+0.028514352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:03:58 np0005464891 systemd[1]: Started libpod-conmon-d67f8b0794b9011d955bab235f9a4bbe28b7087518f691684d91e449c1a5d8f1.scope.
Oct  1 13:03:58 np0005464891 podman[304621]: 2025-10-01 17:03:58.127223653 +0000 UTC m=+0.076565869 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd)
Oct  1 13:03:58 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:03:58 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4af9a4eabceb2741b2547f2b4af2393531f6c563cbe6781e353e5e6ccc9c4baf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:03:58 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4af9a4eabceb2741b2547f2b4af2393531f6c563cbe6781e353e5e6ccc9c4baf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:03:58 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4af9a4eabceb2741b2547f2b4af2393531f6c563cbe6781e353e5e6ccc9c4baf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:03:58 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4af9a4eabceb2741b2547f2b4af2393531f6c563cbe6781e353e5e6ccc9c4baf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:03:58 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4af9a4eabceb2741b2547f2b4af2393531f6c563cbe6781e353e5e6ccc9c4baf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 13:03:58 np0005464891 podman[304609]: 2025-10-01 17:03:58.217012683 +0000 UTC m=+0.312704599 container init d67f8b0794b9011d955bab235f9a4bbe28b7087518f691684d91e449c1a5d8f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 13:03:58 np0005464891 podman[304609]: 2025-10-01 17:03:58.224352453 +0000 UTC m=+0.320044349 container start d67f8b0794b9011d955bab235f9a4bbe28b7087518f691684d91e449c1a5d8f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:03:58 np0005464891 podman[304609]: 2025-10-01 17:03:58.260562325 +0000 UTC m=+0.356254211 container attach d67f8b0794b9011d955bab235f9a4bbe28b7087518f691684d91e449c1a5d8f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 13:03:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e458 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:03:59 np0005464891 stupefied_golick[304640]: --> passed data devices: 0 physical, 3 LVM
Oct  1 13:03:59 np0005464891 stupefied_golick[304640]: --> relative data size: 1.0
Oct  1 13:03:59 np0005464891 stupefied_golick[304640]: --> All data devices are unavailable
Oct  1 13:03:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1942: 321 pgs: 321 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 2.1 MiB/s rd, 716 B/s wr, 17 op/s
Oct  1 13:03:59 np0005464891 systemd[1]: libpod-d67f8b0794b9011d955bab235f9a4bbe28b7087518f691684d91e449c1a5d8f1.scope: Deactivated successfully.
Oct  1 13:03:59 np0005464891 podman[304609]: 2025-10-01 17:03:59.368676154 +0000 UTC m=+1.464368040 container died d67f8b0794b9011d955bab235f9a4bbe28b7087518f691684d91e449c1a5d8f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 13:03:59 np0005464891 systemd[1]: libpod-d67f8b0794b9011d955bab235f9a4bbe28b7087518f691684d91e449c1a5d8f1.scope: Consumed 1.031s CPU time.
Oct  1 13:03:59 np0005464891 systemd[1]: var-lib-containers-storage-overlay-4af9a4eabceb2741b2547f2b4af2393531f6c563cbe6781e353e5e6ccc9c4baf-merged.mount: Deactivated successfully.
Oct  1 13:04:00 np0005464891 podman[304609]: 2025-10-01 17:04:00.36575615 +0000 UTC m=+2.461448046 container remove d67f8b0794b9011d955bab235f9a4bbe28b7087518f691684d91e449c1a5d8f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 13:04:00 np0005464891 systemd[1]: libpod-conmon-d67f8b0794b9011d955bab235f9a4bbe28b7087518f691684d91e449c1a5d8f1.scope: Deactivated successfully.
Oct  1 13:04:01 np0005464891 podman[304824]: 2025-10-01 17:04:01.128128857 +0000 UTC m=+0.047387669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:04:01 np0005464891 podman[304824]: 2025-10-01 17:04:01.257179402 +0000 UTC m=+0.176438194 container create d0355f85595dc479771980a0d468f9cec07c109e81e2bc37a49ebb583676942a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:04:01 np0005464891 nova_compute[259907]: 2025-10-01 17:04:01.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1943: 321 pgs: 321 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 2.0 MiB/s rd, 511 B/s wr, 10 op/s
Oct  1 13:04:01 np0005464891 systemd[1]: Started libpod-conmon-d0355f85595dc479771980a0d468f9cec07c109e81e2bc37a49ebb583676942a.scope.
Oct  1 13:04:01 np0005464891 podman[304838]: 2025-10-01 17:04:01.381951991 +0000 UTC m=+0.073690570 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  1 13:04:01 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:04:01 np0005464891 podman[304824]: 2025-10-01 17:04:01.476899302 +0000 UTC m=+0.396158104 container init d0355f85595dc479771980a0d468f9cec07c109e81e2bc37a49ebb583676942a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 13:04:01 np0005464891 podman[304824]: 2025-10-01 17:04:01.487276646 +0000 UTC m=+0.406535438 container start d0355f85595dc479771980a0d468f9cec07c109e81e2bc37a49ebb583676942a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_davinci, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:04:01 np0005464891 unruffled_davinci[304859]: 167 167
Oct  1 13:04:01 np0005464891 systemd[1]: libpod-d0355f85595dc479771980a0d468f9cec07c109e81e2bc37a49ebb583676942a.scope: Deactivated successfully.
Oct  1 13:04:01 np0005464891 podman[304824]: 2025-10-01 17:04:01.509817934 +0000 UTC m=+0.429076736 container attach d0355f85595dc479771980a0d468f9cec07c109e81e2bc37a49ebb583676942a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_davinci, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 13:04:01 np0005464891 podman[304824]: 2025-10-01 17:04:01.510852363 +0000 UTC m=+0.430111195 container died d0355f85595dc479771980a0d468f9cec07c109e81e2bc37a49ebb583676942a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_davinci, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:04:01 np0005464891 systemd[1]: var-lib-containers-storage-overlay-4a147d9d29e5833664898a354e1524862a500c397e54259d74103d12f59f91f0-merged.mount: Deactivated successfully.
Oct  1 13:04:01 np0005464891 podman[304824]: 2025-10-01 17:04:01.813918886 +0000 UTC m=+0.733177718 container remove d0355f85595dc479771980a0d468f9cec07c109e81e2bc37a49ebb583676942a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_davinci, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:04:01 np0005464891 systemd[1]: libpod-conmon-d0355f85595dc479771980a0d468f9cec07c109e81e2bc37a49ebb583676942a.scope: Deactivated successfully.
Oct  1 13:04:02 np0005464891 podman[304886]: 2025-10-01 17:04:02.015263252 +0000 UTC m=+0.038870176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:04:02 np0005464891 podman[304886]: 2025-10-01 17:04:02.147014161 +0000 UTC m=+0.170621035 container create 4cc7b7202835ab792c3c334cae27501c269d7d3cadd819f1b595dc4885b507e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_yonath, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 13:04:02 np0005464891 systemd[1]: Started libpod-conmon-4cc7b7202835ab792c3c334cae27501c269d7d3cadd819f1b595dc4885b507e6.scope.
Oct  1 13:04:02 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:04:02 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d63b960c5a68d90406348dbeabe22ea1dd18f1dff3eb1a461e43b92a5925c3db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:04:02 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d63b960c5a68d90406348dbeabe22ea1dd18f1dff3eb1a461e43b92a5925c3db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:04:02 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d63b960c5a68d90406348dbeabe22ea1dd18f1dff3eb1a461e43b92a5925c3db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:04:02 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d63b960c5a68d90406348dbeabe22ea1dd18f1dff3eb1a461e43b92a5925c3db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:04:02 np0005464891 nova_compute[259907]: 2025-10-01 17:04:02.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:02 np0005464891 podman[304886]: 2025-10-01 17:04:02.463213414 +0000 UTC m=+0.486820288 container init 4cc7b7202835ab792c3c334cae27501c269d7d3cadd819f1b595dc4885b507e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:04:02 np0005464891 podman[304886]: 2025-10-01 17:04:02.477600408 +0000 UTC m=+0.501207282 container start 4cc7b7202835ab792c3c334cae27501c269d7d3cadd819f1b595dc4885b507e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_yonath, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 13:04:02 np0005464891 podman[304886]: 2025-10-01 17:04:02.550226528 +0000 UTC m=+0.573833472 container attach 4cc7b7202835ab792c3c334cae27501c269d7d3cadd819f1b595dc4885b507e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]: {
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:    "0": [
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:        {
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "devices": [
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "/dev/loop3"
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            ],
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "lv_name": "ceph_lv0",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "lv_size": "21470642176",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "name": "ceph_lv0",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "tags": {
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.cluster_name": "ceph",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.crush_device_class": "",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.encrypted": "0",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.osd_id": "0",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.type": "block",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.vdo": "0"
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            },
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "type": "block",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "vg_name": "ceph_vg0"
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:        }
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:    ],
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:    "1": [
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:        {
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "devices": [
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "/dev/loop4"
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            ],
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "lv_name": "ceph_lv1",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "lv_size": "21470642176",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "name": "ceph_lv1",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "tags": {
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.cluster_name": "ceph",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.crush_device_class": "",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.encrypted": "0",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.osd_id": "1",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.type": "block",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.vdo": "0"
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            },
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "type": "block",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "vg_name": "ceph_vg1"
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:        }
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:    ],
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:    "2": [
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:        {
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "devices": [
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "/dev/loop5"
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            ],
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "lv_name": "ceph_lv2",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "lv_size": "21470642176",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "name": "ceph_lv2",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "tags": {
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.cluster_name": "ceph",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.crush_device_class": "",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.encrypted": "0",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.osd_id": "2",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.type": "block",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:                "ceph.vdo": "0"
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            },
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "type": "block",
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:            "vg_name": "ceph_vg2"
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:        }
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]:    ]
Oct  1 13:04:03 np0005464891 fervent_yonath[304902]: }
Oct  1 13:04:03 np0005464891 systemd[1]: libpod-4cc7b7202835ab792c3c334cae27501c269d7d3cadd819f1b595dc4885b507e6.scope: Deactivated successfully.
Oct  1 13:04:03 np0005464891 podman[304886]: 2025-10-01 17:04:03.298015875 +0000 UTC m=+1.321622749 container died 4cc7b7202835ab792c3c334cae27501c269d7d3cadd819f1b595dc4885b507e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_yonath, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:04:03 np0005464891 systemd[1]: var-lib-containers-storage-overlay-d63b960c5a68d90406348dbeabe22ea1dd18f1dff3eb1a461e43b92a5925c3db-merged.mount: Deactivated successfully.
Oct  1 13:04:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1944: 321 pgs: 321 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 40 op/s
Oct  1 13:04:03 np0005464891 podman[304886]: 2025-10-01 17:04:03.411898735 +0000 UTC m=+1.435505609 container remove 4cc7b7202835ab792c3c334cae27501c269d7d3cadd819f1b595dc4885b507e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_yonath, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:04:03 np0005464891 systemd[1]: libpod-conmon-4cc7b7202835ab792c3c334cae27501c269d7d3cadd819f1b595dc4885b507e6.scope: Deactivated successfully.
Oct  1 13:04:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e458 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:04:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:04:03 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3648931296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:04:04 np0005464891 podman[305063]: 2025-10-01 17:04:04.023567323 +0000 UTC m=+0.052187001 container create 79bd6c82116bb4ca97606c493eb2745b77277951babb8e6c3d5b050aba285989 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_morse, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:04:04 np0005464891 systemd[1]: Started libpod-conmon-79bd6c82116bb4ca97606c493eb2745b77277951babb8e6c3d5b050aba285989.scope.
Oct  1 13:04:04 np0005464891 podman[305063]: 2025-10-01 17:04:03.999567305 +0000 UTC m=+0.028187023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:04:04 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:04:04 np0005464891 podman[305063]: 2025-10-01 17:04:04.138256054 +0000 UTC m=+0.166875732 container init 79bd6c82116bb4ca97606c493eb2745b77277951babb8e6c3d5b050aba285989 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_morse, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:04:04 np0005464891 podman[305063]: 2025-10-01 17:04:04.146728387 +0000 UTC m=+0.175348045 container start 79bd6c82116bb4ca97606c493eb2745b77277951babb8e6c3d5b050aba285989 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_morse, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:04:04 np0005464891 wonderful_morse[305079]: 167 167
Oct  1 13:04:04 np0005464891 systemd[1]: libpod-79bd6c82116bb4ca97606c493eb2745b77277951babb8e6c3d5b050aba285989.scope: Deactivated successfully.
Oct  1 13:04:04 np0005464891 podman[305063]: 2025-10-01 17:04:04.173680305 +0000 UTC m=+0.202299983 container attach 79bd6c82116bb4ca97606c493eb2745b77277951babb8e6c3d5b050aba285989 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_morse, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:04:04 np0005464891 podman[305063]: 2025-10-01 17:04:04.174818637 +0000 UTC m=+0.203438295 container died 79bd6c82116bb4ca97606c493eb2745b77277951babb8e6c3d5b050aba285989 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_morse, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 13:04:04 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ae3d3fe690a2b5e1fec8a73a3316dde0f03c4991e247efe435093d5820b8575c-merged.mount: Deactivated successfully.
Oct  1 13:04:04 np0005464891 podman[305063]: 2025-10-01 17:04:04.29942162 +0000 UTC m=+0.328041278 container remove 79bd6c82116bb4ca97606c493eb2745b77277951babb8e6c3d5b050aba285989 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:04:04 np0005464891 systemd[1]: libpod-conmon-79bd6c82116bb4ca97606c493eb2745b77277951babb8e6c3d5b050aba285989.scope: Deactivated successfully.
Oct  1 13:04:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e458 do_prune osdmap full prune enabled
Oct  1 13:04:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e459 e459: 3 total, 3 up, 3 in
Oct  1 13:04:04 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e459: 3 total, 3 up, 3 in
Oct  1 13:04:04 np0005464891 podman[305103]: 2025-10-01 17:04:04.491645827 +0000 UTC m=+0.053724854 container create f9db8f7579225e7af3350722303401194c211dbc408bca606bd3ebc0585cd236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bhaskara, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:04:04 np0005464891 podman[305103]: 2025-10-01 17:04:04.463813214 +0000 UTC m=+0.025892261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:04:04 np0005464891 systemd[1]: Started libpod-conmon-f9db8f7579225e7af3350722303401194c211dbc408bca606bd3ebc0585cd236.scope.
Oct  1 13:04:04 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:04:04 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e86f9120fd97ca7e7d8f2e1589de18d77f6efbbf078c460edcbd64f7de91aed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:04:04 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e86f9120fd97ca7e7d8f2e1589de18d77f6efbbf078c460edcbd64f7de91aed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:04:04 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e86f9120fd97ca7e7d8f2e1589de18d77f6efbbf078c460edcbd64f7de91aed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:04:04 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e86f9120fd97ca7e7d8f2e1589de18d77f6efbbf078c460edcbd64f7de91aed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:04:04 np0005464891 podman[305103]: 2025-10-01 17:04:04.675813811 +0000 UTC m=+0.237892948 container init f9db8f7579225e7af3350722303401194c211dbc408bca606bd3ebc0585cd236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 13:04:04 np0005464891 podman[305103]: 2025-10-01 17:04:04.682287259 +0000 UTC m=+0.244366286 container start f9db8f7579225e7af3350722303401194c211dbc408bca606bd3ebc0585cd236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bhaskara, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:04:04 np0005464891 podman[305103]: 2025-10-01 17:04:04.7067774 +0000 UTC m=+0.268856477 container attach f9db8f7579225e7af3350722303401194c211dbc408bca606bd3ebc0585cd236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bhaskara, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:04:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1946: 321 pgs: 321 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct  1 13:04:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e459 do_prune osdmap full prune enabled
Oct  1 13:04:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e460 e460: 3 total, 3 up, 3 in
Oct  1 13:04:05 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e460: 3 total, 3 up, 3 in
Oct  1 13:04:05 np0005464891 brave_bhaskara[305119]: {
Oct  1 13:04:05 np0005464891 brave_bhaskara[305119]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 13:04:05 np0005464891 brave_bhaskara[305119]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:04:05 np0005464891 brave_bhaskara[305119]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 13:04:05 np0005464891 brave_bhaskara[305119]:        "osd_id": 2,
Oct  1 13:04:05 np0005464891 brave_bhaskara[305119]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:04:05 np0005464891 brave_bhaskara[305119]:        "type": "bluestore"
Oct  1 13:04:05 np0005464891 brave_bhaskara[305119]:    },
Oct  1 13:04:05 np0005464891 brave_bhaskara[305119]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 13:04:05 np0005464891 brave_bhaskara[305119]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:04:05 np0005464891 brave_bhaskara[305119]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 13:04:05 np0005464891 brave_bhaskara[305119]:        "osd_id": 0,
Oct  1 13:04:05 np0005464891 brave_bhaskara[305119]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:04:05 np0005464891 brave_bhaskara[305119]:        "type": "bluestore"
Oct  1 13:04:05 np0005464891 brave_bhaskara[305119]:    },
Oct  1 13:04:05 np0005464891 brave_bhaskara[305119]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 13:04:05 np0005464891 brave_bhaskara[305119]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:04:05 np0005464891 brave_bhaskara[305119]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 13:04:05 np0005464891 brave_bhaskara[305119]:        "osd_id": 1,
Oct  1 13:04:05 np0005464891 brave_bhaskara[305119]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:04:05 np0005464891 brave_bhaskara[305119]:        "type": "bluestore"
Oct  1 13:04:05 np0005464891 brave_bhaskara[305119]:    }
Oct  1 13:04:05 np0005464891 brave_bhaskara[305119]: }
Oct  1 13:04:05 np0005464891 systemd[1]: libpod-f9db8f7579225e7af3350722303401194c211dbc408bca606bd3ebc0585cd236.scope: Deactivated successfully.
Oct  1 13:04:05 np0005464891 podman[305103]: 2025-10-01 17:04:05.768799976 +0000 UTC m=+1.330879013 container died f9db8f7579225e7af3350722303401194c211dbc408bca606bd3ebc0585cd236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 13:04:05 np0005464891 systemd[1]: libpod-f9db8f7579225e7af3350722303401194c211dbc408bca606bd3ebc0585cd236.scope: Consumed 1.025s CPU time.
Oct  1 13:04:05 np0005464891 systemd[1]: var-lib-containers-storage-overlay-0e86f9120fd97ca7e7d8f2e1589de18d77f6efbbf078c460edcbd64f7de91aed-merged.mount: Deactivated successfully.
Oct  1 13:04:06 np0005464891 podman[305103]: 2025-10-01 17:04:06.33983013 +0000 UTC m=+1.901909157 container remove f9db8f7579225e7af3350722303401194c211dbc408bca606bd3ebc0585cd236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 13:04:06 np0005464891 nova_compute[259907]: 2025-10-01 17:04:06.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:06 np0005464891 systemd[1]: libpod-conmon-f9db8f7579225e7af3350722303401194c211dbc408bca606bd3ebc0585cd236.scope: Deactivated successfully.
Oct  1 13:04:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 13:04:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:04:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 13:04:06 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:04:06 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev f9243210-64a9-4a0a-8e40-ad7db7bb6c66 does not exist
Oct  1 13:04:06 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 4d6318e3-0f19-49e5-abd3-bd1fa79a5426 does not exist
Oct  1 13:04:06 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:04:06 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:04:07 np0005464891 nova_compute[259907]: 2025-10-01 17:04:07.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1948: 321 pgs: 321 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 368 KiB/s rd, 2.7 MiB/s wr, 59 op/s
Oct  1 13:04:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e460 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:04:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:08.868 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:04:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:08.869 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 13:04:08 np0005464891 nova_compute[259907]: 2025-10-01 17:04:08.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1949: 321 pgs: 321 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.9 MiB/s wr, 89 op/s
Oct  1 13:04:11 np0005464891 nova_compute[259907]: 2025-10-01 17:04:11.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1950: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.0 MiB/s wr, 55 op/s
Oct  1 13:04:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_17:04:12
Oct  1 13:04:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 13:04:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 13:04:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['vms', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', '.mgr']
Oct  1 13:04:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 13:04:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:04:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:04:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:04:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:04:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:04:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:04:12 np0005464891 nova_compute[259907]: 2025-10-01 17:04:12.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:12.466 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:04:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:12.466 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:04:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:12.466 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:04:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 13:04:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 13:04:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:04:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:04:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:04:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:04:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:04:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:04:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:04:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:04:12 np0005464891 podman[305214]: 2025-10-01 17:04:12.950519021 +0000 UTC m=+0.053725022 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  1 13:04:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1951: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 49 op/s
Oct  1 13:04:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e460 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:04:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:14.872 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:04:14 np0005464891 nova_compute[259907]: 2025-10-01 17:04:14.973 2 DEBUG oslo_concurrency.lockutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Acquiring lock "28515950-4a2a-4cf3-a0d0-7e1a9ae85a19" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:04:14 np0005464891 nova_compute[259907]: 2025-10-01 17:04:14.974 2 DEBUG oslo_concurrency.lockutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Lock "28515950-4a2a-4cf3-a0d0-7e1a9ae85a19" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:04:14 np0005464891 nova_compute[259907]: 2025-10-01 17:04:14.993 2 DEBUG nova.compute.manager [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 13:04:15 np0005464891 nova_compute[259907]: 2025-10-01 17:04:15.089 2 DEBUG oslo_concurrency.lockutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:04:15 np0005464891 nova_compute[259907]: 2025-10-01 17:04:15.089 2 DEBUG oslo_concurrency.lockutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:04:15 np0005464891 nova_compute[259907]: 2025-10-01 17:04:15.103 2 DEBUG nova.virt.hardware [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 13:04:15 np0005464891 nova_compute[259907]: 2025-10-01 17:04:15.104 2 INFO nova.compute.claims [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 13:04:15 np0005464891 nova_compute[259907]: 2025-10-01 17:04:15.240 2 DEBUG oslo_concurrency.processutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:04:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1952: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 28 op/s
Oct  1 13:04:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:04:15 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2049459542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:04:15 np0005464891 nova_compute[259907]: 2025-10-01 17:04:15.691 2 DEBUG oslo_concurrency.processutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:04:15 np0005464891 nova_compute[259907]: 2025-10-01 17:04:15.698 2 DEBUG nova.compute.provider_tree [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:04:15 np0005464891 nova_compute[259907]: 2025-10-01 17:04:15.714 2 DEBUG nova.scheduler.client.report [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:04:15 np0005464891 nova_compute[259907]: 2025-10-01 17:04:15.739 2 DEBUG oslo_concurrency.lockutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:04:15 np0005464891 nova_compute[259907]: 2025-10-01 17:04:15.740 2 DEBUG nova.compute.manager [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 13:04:15 np0005464891 nova_compute[259907]: 2025-10-01 17:04:15.786 2 DEBUG nova.compute.manager [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 13:04:15 np0005464891 nova_compute[259907]: 2025-10-01 17:04:15.786 2 DEBUG nova.network.neutron [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 13:04:15 np0005464891 nova_compute[259907]: 2025-10-01 17:04:15.826 2 INFO nova.virt.libvirt.driver [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 13:04:15 np0005464891 nova_compute[259907]: 2025-10-01 17:04:15.940 2 DEBUG nova.compute.manager [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 13:04:16 np0005464891 nova_compute[259907]: 2025-10-01 17:04:16.250 2 INFO nova.virt.block_device [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Booting with volume bea69cb7-70cb-461a-a51a-58e52ebe4712 at /dev/vdb#033[00m
Oct  1 13:04:16 np0005464891 nova_compute[259907]: 2025-10-01 17:04:16.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:16 np0005464891 nova_compute[259907]: 2025-10-01 17:04:16.450 2 DEBUG os_brick.utils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 13:04:16 np0005464891 nova_compute[259907]: 2025-10-01 17:04:16.451 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:04:16 np0005464891 nova_compute[259907]: 2025-10-01 17:04:16.464 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:04:16 np0005464891 nova_compute[259907]: 2025-10-01 17:04:16.464 741 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ddd4ed-e323-4cf6-9ffb-1976cf6b145c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:04:16 np0005464891 nova_compute[259907]: 2025-10-01 17:04:16.466 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:04:16 np0005464891 nova_compute[259907]: 2025-10-01 17:04:16.474 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:04:16 np0005464891 nova_compute[259907]: 2025-10-01 17:04:16.474 741 DEBUG oslo.privsep.daemon [-] privsep: reply[b3314cd0-deff-4fc0-b5f8-a661559569ce]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:04:16 np0005464891 nova_compute[259907]: 2025-10-01 17:04:16.476 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:04:16 np0005464891 nova_compute[259907]: 2025-10-01 17:04:16.479 2 DEBUG nova.policy [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1ccfcc45229e4430886117b04439c667', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2284b811c3654566ae3ff36625740c71', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 13:04:16 np0005464891 nova_compute[259907]: 2025-10-01 17:04:16.491 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:04:16 np0005464891 nova_compute[259907]: 2025-10-01 17:04:16.492 741 DEBUG oslo.privsep.daemon [-] privsep: reply[7cec7759-6f24-4380-99b5-9778f4350a33]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:04:16 np0005464891 nova_compute[259907]: 2025-10-01 17:04:16.493 741 DEBUG oslo.privsep.daemon [-] privsep: reply[3dd4a5ed-5c56-4a19-b8bd-76baad5b3d55]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:04:16 np0005464891 nova_compute[259907]: 2025-10-01 17:04:16.494 2 DEBUG oslo_concurrency.processutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:04:16 np0005464891 nova_compute[259907]: 2025-10-01 17:04:16.517 2 DEBUG oslo_concurrency.processutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:04:16 np0005464891 nova_compute[259907]: 2025-10-01 17:04:16.521 2 DEBUG os_brick.initiator.connectors.lightos [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 13:04:16 np0005464891 nova_compute[259907]: 2025-10-01 17:04:16.521 2 DEBUG os_brick.initiator.connectors.lightos [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 13:04:16 np0005464891 nova_compute[259907]: 2025-10-01 17:04:16.522 2 DEBUG os_brick.initiator.connectors.lightos [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 13:04:16 np0005464891 nova_compute[259907]: 2025-10-01 17:04:16.522 2 DEBUG os_brick.utils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 13:04:16 np0005464891 nova_compute[259907]: 2025-10-01 17:04:16.523 2 DEBUG nova.virt.block_device [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Updating existing volume attachment record: bf169cfe-971a-4c1c-9020-cddc6a291ed3 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 13:04:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:04:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2797582684' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:04:17 np0005464891 nova_compute[259907]: 2025-10-01 17:04:17.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1953: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct  1 13:04:17 np0005464891 nova_compute[259907]: 2025-10-01 17:04:17.554 2 DEBUG nova.compute.manager [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 13:04:17 np0005464891 nova_compute[259907]: 2025-10-01 17:04:17.555 2 DEBUG nova.virt.libvirt.driver [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 13:04:17 np0005464891 nova_compute[259907]: 2025-10-01 17:04:17.555 2 INFO nova.virt.libvirt.driver [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Creating image(s)#033[00m
Oct  1 13:04:17 np0005464891 nova_compute[259907]: 2025-10-01 17:04:17.579 2 DEBUG nova.storage.rbd_utils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] rbd image 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:04:17 np0005464891 nova_compute[259907]: 2025-10-01 17:04:17.603 2 DEBUG nova.storage.rbd_utils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] rbd image 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:04:17 np0005464891 nova_compute[259907]: 2025-10-01 17:04:17.632 2 DEBUG nova.storage.rbd_utils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] rbd image 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:04:17 np0005464891 nova_compute[259907]: 2025-10-01 17:04:17.638 2 DEBUG oslo_concurrency.processutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:04:17 np0005464891 nova_compute[259907]: 2025-10-01 17:04:17.668 2 DEBUG nova.network.neutron [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Successfully created port: 8f9e2444-b32a-47d1-aa86-3140a6eda6eb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 13:04:17 np0005464891 nova_compute[259907]: 2025-10-01 17:04:17.704 2 DEBUG oslo_concurrency.processutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:04:17 np0005464891 nova_compute[259907]: 2025-10-01 17:04:17.705 2 DEBUG oslo_concurrency.lockutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Acquiring lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:04:17 np0005464891 nova_compute[259907]: 2025-10-01 17:04:17.706 2 DEBUG oslo_concurrency.lockutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:04:17 np0005464891 nova_compute[259907]: 2025-10-01 17:04:17.706 2 DEBUG oslo_concurrency.lockutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Lock "d024f7a35ea45569f869f237e2b764bb5c5ddaaa" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:04:17 np0005464891 nova_compute[259907]: 2025-10-01 17:04:17.733 2 DEBUG nova.storage.rbd_utils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] rbd image 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:04:17 np0005464891 nova_compute[259907]: 2025-10-01 17:04:17.736 2 DEBUG oslo_concurrency.processutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:04:18 np0005464891 nova_compute[259907]: 2025-10-01 17:04:18.057 2 DEBUG oslo_concurrency.processutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:04:18 np0005464891 nova_compute[259907]: 2025-10-01 17:04:18.118 2 DEBUG nova.storage.rbd_utils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] resizing rbd image 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  1 13:04:18 np0005464891 nova_compute[259907]: 2025-10-01 17:04:18.219 2 DEBUG nova.objects.instance [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Lazy-loading 'migration_context' on Instance uuid 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 13:04:18 np0005464891 nova_compute[259907]: 2025-10-01 17:04:18.234 2 DEBUG nova.virt.libvirt.driver [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  1 13:04:18 np0005464891 nova_compute[259907]: 2025-10-01 17:04:18.234 2 DEBUG nova.virt.libvirt.driver [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Ensure instance console log exists: /var/lib/nova/instances/28515950-4a2a-4cf3-a0d0-7e1a9ae85a19/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 13:04:18 np0005464891 nova_compute[259907]: 2025-10-01 17:04:18.235 2 DEBUG oslo_concurrency.lockutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:04:18 np0005464891 nova_compute[259907]: 2025-10-01 17:04:18.235 2 DEBUG oslo_concurrency.lockutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:04:18 np0005464891 nova_compute[259907]: 2025-10-01 17:04:18.236 2 DEBUG oslo_concurrency.lockutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:04:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e460 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:04:19 np0005464891 nova_compute[259907]: 2025-10-01 17:04:19.298 2 DEBUG nova.network.neutron [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Successfully updated port: 8f9e2444-b32a-47d1-aa86-3140a6eda6eb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 13:04:19 np0005464891 nova_compute[259907]: 2025-10-01 17:04:19.321 2 DEBUG oslo_concurrency.lockutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Acquiring lock "refresh_cache-28515950-4a2a-4cf3-a0d0-7e1a9ae85a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:04:19 np0005464891 nova_compute[259907]: 2025-10-01 17:04:19.321 2 DEBUG oslo_concurrency.lockutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Acquired lock "refresh_cache-28515950-4a2a-4cf3-a0d0-7e1a9ae85a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:04:19 np0005464891 nova_compute[259907]: 2025-10-01 17:04:19.322 2 DEBUG nova.network.neutron [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 13:04:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1954: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.4 MiB/s wr, 35 op/s
Oct  1 13:04:19 np0005464891 nova_compute[259907]: 2025-10-01 17:04:19.487 2 DEBUG nova.compute.manager [req-126859d1-e7a6-48b9-8950-29c3c19f012f req-e7b666e7-4615-41cd-9904-bee3b1e72b9b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Received event network-changed-8f9e2444-b32a-47d1-aa86-3140a6eda6eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:04:19 np0005464891 nova_compute[259907]: 2025-10-01 17:04:19.487 2 DEBUG nova.compute.manager [req-126859d1-e7a6-48b9-8950-29c3c19f012f req-e7b666e7-4615-41cd-9904-bee3b1e72b9b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Refreshing instance network info cache due to event network-changed-8f9e2444-b32a-47d1-aa86-3140a6eda6eb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 13:04:19 np0005464891 nova_compute[259907]: 2025-10-01 17:04:19.488 2 DEBUG oslo_concurrency.lockutils [req-126859d1-e7a6-48b9-8950-29c3c19f012f req-e7b666e7-4615-41cd-9904-bee3b1e72b9b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-28515950-4a2a-4cf3-a0d0-7e1a9ae85a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:04:20 np0005464891 nova_compute[259907]: 2025-10-01 17:04:20.068 2 DEBUG nova.network.neutron [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.023 2 DEBUG nova.network.neutron [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Updating instance_info_cache with network_info: [{"id": "8f9e2444-b32a-47d1-aa86-3140a6eda6eb", "address": "fa:16:3e:51:7a:98", "network": {"id": "6ef0283d-7ab8-454f-a9d0-06f8650873a0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1206575136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2284b811c3654566ae3ff36625740c71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f9e2444-b3", "ovs_interfaceid": "8f9e2444-b32a-47d1-aa86-3140a6eda6eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.082 2 DEBUG oslo_concurrency.lockutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Releasing lock "refresh_cache-28515950-4a2a-4cf3-a0d0-7e1a9ae85a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.083 2 DEBUG nova.compute.manager [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Instance network_info: |[{"id": "8f9e2444-b32a-47d1-aa86-3140a6eda6eb", "address": "fa:16:3e:51:7a:98", "network": {"id": "6ef0283d-7ab8-454f-a9d0-06f8650873a0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1206575136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2284b811c3654566ae3ff36625740c71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f9e2444-b3", "ovs_interfaceid": "8f9e2444-b32a-47d1-aa86-3140a6eda6eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.084 2 DEBUG oslo_concurrency.lockutils [req-126859d1-e7a6-48b9-8950-29c3c19f012f req-e7b666e7-4615-41cd-9904-bee3b1e72b9b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-28515950-4a2a-4cf3-a0d0-7e1a9ae85a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.084 2 DEBUG nova.network.neutron [req-126859d1-e7a6-48b9-8950-29c3c19f012f req-e7b666e7-4615-41cd-9904-bee3b1e72b9b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Refreshing network info cache for port 8f9e2444-b32a-47d1-aa86-3140a6eda6eb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.089 2 DEBUG nova.virt.libvirt.driver [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Start _get_guest_xml network_info=[{"id": "8f9e2444-b32a-47d1-aa86-3140a6eda6eb", "address": "fa:16:3e:51:7a:98", "network": {"id": "6ef0283d-7ab8-454f-a9d0-06f8650873a0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1206575136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2284b811c3654566ae3ff36625740c71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f9e2444-b3", "ovs_interfaceid": "8f9e2444-b32a-47d1-aa86-3140a6eda6eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'image_id': 'f01c1e7c-fea3-4433-a44a-d71153552c78'}], 'ephemerals': [], 'block_device_mapping': [{'boot_index': -1, 'device_type': 'disk', 'guest_format': None, 'mount_device': '/dev/vdb', 'attachment_id': 'bf169cfe-971a-4c1c-9020-cddc6a291ed3', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-bea69cb7-70cb-461a-a51a-58e52ebe4712', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'bea69cb7-70cb-461a-a51a-58e52ebe4712', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '28515950-4a2a-4cf3-a0d0-7e1a9ae85a19', 'attached_at': '', 'detached_at': '', 'volume_id': 'bea69cb7-70cb-461a-a51a-58e52ebe4712', 'serial': 'bea69cb7-70cb-461a-a51a-58e52ebe4712'}, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.094 2 WARNING nova.virt.libvirt.driver [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.100 2 DEBUG nova.virt.libvirt.host [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.101 2 DEBUG nova.virt.libvirt.host [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.106 2 DEBUG nova.virt.libvirt.host [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.106 2 DEBUG nova.virt.libvirt.host [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.107 2 DEBUG nova.virt.libvirt.driver [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.107 2 DEBUG nova.virt.hardware [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-01T16:40:54Z,direct_url=<?>,disk_format='qcow2',id=f01c1e7c-fea3-4433-a44a-d71153552c78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4db015ab6cd0401aa633ac43644724a0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-01T16:40:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.108 2 DEBUG nova.virt.hardware [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.109 2 DEBUG nova.virt.hardware [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.109 2 DEBUG nova.virt.hardware [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.109 2 DEBUG nova.virt.hardware [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.110 2 DEBUG nova.virt.hardware [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.110 2 DEBUG nova.virt.hardware [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.111 2 DEBUG nova.virt.hardware [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.111 2 DEBUG nova.virt.hardware [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.111 2 DEBUG nova.virt.hardware [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.112 2 DEBUG nova.virt.hardware [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.115 2 DEBUG oslo_concurrency.processutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:04:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1955: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 16 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:04:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4172762438' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.675 2 DEBUG oslo_concurrency.processutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.699 2 DEBUG nova.storage.rbd_utils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] rbd image 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:04:21 np0005464891 nova_compute[259907]: 2025-10-01 17:04:21.702 2 DEBUG oslo_concurrency.processutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:04:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:04:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3983423244' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.162 2 DEBUG oslo_concurrency.processutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:04:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 13:04:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:04:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 13:04:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:04:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00020031072963469428 of space, bias 1.0, pg target 0.060093218890408286 quantized to 32 (current 32)
Oct  1 13:04:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:04:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.036580745445887866 of space, bias 1.0, pg target 10.97422363376636 quantized to 32 (current 32)
Oct  1 13:04:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:04:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0003461242226671876 of space, bias 1.0, pg target 0.10037602457348441 quantized to 32 (current 32)
Oct  1 13:04:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:04:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.19319111398710687 quantized to 32 (current 32)
Oct  1 13:04:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:04:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0005901217685745913 quantized to 16 (current 32)
Oct  1 13:04:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:04:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:04:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:04:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.376522107182392e-05 quantized to 32 (current 32)
Oct  1 13:04:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:04:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006270043791105033 quantized to 32 (current 32)
Oct  1 13:04:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:04:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:04:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:04:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00014753044214364783 quantized to 32 (current 32)
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.305 2 DEBUG nova.virt.libvirt.vif [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T17:04:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-1935298639',display_name='tempest-instance-1935298639',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-1935298639',id=26,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHBDnI4DWuPr6YNdRMCa7jM1n8j3sQW4FlOn6l80s7J3IA5CfH3TlumjsYKyYsJ21Io+HjIR3XWT9TW8IabN4G08YyQTBo4qzR9D815pN4Lso+sKU7PWb9e+wC5NQeqdMA==',key_name='tempest-keypair-816340729',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2284b811c3654566ae3ff36625740c71',ramdisk_id='',reservation_id='r-oe75bchj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_mi
n_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-2085916881',owner_user_name='tempest-VolumesBackupsTest-2085916881-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T17:04:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1ccfcc45229e4430886117b04439c667',uuid=28515950-4a2a-4cf3-a0d0-7e1a9ae85a19,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f9e2444-b32a-47d1-aa86-3140a6eda6eb", "address": "fa:16:3e:51:7a:98", "network": {"id": "6ef0283d-7ab8-454f-a9d0-06f8650873a0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1206575136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2284b811c3654566ae3ff36625740c71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f9e2444-b3", "ovs_interfaceid": "8f9e2444-b32a-47d1-aa86-3140a6eda6eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.306 2 DEBUG nova.network.os_vif_util [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Converting VIF {"id": "8f9e2444-b32a-47d1-aa86-3140a6eda6eb", "address": "fa:16:3e:51:7a:98", "network": {"id": "6ef0283d-7ab8-454f-a9d0-06f8650873a0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1206575136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2284b811c3654566ae3ff36625740c71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f9e2444-b3", "ovs_interfaceid": "8f9e2444-b32a-47d1-aa86-3140a6eda6eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.307 2 DEBUG nova.network.os_vif_util [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:7a:98,bridge_name='br-int',has_traffic_filtering=True,id=8f9e2444-b32a-47d1-aa86-3140a6eda6eb,network=Network(6ef0283d-7ab8-454f-a9d0-06f8650873a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f9e2444-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.308 2 DEBUG nova.objects.instance [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Lazy-loading 'pci_devices' on Instance uuid 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.439 2 DEBUG nova.virt.libvirt.driver [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] End _get_guest_xml xml=<domain type="kvm">
Oct  1 13:04:22 np0005464891 nova_compute[259907]:  <uuid>28515950-4a2a-4cf3-a0d0-7e1a9ae85a19</uuid>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:  <name>instance-0000001a</name>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <nova:name>tempest-instance-1935298639</nova:name>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 17:04:21</nova:creationTime>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 13:04:22 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:        <nova:user uuid="1ccfcc45229e4430886117b04439c667">tempest-VolumesBackupsTest-2085916881-project-member</nova:user>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:        <nova:project uuid="2284b811c3654566ae3ff36625740c71">tempest-VolumesBackupsTest-2085916881</nova:project>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <nova:root type="image" uuid="f01c1e7c-fea3-4433-a44a-d71153552c78"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:        <nova:port uuid="8f9e2444-b32a-47d1-aa86-3140a6eda6eb">
Oct  1 13:04:22 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <system>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <entry name="serial">28515950-4a2a-4cf3-a0d0-7e1a9ae85a19</entry>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <entry name="uuid">28515950-4a2a-4cf3-a0d0-7e1a9ae85a19</entry>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    </system>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:  <os>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:  </os>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:  <features>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:  </features>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:  </clock>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:  <devices>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/28515950-4a2a-4cf3-a0d0-7e1a9ae85a19_disk">
Oct  1 13:04:22 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      </source>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 13:04:22 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      </auth>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    </disk>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/28515950-4a2a-4cf3-a0d0-7e1a9ae85a19_disk.config">
Oct  1 13:04:22 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      </source>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 13:04:22 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      </auth>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    </disk>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="volumes/volume-bea69cb7-70cb-461a-a51a-58e52ebe4712">
Oct  1 13:04:22 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      </source>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 13:04:22 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      </auth>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <target dev="vdb" bus="virtio"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <serial>bea69cb7-70cb-461a-a51a-58e52ebe4712</serial>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    </disk>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:51:7a:98"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <target dev="tap8f9e2444-b3"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    </interface>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/28515950-4a2a-4cf3-a0d0-7e1a9ae85a19/console.log" append="off"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    </serial>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <video>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    </video>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    </rng>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 13:04:22 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 13:04:22 np0005464891 nova_compute[259907]:  </devices>
Oct  1 13:04:22 np0005464891 nova_compute[259907]: </domain>
Oct  1 13:04:22 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.441 2 DEBUG nova.compute.manager [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Preparing to wait for external event network-vif-plugged-8f9e2444-b32a-47d1-aa86-3140a6eda6eb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.442 2 DEBUG oslo_concurrency.lockutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Acquiring lock "28515950-4a2a-4cf3-a0d0-7e1a9ae85a19-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.442 2 DEBUG oslo_concurrency.lockutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Lock "28515950-4a2a-4cf3-a0d0-7e1a9ae85a19-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.442 2 DEBUG oslo_concurrency.lockutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Lock "28515950-4a2a-4cf3-a0d0-7e1a9ae85a19-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.443 2 DEBUG nova.virt.libvirt.vif [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T17:04:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-1935298639',display_name='tempest-instance-1935298639',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-1935298639',id=26,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHBDnI4DWuPr6YNdRMCa7jM1n8j3sQW4FlOn6l80s7J3IA5CfH3TlumjsYKyYsJ21Io+HjIR3XWT9TW8IabN4G08YyQTBo4qzR9D815pN4Lso+sKU7PWb9e+wC5NQeqdMA==',key_name='tempest-keypair-816340729',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2284b811c3654566ae3ff36625740c71',ramdisk_id='',reservation_id='r-oe75bchj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio
',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-2085916881',owner_user_name='tempest-VolumesBackupsTest-2085916881-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T17:04:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1ccfcc45229e4430886117b04439c667',uuid=28515950-4a2a-4cf3-a0d0-7e1a9ae85a19,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f9e2444-b32a-47d1-aa86-3140a6eda6eb", "address": "fa:16:3e:51:7a:98", "network": {"id": "6ef0283d-7ab8-454f-a9d0-06f8650873a0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1206575136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2284b811c3654566ae3ff36625740c71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f9e2444-b3", "ovs_interfaceid": "8f9e2444-b32a-47d1-aa86-3140a6eda6eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.444 2 DEBUG nova.network.os_vif_util [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Converting VIF {"id": "8f9e2444-b32a-47d1-aa86-3140a6eda6eb", "address": "fa:16:3e:51:7a:98", "network": {"id": "6ef0283d-7ab8-454f-a9d0-06f8650873a0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1206575136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2284b811c3654566ae3ff36625740c71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f9e2444-b3", "ovs_interfaceid": "8f9e2444-b32a-47d1-aa86-3140a6eda6eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.444 2 DEBUG nova.network.os_vif_util [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:7a:98,bridge_name='br-int',has_traffic_filtering=True,id=8f9e2444-b32a-47d1-aa86-3140a6eda6eb,network=Network(6ef0283d-7ab8-454f-a9d0-06f8650873a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f9e2444-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.445 2 DEBUG os_vif [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:7a:98,bridge_name='br-int',has_traffic_filtering=True,id=8f9e2444-b32a-47d1-aa86-3140a6eda6eb,network=Network(6ef0283d-7ab8-454f-a9d0-06f8650873a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f9e2444-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.446 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.446 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.450 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f9e2444-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.451 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8f9e2444-b3, col_values=(('external_ids', {'iface-id': '8f9e2444-b32a-47d1-aa86-3140a6eda6eb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:51:7a:98', 'vm-uuid': '28515950-4a2a-4cf3-a0d0-7e1a9ae85a19'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 13:04:22 np0005464891 NetworkManager[44940]: <info>  [1759338262.4891] manager: (tap8f9e2444-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.490 2 DEBUG nova.network.neutron [req-126859d1-e7a6-48b9-8950-29c3c19f012f req-e7b666e7-4615-41cd-9904-bee3b1e72b9b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Updated VIF entry in instance network info cache for port 8f9e2444-b32a-47d1-aa86-3140a6eda6eb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.514 2 DEBUG nova.network.neutron [req-126859d1-e7a6-48b9-8950-29c3c19f012f req-e7b666e7-4615-41cd-9904-bee3b1e72b9b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Updating instance_info_cache with network_info: [{"id": "8f9e2444-b32a-47d1-aa86-3140a6eda6eb", "address": "fa:16:3e:51:7a:98", "network": {"id": "6ef0283d-7ab8-454f-a9d0-06f8650873a0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1206575136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2284b811c3654566ae3ff36625740c71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f9e2444-b3", "ovs_interfaceid": "8f9e2444-b32a-47d1-aa86-3140a6eda6eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.517 2 INFO os_vif [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:7a:98,bridge_name='br-int',has_traffic_filtering=True,id=8f9e2444-b32a-47d1-aa86-3140a6eda6eb,network=Network(6ef0283d-7ab8-454f-a9d0-06f8650873a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f9e2444-b3')#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.592 2 DEBUG oslo_concurrency.lockutils [req-126859d1-e7a6-48b9-8950-29c3c19f012f req-e7b666e7-4615-41cd-9904-bee3b1e72b9b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-28515950-4a2a-4cf3-a0d0-7e1a9ae85a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.630 2 DEBUG nova.virt.libvirt.driver [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.631 2 DEBUG nova.virt.libvirt.driver [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.631 2 DEBUG nova.virt.libvirt.driver [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.631 2 DEBUG nova.virt.libvirt.driver [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] No VIF found with MAC fa:16:3e:51:7a:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.632 2 INFO nova.virt.libvirt.driver [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Using config drive#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.715 2 DEBUG nova.storage.rbd_utils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] rbd image 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.951 2 INFO nova.virt.libvirt.driver [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Creating config drive at /var/lib/nova/instances/28515950-4a2a-4cf3-a0d0-7e1a9ae85a19/disk.config#033[00m
Oct  1 13:04:22 np0005464891 nova_compute[259907]: 2025-10-01 17:04:22.957 2 DEBUG oslo_concurrency.processutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/28515950-4a2a-4cf3-a0d0-7e1a9ae85a19/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn0rf2w4d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:04:23 np0005464891 nova_compute[259907]: 2025-10-01 17:04:23.093 2 DEBUG oslo_concurrency.processutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/28515950-4a2a-4cf3-a0d0-7e1a9ae85a19/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn0rf2w4d" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:04:23 np0005464891 nova_compute[259907]: 2025-10-01 17:04:23.122 2 DEBUG nova.storage.rbd_utils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] rbd image 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:04:23 np0005464891 nova_compute[259907]: 2025-10-01 17:04:23.126 2 DEBUG oslo_concurrency.processutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/28515950-4a2a-4cf3-a0d0-7e1a9ae85a19/disk.config 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:04:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1956: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  1 13:04:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e460 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:04:24 np0005464891 nova_compute[259907]: 2025-10-01 17:04:24.570 2 DEBUG oslo_concurrency.processutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/28515950-4a2a-4cf3-a0d0-7e1a9ae85a19/disk.config 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:04:24 np0005464891 nova_compute[259907]: 2025-10-01 17:04:24.570 2 INFO nova.virt.libvirt.driver [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Deleting local config drive /var/lib/nova/instances/28515950-4a2a-4cf3-a0d0-7e1a9ae85a19/disk.config because it was imported into RBD.#033[00m
Oct  1 13:04:24 np0005464891 kernel: tap8f9e2444-b3: entered promiscuous mode
Oct  1 13:04:24 np0005464891 NetworkManager[44940]: <info>  [1759338264.6309] manager: (tap8f9e2444-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/130)
Oct  1 13:04:24 np0005464891 nova_compute[259907]: 2025-10-01 17:04:24.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:24 np0005464891 ovn_controller[152409]: 2025-10-01T17:04:24Z|00241|binding|INFO|Claiming lport 8f9e2444-b32a-47d1-aa86-3140a6eda6eb for this chassis.
Oct  1 13:04:24 np0005464891 ovn_controller[152409]: 2025-10-01T17:04:24Z|00242|binding|INFO|8f9e2444-b32a-47d1-aa86-3140a6eda6eb: Claiming fa:16:3e:51:7a:98 10.100.0.14
Oct  1 13:04:24 np0005464891 nova_compute[259907]: 2025-10-01 17:04:24.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:24 np0005464891 nova_compute[259907]: 2025-10-01 17:04:24.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:24 np0005464891 nova_compute[259907]: 2025-10-01 17:04:24.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:24 np0005464891 ovn_controller[152409]: 2025-10-01T17:04:24Z|00243|binding|INFO|Setting lport 8f9e2444-b32a-47d1-aa86-3140a6eda6eb ovn-installed in OVS
Oct  1 13:04:24 np0005464891 nova_compute[259907]: 2025-10-01 17:04:24.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:24 np0005464891 systemd-machined[214891]: New machine qemu-26-instance-0000001a.
Oct  1 13:04:24 np0005464891 systemd[1]: Started Virtual Machine qemu-26-instance-0000001a.
Oct  1 13:04:24 np0005464891 systemd-udevd[305574]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 13:04:24 np0005464891 NetworkManager[44940]: <info>  [1759338264.7642] device (tap8f9e2444-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 13:04:24 np0005464891 NetworkManager[44940]: <info>  [1759338264.7652] device (tap8f9e2444-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 13:04:24 np0005464891 ovn_controller[152409]: 2025-10-01T17:04:24Z|00244|binding|INFO|Setting lport 8f9e2444-b32a-47d1-aa86-3140a6eda6eb up in Southbound
Oct  1 13:04:24 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:24.793 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:7a:98 10.100.0.14'], port_security=['fa:16:3e:51:7a:98 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '28515950-4a2a-4cf3-a0d0-7e1a9ae85a19', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ef0283d-7ab8-454f-a9d0-06f8650873a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2284b811c3654566ae3ff36625740c71', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7851ac58-e590-45d8-94c0-c842686aaf5e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5175c4e1-c8e2-41c8-b21b-c935cbeb71cb, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=8f9e2444-b32a-47d1-aa86-3140a6eda6eb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:04:24 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:24.794 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 8f9e2444-b32a-47d1-aa86-3140a6eda6eb in datapath 6ef0283d-7ab8-454f-a9d0-06f8650873a0 bound to our chassis#033[00m
Oct  1 13:04:24 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:24.795 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ef0283d-7ab8-454f-a9d0-06f8650873a0#033[00m
Oct  1 13:04:24 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:24.809 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[057fd3a5-16fc-41e5-bc6f-115896221999]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:04:24 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:24.810 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6ef0283d-71 in ovnmeta-6ef0283d-7ab8-454f-a9d0-06f8650873a0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 13:04:24 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:24.812 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6ef0283d-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 13:04:24 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:24.812 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[53be6e82-1c71-4475-8b16-6b06629cdfff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:04:24 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:24.813 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[2b429c2e-8270-4071-9756-6b4a66ca6e5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:04:24 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:24.825 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[b6a8066c-9a6e-49a9-b2fb-7b5eb3216b0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:04:24 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:24.874 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[ce6e328e-0318-4e98-a48a-c37250ccf4b2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:04:24 np0005464891 podman[305563]: 2025-10-01 17:04:24.886081165 +0000 UTC m=+0.162997587 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  1 13:04:24 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:24.904 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[5fcf2499-1e61-412a-896a-a278bbd9ac4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:04:24 np0005464891 systemd-udevd[305589]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 13:04:24 np0005464891 NetworkManager[44940]: <info>  [1759338264.9104] manager: (tap6ef0283d-70): new Veth device (/org/freedesktop/NetworkManager/Devices/131)
Oct  1 13:04:24 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:24.910 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[b5f66895-ba49-4d2f-9a65-471595da26a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:04:24 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:24.948 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7d80ea-78b3-404e-8b39-97b36760866b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:04:24 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:24.951 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[2424bdb6-1faa-4e83-8ff3-1e76f5f08047]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:04:24 np0005464891 NetworkManager[44940]: <info>  [1759338264.9752] device (tap6ef0283d-70): carrier: link connected
Oct  1 13:04:24 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:24.980 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[a229d74b-700a-4442-85b7-dd73651b909d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:04:24 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:24.997 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[0d86e715-e088-403e-84f8-4958ab767124]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ef0283d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:7d:b0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521354, 'reachable_time': 44256, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305623, 'error': None, 'target': 'ovnmeta-6ef0283d-7ab8-454f-a9d0-06f8650873a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:25.011 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[625b91e0-b324-4da3-a288-6c3b6cd92918]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb5:7db0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 521354, 'tstamp': 521354}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305624, 'error': None, 'target': 'ovnmeta-6ef0283d-7ab8-454f-a9d0-06f8650873a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:25.028 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[0c722766-e22a-4f98-a996-fb967c7d5686]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ef0283d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:7d:b0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521354, 'reachable_time': 44256, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305625, 'error': None, 'target': 'ovnmeta-6ef0283d-7ab8-454f-a9d0-06f8650873a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:25.059 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8b591a-2ca6-4808-8c15-0002333e5d36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:25.114 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[2ba14714-23e0-471c-a008-3653a0e791ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:25.115 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ef0283d-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:25.115 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:25.116 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ef0283d-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:04:25 np0005464891 kernel: tap6ef0283d-70: entered promiscuous mode
Oct  1 13:04:25 np0005464891 nova_compute[259907]: 2025-10-01 17:04:25.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:25 np0005464891 NetworkManager[44940]: <info>  [1759338265.1195] manager: (tap6ef0283d-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Oct  1 13:04:25 np0005464891 nova_compute[259907]: 2025-10-01 17:04:25.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:25.120 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ef0283d-70, col_values=(('external_ids', {'iface-id': 'b1d96ef1-0f7f-4bd1-90c1-7d584ed08c52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:04:25 np0005464891 ovn_controller[152409]: 2025-10-01T17:04:25Z|00245|binding|INFO|Releasing lport b1d96ef1-0f7f-4bd1-90c1-7d584ed08c52 from this chassis (sb_readonly=0)
Oct  1 13:04:25 np0005464891 nova_compute[259907]: 2025-10-01 17:04:25.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:25.134 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6ef0283d-7ab8-454f-a9d0-06f8650873a0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6ef0283d-7ab8-454f-a9d0-06f8650873a0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:25.135 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[753bd43e-3bf5-48c8-8176-9980e3933f2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:25.135 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-6ef0283d-7ab8-454f-a9d0-06f8650873a0
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/6ef0283d-7ab8-454f-a9d0-06f8650873a0.pid.haproxy
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID 6ef0283d-7ab8-454f-a9d0-06f8650873a0
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 13:04:25 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:04:25.136 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6ef0283d-7ab8-454f-a9d0-06f8650873a0', 'env', 'PROCESS_TAG=haproxy-6ef0283d-7ab8-454f-a9d0-06f8650873a0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6ef0283d-7ab8-454f-a9d0-06f8650873a0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 13:04:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1957: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  1 13:04:25 np0005464891 podman[305693]: 2025-10-01 17:04:25.445534011 +0000 UTC m=+0.023222257 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 13:04:25 np0005464891 podman[305693]: 2025-10-01 17:04:25.701634968 +0000 UTC m=+0.279323204 container create ac3f40946ae1a98ae81daec6ade43ed2496fc62398832178997088bbbc46bb5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ef0283d-7ab8-454f-a9d0-06f8650873a0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct  1 13:04:25 np0005464891 nova_compute[259907]: 2025-10-01 17:04:25.703 2 DEBUG nova.compute.manager [req-624350f7-328c-4ca2-832a-0817308221ae req-31142d25-4952-4f59-b42a-d97774ef3d30 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Received event network-vif-plugged-8f9e2444-b32a-47d1-aa86-3140a6eda6eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:04:25 np0005464891 nova_compute[259907]: 2025-10-01 17:04:25.706 2 DEBUG oslo_concurrency.lockutils [req-624350f7-328c-4ca2-832a-0817308221ae req-31142d25-4952-4f59-b42a-d97774ef3d30 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "28515950-4a2a-4cf3-a0d0-7e1a9ae85a19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:04:25 np0005464891 nova_compute[259907]: 2025-10-01 17:04:25.706 2 DEBUG oslo_concurrency.lockutils [req-624350f7-328c-4ca2-832a-0817308221ae req-31142d25-4952-4f59-b42a-d97774ef3d30 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "28515950-4a2a-4cf3-a0d0-7e1a9ae85a19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:04:25 np0005464891 nova_compute[259907]: 2025-10-01 17:04:25.706 2 DEBUG oslo_concurrency.lockutils [req-624350f7-328c-4ca2-832a-0817308221ae req-31142d25-4952-4f59-b42a-d97774ef3d30 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "28515950-4a2a-4cf3-a0d0-7e1a9ae85a19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:04:25 np0005464891 nova_compute[259907]: 2025-10-01 17:04:25.707 2 DEBUG nova.compute.manager [req-624350f7-328c-4ca2-832a-0817308221ae req-31142d25-4952-4f59-b42a-d97774ef3d30 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Processing event network-vif-plugged-8f9e2444-b32a-47d1-aa86-3140a6eda6eb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 13:04:25 np0005464891 systemd[1]: Started libpod-conmon-ac3f40946ae1a98ae81daec6ade43ed2496fc62398832178997088bbbc46bb5e.scope.
Oct  1 13:04:25 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:04:25 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8086f095ae89bfc574df7c2b250f93be8305fb442228ffd47ee8bf5d2f886919/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 13:04:25 np0005464891 podman[305693]: 2025-10-01 17:04:25.994180433 +0000 UTC m=+0.571868689 container init ac3f40946ae1a98ae81daec6ade43ed2496fc62398832178997088bbbc46bb5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ef0283d-7ab8-454f-a9d0-06f8650873a0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 13:04:26 np0005464891 podman[305693]: 2025-10-01 17:04:26.002391518 +0000 UTC m=+0.580079744 container start ac3f40946ae1a98ae81daec6ade43ed2496fc62398832178997088bbbc46bb5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ef0283d-7ab8-454f-a9d0-06f8650873a0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  1 13:04:26 np0005464891 neutron-haproxy-ovnmeta-6ef0283d-7ab8-454f-a9d0-06f8650873a0[305733]: [NOTICE]   (305737) : New worker (305739) forked
Oct  1 13:04:26 np0005464891 neutron-haproxy-ovnmeta-6ef0283d-7ab8-454f-a9d0-06f8650873a0[305733]: [NOTICE]   (305737) : Loading success.
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.107 2 DEBUG nova.compute.manager [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.108 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759338266.1079311, 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.108 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] VM Started (Lifecycle Event)#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.111 2 DEBUG nova.virt.libvirt.driver [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.114 2 INFO nova.virt.libvirt.driver [-] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Instance spawned successfully.#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.114 2 DEBUG nova.virt.libvirt.driver [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.192 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.196 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.205 2 DEBUG nova.virt.libvirt.driver [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.205 2 DEBUG nova.virt.libvirt.driver [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.206 2 DEBUG nova.virt.libvirt.driver [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.206 2 DEBUG nova.virt.libvirt.driver [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.206 2 DEBUG nova.virt.libvirt.driver [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.207 2 DEBUG nova.virt.libvirt.driver [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.318 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.319 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759338266.1086457, 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.319 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] VM Paused (Lifecycle Event)#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.576 2 INFO nova.compute.manager [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Took 9.02 seconds to spawn the instance on the hypervisor.#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.576 2 DEBUG nova.compute.manager [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.615 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.620 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759338266.110168, 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.620 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] VM Resumed (Lifecycle Event)#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.720 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.724 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.848 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.867 2 INFO nova.compute.manager [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Took 11.81 seconds to build instance.#033[00m
Oct  1 13:04:26 np0005464891 nova_compute[259907]: 2025-10-01 17:04:26.970 2 DEBUG oslo_concurrency.lockutils [None req-c3f955a1-e139-49be-a50e-6bf4d519cf29 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Lock "28515950-4a2a-4cf3-a0d0-7e1a9ae85a19" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:04:27 np0005464891 nova_compute[259907]: 2025-10-01 17:04:27.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1958: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  1 13:04:27 np0005464891 nova_compute[259907]: 2025-10-01 17:04:27.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:27 np0005464891 nova_compute[259907]: 2025-10-01 17:04:27.787 2 DEBUG nova.compute.manager [req-db12da20-dad8-439a-b675-7e6684d0abbf req-60d00530-5521-437d-907d-bac074f7dce0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Received event network-vif-plugged-8f9e2444-b32a-47d1-aa86-3140a6eda6eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:04:27 np0005464891 nova_compute[259907]: 2025-10-01 17:04:27.788 2 DEBUG oslo_concurrency.lockutils [req-db12da20-dad8-439a-b675-7e6684d0abbf req-60d00530-5521-437d-907d-bac074f7dce0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "28515950-4a2a-4cf3-a0d0-7e1a9ae85a19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:04:27 np0005464891 nova_compute[259907]: 2025-10-01 17:04:27.788 2 DEBUG oslo_concurrency.lockutils [req-db12da20-dad8-439a-b675-7e6684d0abbf req-60d00530-5521-437d-907d-bac074f7dce0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "28515950-4a2a-4cf3-a0d0-7e1a9ae85a19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:04:27 np0005464891 nova_compute[259907]: 2025-10-01 17:04:27.788 2 DEBUG oslo_concurrency.lockutils [req-db12da20-dad8-439a-b675-7e6684d0abbf req-60d00530-5521-437d-907d-bac074f7dce0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "28515950-4a2a-4cf3-a0d0-7e1a9ae85a19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:04:27 np0005464891 nova_compute[259907]: 2025-10-01 17:04:27.788 2 DEBUG nova.compute.manager [req-db12da20-dad8-439a-b675-7e6684d0abbf req-60d00530-5521-437d-907d-bac074f7dce0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] No waiting events found dispatching network-vif-plugged-8f9e2444-b32a-47d1-aa86-3140a6eda6eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 13:04:27 np0005464891 nova_compute[259907]: 2025-10-01 17:04:27.789 2 WARNING nova.compute.manager [req-db12da20-dad8-439a-b675-7e6684d0abbf req-60d00530-5521-437d-907d-bac074f7dce0 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Received unexpected event network-vif-plugged-8f9e2444-b32a-47d1-aa86-3140a6eda6eb for instance with vm_state active and task_state None.#033[00m
Oct  1 13:04:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e460 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:04:28 np0005464891 podman[305748]: 2025-10-01 17:04:28.959912324 +0000 UTC m=+0.072148298 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  1 13:04:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1959: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct  1 13:04:30 np0005464891 ovn_controller[152409]: 2025-10-01T17:04:30Z|00246|binding|INFO|Releasing lport b1d96ef1-0f7f-4bd1-90c1-7d584ed08c52 from this chassis (sb_readonly=0)
Oct  1 13:04:30 np0005464891 nova_compute[259907]: 2025-10-01 17:04:30.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:30 np0005464891 NetworkManager[44940]: <info>  [1759338270.2033] manager: (patch-br-int-to-provnet-ee30e212-e482-494e-8716-bb3c2ef00bd5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Oct  1 13:04:30 np0005464891 NetworkManager[44940]: <info>  [1759338270.2049] manager: (patch-provnet-ee30e212-e482-494e-8716-bb3c2ef00bd5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Oct  1 13:04:30 np0005464891 ovn_controller[152409]: 2025-10-01T17:04:30Z|00247|binding|INFO|Releasing lport b1d96ef1-0f7f-4bd1-90c1-7d584ed08c52 from this chassis (sb_readonly=0)
Oct  1 13:04:30 np0005464891 nova_compute[259907]: 2025-10-01 17:04:30.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:30 np0005464891 nova_compute[259907]: 2025-10-01 17:04:30.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:30 np0005464891 nova_compute[259907]: 2025-10-01 17:04:30.437 2 DEBUG nova.compute.manager [req-4a34fd2c-65ed-4a56-800c-913e2bf24e39 req-e1fd5e77-5c63-47ce-9af2-f36fc7b7edb6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Received event network-changed-8f9e2444-b32a-47d1-aa86-3140a6eda6eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:04:30 np0005464891 nova_compute[259907]: 2025-10-01 17:04:30.437 2 DEBUG nova.compute.manager [req-4a34fd2c-65ed-4a56-800c-913e2bf24e39 req-e1fd5e77-5c63-47ce-9af2-f36fc7b7edb6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Refreshing instance network info cache due to event network-changed-8f9e2444-b32a-47d1-aa86-3140a6eda6eb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 13:04:30 np0005464891 nova_compute[259907]: 2025-10-01 17:04:30.438 2 DEBUG oslo_concurrency.lockutils [req-4a34fd2c-65ed-4a56-800c-913e2bf24e39 req-e1fd5e77-5c63-47ce-9af2-f36fc7b7edb6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-28515950-4a2a-4cf3-a0d0-7e1a9ae85a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:04:30 np0005464891 nova_compute[259907]: 2025-10-01 17:04:30.438 2 DEBUG oslo_concurrency.lockutils [req-4a34fd2c-65ed-4a56-800c-913e2bf24e39 req-e1fd5e77-5c63-47ce-9af2-f36fc7b7edb6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-28515950-4a2a-4cf3-a0d0-7e1a9ae85a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:04:30 np0005464891 nova_compute[259907]: 2025-10-01 17:04:30.438 2 DEBUG nova.network.neutron [req-4a34fd2c-65ed-4a56-800c-913e2bf24e39 req-e1fd5e77-5c63-47ce-9af2-f36fc7b7edb6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Refreshing network info cache for port 8f9e2444-b32a-47d1-aa86-3140a6eda6eb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 13:04:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1960: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 91 op/s
Oct  1 13:04:31 np0005464891 podman[305769]: 2025-10-01 17:04:31.976110689 +0000 UTC m=+0.086128321 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  1 13:04:32 np0005464891 nova_compute[259907]: 2025-10-01 17:04:32.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:32 np0005464891 nova_compute[259907]: 2025-10-01 17:04:32.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:32 np0005464891 nova_compute[259907]: 2025-10-01 17:04:32.477 2 DEBUG nova.network.neutron [req-4a34fd2c-65ed-4a56-800c-913e2bf24e39 req-e1fd5e77-5c63-47ce-9af2-f36fc7b7edb6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Updated VIF entry in instance network info cache for port 8f9e2444-b32a-47d1-aa86-3140a6eda6eb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 13:04:32 np0005464891 nova_compute[259907]: 2025-10-01 17:04:32.478 2 DEBUG nova.network.neutron [req-4a34fd2c-65ed-4a56-800c-913e2bf24e39 req-e1fd5e77-5c63-47ce-9af2-f36fc7b7edb6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Updating instance_info_cache with network_info: [{"id": "8f9e2444-b32a-47d1-aa86-3140a6eda6eb", "address": "fa:16:3e:51:7a:98", "network": {"id": "6ef0283d-7ab8-454f-a9d0-06f8650873a0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1206575136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2284b811c3654566ae3ff36625740c71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f9e2444-b3", "ovs_interfaceid": "8f9e2444-b32a-47d1-aa86-3140a6eda6eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:04:32 np0005464891 nova_compute[259907]: 2025-10-01 17:04:32.670 2 DEBUG oslo_concurrency.lockutils [req-4a34fd2c-65ed-4a56-800c-913e2bf24e39 req-e1fd5e77-5c63-47ce-9af2-f36fc7b7edb6 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-28515950-4a2a-4cf3-a0d0-7e1a9ae85a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:04:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1961: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 1.9 MiB/s rd, 778 KiB/s wr, 80 op/s
Oct  1 13:04:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e460 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:04:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1962: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Oct  1 13:04:36 np0005464891 nova_compute[259907]: 2025-10-01 17:04:36.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:04:37 np0005464891 nova_compute[259907]: 2025-10-01 17:04:37.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1963: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Oct  1 13:04:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:04:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2306582687' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:04:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:04:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2306582687' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:04:37 np0005464891 nova_compute[259907]: 2025-10-01 17:04:37.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:38 np0005464891 nova_compute[259907]: 2025-10-01 17:04:38.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:04:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e460 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:04:39 np0005464891 nova_compute[259907]: 2025-10-01 17:04:39.101 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:04:39 np0005464891 nova_compute[259907]: 2025-10-01 17:04:39.102 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:04:39 np0005464891 nova_compute[259907]: 2025-10-01 17:04:39.103 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:04:39 np0005464891 nova_compute[259907]: 2025-10-01 17:04:39.103 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 13:04:39 np0005464891 nova_compute[259907]: 2025-10-01 17:04:39.104 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:04:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1964: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 89 op/s
Oct  1 13:04:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:04:39 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/68917627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:04:39 np0005464891 nova_compute[259907]: 2025-10-01 17:04:39.612 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:04:39 np0005464891 nova_compute[259907]: 2025-10-01 17:04:39.737 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 13:04:39 np0005464891 nova_compute[259907]: 2025-10-01 17:04:39.738 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 13:04:39 np0005464891 nova_compute[259907]: 2025-10-01 17:04:39.738 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 13:04:39 np0005464891 nova_compute[259907]: 2025-10-01 17:04:39.902 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 13:04:39 np0005464891 nova_compute[259907]: 2025-10-01 17:04:39.903 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4218MB free_disk=59.96735763549805GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 13:04:39 np0005464891 nova_compute[259907]: 2025-10-01 17:04:39.903 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:04:39 np0005464891 nova_compute[259907]: 2025-10-01 17:04:39.904 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:04:40 np0005464891 nova_compute[259907]: 2025-10-01 17:04:40.057 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 13:04:40 np0005464891 nova_compute[259907]: 2025-10-01 17:04:40.058 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 13:04:40 np0005464891 nova_compute[259907]: 2025-10-01 17:04:40.058 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 13:04:40 np0005464891 nova_compute[259907]: 2025-10-01 17:04:40.111 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:04:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:04:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/983818179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:04:40 np0005464891 nova_compute[259907]: 2025-10-01 17:04:40.572 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:04:40 np0005464891 nova_compute[259907]: 2025-10-01 17:04:40.578 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:04:40 np0005464891 nova_compute[259907]: 2025-10-01 17:04:40.631 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:04:40 np0005464891 ovn_controller[152409]: 2025-10-01T17:04:40Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:51:7a:98 10.100.0.14
Oct  1 13:04:40 np0005464891 ovn_controller[152409]: 2025-10-01T17:04:40Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:51:7a:98 10.100.0.14
Oct  1 13:04:40 np0005464891 nova_compute[259907]: 2025-10-01 17:04:40.952 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 13:04:40 np0005464891 nova_compute[259907]: 2025-10-01 17:04:40.953 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:04:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1965: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 494 KiB/s rd, 488 KiB/s wr, 33 op/s
Oct  1 13:04:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:04:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:04:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:04:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:04:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:04:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:04:42 np0005464891 nova_compute[259907]: 2025-10-01 17:04:42.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:42 np0005464891 nova_compute[259907]: 2025-10-01 17:04:42.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:42 np0005464891 nova_compute[259907]: 2025-10-01 17:04:42.949 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:04:42 np0005464891 nova_compute[259907]: 2025-10-01 17:04:42.950 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:04:42 np0005464891 nova_compute[259907]: 2025-10-01 17:04:42.950 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 13:04:42 np0005464891 nova_compute[259907]: 2025-10-01 17:04:42.950 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 13:04:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1966: 321 pgs: 321 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 302 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct  1 13:04:43 np0005464891 nova_compute[259907]: 2025-10-01 17:04:43.454 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "refresh_cache-28515950-4a2a-4cf3-a0d0-7e1a9ae85a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:04:43 np0005464891 nova_compute[259907]: 2025-10-01 17:04:43.454 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquired lock "refresh_cache-28515950-4a2a-4cf3-a0d0-7e1a9ae85a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:04:43 np0005464891 nova_compute[259907]: 2025-10-01 17:04:43.455 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  1 13:04:43 np0005464891 nova_compute[259907]: 2025-10-01 17:04:43.455 2 DEBUG nova.objects.instance [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 13:04:44 np0005464891 podman[305835]: 2025-10-01 17:04:44.121652707 +0000 UTC m=+0.230107735 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct  1 13:04:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e460 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:04:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1967: 321 pgs: 321 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 316 KiB/s rd, 2.0 MiB/s wr, 66 op/s
Oct  1 13:04:45 np0005464891 nova_compute[259907]: 2025-10-01 17:04:45.969 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Updating instance_info_cache with network_info: [{"id": "8f9e2444-b32a-47d1-aa86-3140a6eda6eb", "address": "fa:16:3e:51:7a:98", "network": {"id": "6ef0283d-7ab8-454f-a9d0-06f8650873a0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1206575136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2284b811c3654566ae3ff36625740c71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f9e2444-b3", "ovs_interfaceid": "8f9e2444-b32a-47d1-aa86-3140a6eda6eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:04:46 np0005464891 nova_compute[259907]: 2025-10-01 17:04:46.171 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Releasing lock "refresh_cache-28515950-4a2a-4cf3-a0d0-7e1a9ae85a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:04:46 np0005464891 nova_compute[259907]: 2025-10-01 17:04:46.171 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  1 13:04:46 np0005464891 nova_compute[259907]: 2025-10-01 17:04:46.172 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:04:46 np0005464891 nova_compute[259907]: 2025-10-01 17:04:46.172 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:04:46 np0005464891 nova_compute[259907]: 2025-10-01 17:04:46.172 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:04:46 np0005464891 nova_compute[259907]: 2025-10-01 17:04:46.173 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 13:04:46 np0005464891 nova_compute[259907]: 2025-10-01 17:04:46.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:04:46 np0005464891 nova_compute[259907]: 2025-10-01 17:04:46.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:04:47 np0005464891 nova_compute[259907]: 2025-10-01 17:04:47.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1968: 321 pgs: 321 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 316 KiB/s rd, 2.0 MiB/s wr, 66 op/s
Oct  1 13:04:47 np0005464891 nova_compute[259907]: 2025-10-01 17:04:47.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1969: 321 pgs: 321 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Oct  1 13:04:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e460 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:04:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1970: 321 pgs: 321 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct  1 13:04:52 np0005464891 nova_compute[259907]: 2025-10-01 17:04:52.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:52 np0005464891 nova_compute[259907]: 2025-10-01 17:04:52.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1971: 321 pgs: 321 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 376 KiB/s rd, 1.7 MiB/s wr, 70 op/s
Oct  1 13:04:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e460 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:04:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1972: 321 pgs: 321 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 116 KiB/s rd, 84 KiB/s wr, 21 op/s
Oct  1 13:04:55 np0005464891 podman[305854]: 2025-10-01 17:04:55.957990441 +0000 UTC m=+0.077956026 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  1 13:04:57 np0005464891 nova_compute[259907]: 2025-10-01 17:04:57.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1973: 321 pgs: 321 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 102 KiB/s rd, 84 KiB/s wr, 20 op/s
Oct  1 13:04:57 np0005464891 nova_compute[259907]: 2025-10-01 17:04:57.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:04:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1974: 321 pgs: 321 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 102 KiB/s rd, 95 KiB/s wr, 20 op/s
Oct  1 13:04:59 np0005464891 podman[305881]: 2025-10-01 17:04:59.959164011 +0000 UTC m=+0.072943610 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  1 13:04:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e460 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:05:00 np0005464891 ovn_controller[152409]: 2025-10-01T17:05:00Z|00248|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Oct  1 13:05:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1975: 321 pgs: 321 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 98 KiB/s rd, 20 KiB/s wr, 12 op/s
Oct  1 13:05:02 np0005464891 nova_compute[259907]: 2025-10-01 17:05:02.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:02 np0005464891 nova_compute[259907]: 2025-10-01 17:05:02.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:02 np0005464891 podman[305901]: 2025-10-01 17:05:02.966994924 +0000 UTC m=+0.072337291 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid)
Oct  1 13:05:03 np0005464891 nova_compute[259907]: 2025-10-01 17:05:03.099 2 DEBUG oslo_concurrency.lockutils [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Acquiring lock "28515950-4a2a-4cf3-a0d0-7e1a9ae85a19" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:05:03 np0005464891 nova_compute[259907]: 2025-10-01 17:05:03.100 2 DEBUG oslo_concurrency.lockutils [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Lock "28515950-4a2a-4cf3-a0d0-7e1a9ae85a19" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:05:03 np0005464891 nova_compute[259907]: 2025-10-01 17:05:03.100 2 DEBUG oslo_concurrency.lockutils [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Acquiring lock "28515950-4a2a-4cf3-a0d0-7e1a9ae85a19-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:05:03 np0005464891 nova_compute[259907]: 2025-10-01 17:05:03.100 2 DEBUG oslo_concurrency.lockutils [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Lock "28515950-4a2a-4cf3-a0d0-7e1a9ae85a19-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:05:03 np0005464891 nova_compute[259907]: 2025-10-01 17:05:03.100 2 DEBUG oslo_concurrency.lockutils [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Lock "28515950-4a2a-4cf3-a0d0-7e1a9ae85a19-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:05:03 np0005464891 nova_compute[259907]: 2025-10-01 17:05:03.102 2 INFO nova.compute.manager [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Terminating instance#033[00m
Oct  1 13:05:03 np0005464891 nova_compute[259907]: 2025-10-01 17:05:03.103 2 DEBUG nova.compute.manager [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 13:05:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1976: 321 pgs: 321 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 93 KiB/s rd, 17 KiB/s wr, 12 op/s
Oct  1 13:05:04 np0005464891 kernel: tap8f9e2444-b3 (unregistering): left promiscuous mode
Oct  1 13:05:04 np0005464891 NetworkManager[44940]: <info>  [1759338304.0867] device (tap8f9e2444-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 13:05:04 np0005464891 ovn_controller[152409]: 2025-10-01T17:05:04Z|00249|binding|INFO|Releasing lport 8f9e2444-b32a-47d1-aa86-3140a6eda6eb from this chassis (sb_readonly=0)
Oct  1 13:05:04 np0005464891 ovn_controller[152409]: 2025-10-01T17:05:04Z|00250|binding|INFO|Setting lport 8f9e2444-b32a-47d1-aa86-3140a6eda6eb down in Southbound
Oct  1 13:05:04 np0005464891 ovn_controller[152409]: 2025-10-01T17:05:04Z|00251|binding|INFO|Removing iface tap8f9e2444-b3 ovn-installed in OVS
Oct  1 13:05:04 np0005464891 nova_compute[259907]: 2025-10-01 17:05:04.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:04 np0005464891 nova_compute[259907]: 2025-10-01 17:05:04.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:04 np0005464891 nova_compute[259907]: 2025-10-01 17:05:04.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:04.165 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:7a:98 10.100.0.14'], port_security=['fa:16:3e:51:7a:98 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '28515950-4a2a-4cf3-a0d0-7e1a9ae85a19', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ef0283d-7ab8-454f-a9d0-06f8650873a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2284b811c3654566ae3ff36625740c71', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7851ac58-e590-45d8-94c0-c842686aaf5e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.227'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5175c4e1-c8e2-41c8-b21b-c935cbeb71cb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=8f9e2444-b32a-47d1-aa86-3140a6eda6eb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:05:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:04.166 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 8f9e2444-b32a-47d1-aa86-3140a6eda6eb in datapath 6ef0283d-7ab8-454f-a9d0-06f8650873a0 unbound from our chassis#033[00m
Oct  1 13:05:04 np0005464891 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Oct  1 13:05:04 np0005464891 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001a.scope: Consumed 14.981s CPU time.
Oct  1 13:05:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:04.168 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6ef0283d-7ab8-454f-a9d0-06f8650873a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 13:05:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:04.169 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[e39fc7a2-26df-48f9-aac4-c498e7e4c14c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:05:04 np0005464891 systemd-machined[214891]: Machine qemu-26-instance-0000001a terminated.
Oct  1 13:05:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:04.170 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6ef0283d-7ab8-454f-a9d0-06f8650873a0 namespace which is not needed anymore#033[00m
Oct  1 13:05:04 np0005464891 kernel: tap8f9e2444-b3: entered promiscuous mode
Oct  1 13:05:04 np0005464891 systemd-udevd[305926]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 13:05:04 np0005464891 kernel: tap8f9e2444-b3 (unregistering): left promiscuous mode
Oct  1 13:05:04 np0005464891 NetworkManager[44940]: <info>  [1759338304.3297] manager: (tap8f9e2444-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/135)
Oct  1 13:05:04 np0005464891 ovn_controller[152409]: 2025-10-01T17:05:04Z|00252|binding|INFO|Claiming lport 8f9e2444-b32a-47d1-aa86-3140a6eda6eb for this chassis.
Oct  1 13:05:04 np0005464891 ovn_controller[152409]: 2025-10-01T17:05:04Z|00253|binding|INFO|8f9e2444-b32a-47d1-aa86-3140a6eda6eb: Claiming fa:16:3e:51:7a:98 10.100.0.14
Oct  1 13:05:04 np0005464891 nova_compute[259907]: 2025-10-01 17:05:04.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:05:04 np0005464891 nova_compute[259907]: 2025-10-01 17:05:04.351 2 INFO nova.virt.libvirt.driver [-] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Instance destroyed successfully.
Oct  1 13:05:04 np0005464891 nova_compute[259907]: 2025-10-01 17:05:04.351 2 DEBUG nova.objects.instance [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Lazy-loading 'resources' on Instance uuid 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  1 13:05:04 np0005464891 ovn_controller[152409]: 2025-10-01T17:05:04Z|00254|binding|INFO|Setting lport 8f9e2444-b32a-47d1-aa86-3140a6eda6eb ovn-installed in OVS
Oct  1 13:05:04 np0005464891 ovn_controller[152409]: 2025-10-01T17:05:04Z|00255|if_status|INFO|Not setting lport 8f9e2444-b32a-47d1-aa86-3140a6eda6eb down as sb is readonly
Oct  1 13:05:04 np0005464891 nova_compute[259907]: 2025-10-01 17:05:04.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:05:04 np0005464891 ovn_controller[152409]: 2025-10-01T17:05:04Z|00256|binding|INFO|Releasing lport 8f9e2444-b32a-47d1-aa86-3140a6eda6eb from this chassis (sb_readonly=0)
Oct  1 13:05:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:04.394 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:7a:98 10.100.0.14'], port_security=['fa:16:3e:51:7a:98 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '28515950-4a2a-4cf3-a0d0-7e1a9ae85a19', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ef0283d-7ab8-454f-a9d0-06f8650873a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2284b811c3654566ae3ff36625740c71', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7851ac58-e590-45d8-94c0-c842686aaf5e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.227'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5175c4e1-c8e2-41c8-b21b-c935cbeb71cb, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=8f9e2444-b32a-47d1-aa86-3140a6eda6eb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  1 13:05:04 np0005464891 nova_compute[259907]: 2025-10-01 17:05:04.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:05:04 np0005464891 neutron-haproxy-ovnmeta-6ef0283d-7ab8-454f-a9d0-06f8650873a0[305733]: [NOTICE]   (305737) : haproxy version is 2.8.14-c23fe91
Oct  1 13:05:04 np0005464891 neutron-haproxy-ovnmeta-6ef0283d-7ab8-454f-a9d0-06f8650873a0[305733]: [NOTICE]   (305737) : path to executable is /usr/sbin/haproxy
Oct  1 13:05:04 np0005464891 neutron-haproxy-ovnmeta-6ef0283d-7ab8-454f-a9d0-06f8650873a0[305733]: [WARNING]  (305737) : Exiting Master process...
Oct  1 13:05:04 np0005464891 neutron-haproxy-ovnmeta-6ef0283d-7ab8-454f-a9d0-06f8650873a0[305733]: [ALERT]    (305737) : Current worker (305739) exited with code 143 (Terminated)
Oct  1 13:05:04 np0005464891 neutron-haproxy-ovnmeta-6ef0283d-7ab8-454f-a9d0-06f8650873a0[305733]: [WARNING]  (305737) : All workers exited. Exiting... (0)
Oct  1 13:05:04 np0005464891 systemd[1]: libpod-ac3f40946ae1a98ae81daec6ade43ed2496fc62398832178997088bbbc46bb5e.scope: Deactivated successfully.
Oct  1 13:05:04 np0005464891 podman[305947]: 2025-10-01 17:05:04.422767628 +0000 UTC m=+0.161832665 container died ac3f40946ae1a98ae81daec6ade43ed2496fc62398832178997088bbbc46bb5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ef0283d-7ab8-454f-a9d0-06f8650873a0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:05:04 np0005464891 nova_compute[259907]: 2025-10-01 17:05:04.500 2 DEBUG nova.virt.libvirt.vif [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T17:04:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-instance-1935298639',display_name='tempest-instance-1935298639',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-1935298639',id=26,image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHBDnI4DWuPr6YNdRMCa7jM1n8j3sQW4FlOn6l80s7J3IA5CfH3TlumjsYKyYsJ21Io+HjIR3XWT9TW8IabN4G08YyQTBo4qzR9D815pN4Lso+sKU7PWb9e+wC5NQeqdMA==',key_name='tempest-keypair-816340729',keypairs=<?>,launch_index=0,launched_at=2025-10-01T17:04:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2284b811c3654566ae3ff36625740c71',ramdisk_id='',reservation_id='r-oe75bchj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f01c1e7c-fea3-4433-a44a-d71153552c78',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='
usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-2085916881',owner_user_name='tempest-VolumesBackupsTest-2085916881-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T17:04:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1ccfcc45229e4430886117b04439c667',uuid=28515950-4a2a-4cf3-a0d0-7e1a9ae85a19,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8f9e2444-b32a-47d1-aa86-3140a6eda6eb", "address": "fa:16:3e:51:7a:98", "network": {"id": "6ef0283d-7ab8-454f-a9d0-06f8650873a0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1206575136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2284b811c3654566ae3ff36625740c71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f9e2444-b3", "ovs_interfaceid": "8f9e2444-b32a-47d1-aa86-3140a6eda6eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct  1 13:05:04 np0005464891 nova_compute[259907]: 2025-10-01 17:05:04.501 2 DEBUG nova.network.os_vif_util [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Converting VIF {"id": "8f9e2444-b32a-47d1-aa86-3140a6eda6eb", "address": "fa:16:3e:51:7a:98", "network": {"id": "6ef0283d-7ab8-454f-a9d0-06f8650873a0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1206575136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2284b811c3654566ae3ff36625740c71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f9e2444-b3", "ovs_interfaceid": "8f9e2444-b32a-47d1-aa86-3140a6eda6eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  1 13:05:04 np0005464891 nova_compute[259907]: 2025-10-01 17:05:04.501 2 DEBUG nova.network.os_vif_util [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:51:7a:98,bridge_name='br-int',has_traffic_filtering=True,id=8f9e2444-b32a-47d1-aa86-3140a6eda6eb,network=Network(6ef0283d-7ab8-454f-a9d0-06f8650873a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f9e2444-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  1 13:05:04 np0005464891 nova_compute[259907]: 2025-10-01 17:05:04.502 2 DEBUG os_vif [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:51:7a:98,bridge_name='br-int',has_traffic_filtering=True,id=8f9e2444-b32a-47d1-aa86-3140a6eda6eb,network=Network(6ef0283d-7ab8-454f-a9d0-06f8650873a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f9e2444-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct  1 13:05:04 np0005464891 nova_compute[259907]: 2025-10-01 17:05:04.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:05:04 np0005464891 nova_compute[259907]: 2025-10-01 17:05:04.504 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f9e2444-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  1 13:05:04 np0005464891 nova_compute[259907]: 2025-10-01 17:05:04.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:05:04 np0005464891 nova_compute[259907]: 2025-10-01 17:05:04.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:05:04 np0005464891 nova_compute[259907]: 2025-10-01 17:05:04.510 2 INFO os_vif [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:51:7a:98,bridge_name='br-int',has_traffic_filtering=True,id=8f9e2444-b32a-47d1-aa86-3140a6eda6eb,network=Network(6ef0283d-7ab8-454f-a9d0-06f8650873a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f9e2444-b3')
Oct  1 13:05:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:04.549 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:7a:98 10.100.0.14'], port_security=['fa:16:3e:51:7a:98 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '28515950-4a2a-4cf3-a0d0-7e1a9ae85a19', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ef0283d-7ab8-454f-a9d0-06f8650873a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2284b811c3654566ae3ff36625740c71', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7851ac58-e590-45d8-94c0-c842686aaf5e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.227'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5175c4e1-c8e2-41c8-b21b-c935cbeb71cb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=8f9e2444-b32a-47d1-aa86-3140a6eda6eb) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  1 13:05:04 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ac3f40946ae1a98ae81daec6ade43ed2496fc62398832178997088bbbc46bb5e-userdata-shm.mount: Deactivated successfully.
Oct  1 13:05:04 np0005464891 systemd[1]: var-lib-containers-storage-overlay-8086f095ae89bfc574df7c2b250f93be8305fb442228ffd47ee8bf5d2f886919-merged.mount: Deactivated successfully.
Oct  1 13:05:04 np0005464891 nova_compute[259907]: 2025-10-01 17:05:04.896 2 DEBUG nova.compute.manager [req-2d939b08-152f-4bbb-98f0-ffde32c9a7cf req-d7eb1bf4-f7eb-4e82-9a16-d6576bddc396 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Received event network-vif-unplugged-8f9e2444-b32a-47d1-aa86-3140a6eda6eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  1 13:05:04 np0005464891 nova_compute[259907]: 2025-10-01 17:05:04.897 2 DEBUG oslo_concurrency.lockutils [req-2d939b08-152f-4bbb-98f0-ffde32c9a7cf req-d7eb1bf4-f7eb-4e82-9a16-d6576bddc396 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "28515950-4a2a-4cf3-a0d0-7e1a9ae85a19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 13:05:04 np0005464891 nova_compute[259907]: 2025-10-01 17:05:04.897 2 DEBUG oslo_concurrency.lockutils [req-2d939b08-152f-4bbb-98f0-ffde32c9a7cf req-d7eb1bf4-f7eb-4e82-9a16-d6576bddc396 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "28515950-4a2a-4cf3-a0d0-7e1a9ae85a19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 13:05:04 np0005464891 nova_compute[259907]: 2025-10-01 17:05:04.897 2 DEBUG oslo_concurrency.lockutils [req-2d939b08-152f-4bbb-98f0-ffde32c9a7cf req-d7eb1bf4-f7eb-4e82-9a16-d6576bddc396 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "28515950-4a2a-4cf3-a0d0-7e1a9ae85a19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 13:05:04 np0005464891 nova_compute[259907]: 2025-10-01 17:05:04.897 2 DEBUG nova.compute.manager [req-2d939b08-152f-4bbb-98f0-ffde32c9a7cf req-d7eb1bf4-f7eb-4e82-9a16-d6576bddc396 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] No waiting events found dispatching network-vif-unplugged-8f9e2444-b32a-47d1-aa86-3140a6eda6eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  1 13:05:04 np0005464891 nova_compute[259907]: 2025-10-01 17:05:04.898 2 DEBUG nova.compute.manager [req-2d939b08-152f-4bbb-98f0-ffde32c9a7cf req-d7eb1bf4-f7eb-4e82-9a16-d6576bddc396 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Received event network-vif-unplugged-8f9e2444-b32a-47d1-aa86-3140a6eda6eb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  1 13:05:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1977: 321 pgs: 321 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 341 B/s rd, 11 KiB/s wr, 0 op/s
Oct  1 13:05:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e460 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:05:06 np0005464891 podman[305947]: 2025-10-01 17:05:06.456824204 +0000 UTC m=+2.195889241 container cleanup ac3f40946ae1a98ae81daec6ade43ed2496fc62398832178997088bbbc46bb5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ef0283d-7ab8-454f-a9d0-06f8650873a0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 13:05:06 np0005464891 systemd[1]: libpod-conmon-ac3f40946ae1a98ae81daec6ade43ed2496fc62398832178997088bbbc46bb5e.scope: Deactivated successfully.
Oct  1 13:05:07 np0005464891 nova_compute[259907]: 2025-10-01 17:05:07.140 2 DEBUG nova.compute.manager [req-129b019c-afc6-4be7-8355-ffb19824352b req-6e580127-981c-4379-bb4f-1e3547bb754a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Received event network-vif-plugged-8f9e2444-b32a-47d1-aa86-3140a6eda6eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  1 13:05:07 np0005464891 nova_compute[259907]: 2025-10-01 17:05:07.141 2 DEBUG oslo_concurrency.lockutils [req-129b019c-afc6-4be7-8355-ffb19824352b req-6e580127-981c-4379-bb4f-1e3547bb754a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "28515950-4a2a-4cf3-a0d0-7e1a9ae85a19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 13:05:07 np0005464891 nova_compute[259907]: 2025-10-01 17:05:07.142 2 DEBUG oslo_concurrency.lockutils [req-129b019c-afc6-4be7-8355-ffb19824352b req-6e580127-981c-4379-bb4f-1e3547bb754a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "28515950-4a2a-4cf3-a0d0-7e1a9ae85a19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 13:05:07 np0005464891 nova_compute[259907]: 2025-10-01 17:05:07.142 2 DEBUG oslo_concurrency.lockutils [req-129b019c-afc6-4be7-8355-ffb19824352b req-6e580127-981c-4379-bb4f-1e3547bb754a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "28515950-4a2a-4cf3-a0d0-7e1a9ae85a19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 13:05:07 np0005464891 nova_compute[259907]: 2025-10-01 17:05:07.142 2 DEBUG nova.compute.manager [req-129b019c-afc6-4be7-8355-ffb19824352b req-6e580127-981c-4379-bb4f-1e3547bb754a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] No waiting events found dispatching network-vif-plugged-8f9e2444-b32a-47d1-aa86-3140a6eda6eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  1 13:05:07 np0005464891 nova_compute[259907]: 2025-10-01 17:05:07.142 2 WARNING nova.compute.manager [req-129b019c-afc6-4be7-8355-ffb19824352b req-6e580127-981c-4379-bb4f-1e3547bb754a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Received unexpected event network-vif-plugged-8f9e2444-b32a-47d1-aa86-3140a6eda6eb for instance with vm_state active and task_state deleting.
Oct  1 13:05:07 np0005464891 podman[306000]: 2025-10-01 17:05:07.263588967 +0000 UTC m=+0.778592432 container remove ac3f40946ae1a98ae81daec6ade43ed2496fc62398832178997088bbbc46bb5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ef0283d-7ab8-454f-a9d0-06f8650873a0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:05:07 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:07.272 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[9073cd18-5a1e-4718-8270-e191ec2dd3eb]: (4, ('Wed Oct  1 05:05:04 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6ef0283d-7ab8-454f-a9d0-06f8650873a0 (ac3f40946ae1a98ae81daec6ade43ed2496fc62398832178997088bbbc46bb5e)\nac3f40946ae1a98ae81daec6ade43ed2496fc62398832178997088bbbc46bb5e\nWed Oct  1 05:05:06 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6ef0283d-7ab8-454f-a9d0-06f8650873a0 (ac3f40946ae1a98ae81daec6ade43ed2496fc62398832178997088bbbc46bb5e)\nac3f40946ae1a98ae81daec6ade43ed2496fc62398832178997088bbbc46bb5e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  1 13:05:07 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:07.274 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[bb27a2e5-c1d0-40a5-9c31-5753e12edd2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  1 13:05:07 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:07.277 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ef0283d-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  1 13:05:07 np0005464891 nova_compute[259907]: 2025-10-01 17:05:07.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:05:07 np0005464891 kernel: tap6ef0283d-70: left promiscuous mode
Oct  1 13:05:07 np0005464891 nova_compute[259907]: 2025-10-01 17:05:07.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:05:07 np0005464891 nova_compute[259907]: 2025-10-01 17:05:07.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:05:07 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:07.320 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[206311dc-2d03-4415-8c1a-14e0d01bda9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  1 13:05:07 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:07.350 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[6e08e096-00de-4cb9-999c-f2eeb4104e69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  1 13:05:07 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:07.351 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[86110c07-81be-4878-ad15-002c2a71eff9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  1 13:05:07 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:07.377 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[69d9bd92-ab5e-492e-a52f-1f7a46c39193]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521347, 'reachable_time': 25580, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306142, 'error': None, 'target': 'ovnmeta-6ef0283d-7ab8-454f-a9d0-06f8650873a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  1 13:05:07 np0005464891 systemd[1]: run-netns-ovnmeta\x2d6ef0283d\x2d7ab8\x2d454f\x2da9d0\x2d06f8650873a0.mount: Deactivated successfully.
Oct  1 13:05:07 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:07.380 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6ef0283d-7ab8-454f-a9d0-06f8650873a0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct  1 13:05:07 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:07.381 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[e1f8d9f8-a04f-4e9c-9d85-a6a217b31acc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  1 13:05:07 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:07.384 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 8f9e2444-b32a-47d1-aa86-3140a6eda6eb in datapath 6ef0283d-7ab8-454f-a9d0-06f8650873a0 unbound from our chassis
Oct  1 13:05:07 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:07.385 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6ef0283d-7ab8-454f-a9d0-06f8650873a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  1 13:05:07 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:07.386 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[724b3a5f-15dc-4849-85ab-ab8e94d00731]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  1 13:05:07 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:07.387 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 8f9e2444-b32a-47d1-aa86-3140a6eda6eb in datapath 6ef0283d-7ab8-454f-a9d0-06f8650873a0 unbound from our chassis
Oct  1 13:05:07 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:07.388 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6ef0283d-7ab8-454f-a9d0-06f8650873a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  1 13:05:07 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:07.389 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[4e3a1f6d-e630-46a9-99f0-a59f715f6c08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  1 13:05:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1978: 321 pgs: 321 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 341 B/s rd, 11 KiB/s wr, 0 op/s
Oct  1 13:05:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:05:07 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:05:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 13:05:07 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:05:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 13:05:07 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:05:07 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 7004cabd-acd1-4f88-8a94-b955ede33243 does not exist
Oct  1 13:05:07 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 46d30fab-92d4-4634-b427-cdefbac3c2dc does not exist
Oct  1 13:05:07 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 43c06905-f550-4c48-bfcf-ea05f102268d does not exist
Oct  1 13:05:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 13:05:07 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 13:05:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 13:05:07 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:05:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:05:07 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:05:07 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:05:08 np0005464891 podman[306291]: 2025-10-01 17:05:08.460427776 +0000 UTC m=+0.024405719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:05:08 np0005464891 podman[306291]: 2025-10-01 17:05:08.70548902 +0000 UTC m=+0.269466924 container create f03367a36ebd0e1c02e22cdca65b6fee15b2539a65743d2a1172bde6381b2f1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mestorf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 13:05:08 np0005464891 systemd[1]: Started libpod-conmon-f03367a36ebd0e1c02e22cdca65b6fee15b2539a65743d2a1172bde6381b2f1c.scope.
Oct  1 13:05:08 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:05:09 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:05:09 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:05:09 np0005464891 podman[306291]: 2025-10-01 17:05:09.168106464 +0000 UTC m=+0.732084447 container init f03367a36ebd0e1c02e22cdca65b6fee15b2539a65743d2a1172bde6381b2f1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mestorf, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 13:05:09 np0005464891 podman[306291]: 2025-10-01 17:05:09.177983754 +0000 UTC m=+0.741961687 container start f03367a36ebd0e1c02e22cdca65b6fee15b2539a65743d2a1172bde6381b2f1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mestorf, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:05:09 np0005464891 agitated_mestorf[306307]: 167 167
Oct  1 13:05:09 np0005464891 systemd[1]: libpod-f03367a36ebd0e1c02e22cdca65b6fee15b2539a65743d2a1172bde6381b2f1c.scope: Deactivated successfully.
Oct  1 13:05:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1979: 321 pgs: 321 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 5.1 KiB/s rd, 11 KiB/s wr, 6 op/s
Oct  1 13:05:09 np0005464891 podman[306291]: 2025-10-01 17:05:09.443587871 +0000 UTC m=+1.007565764 container attach f03367a36ebd0e1c02e22cdca65b6fee15b2539a65743d2a1172bde6381b2f1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mestorf, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 13:05:09 np0005464891 podman[306291]: 2025-10-01 17:05:09.444386744 +0000 UTC m=+1.008364667 container died f03367a36ebd0e1c02e22cdca65b6fee15b2539a65743d2a1172bde6381b2f1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mestorf, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 13:05:09 np0005464891 nova_compute[259907]: 2025-10-01 17:05:09.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:10 np0005464891 systemd[1]: var-lib-containers-storage-overlay-d4439aaae2ce95f9658fa1e82ae6a5f6aa82c382c6fd09d5a208d5989249b9e3-merged.mount: Deactivated successfully.
Oct  1 13:05:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e460 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:05:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1980: 321 pgs: 321 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 6.0 KiB/s rd, 85 B/s wr, 7 op/s
Oct  1 13:05:11 np0005464891 podman[306291]: 2025-10-01 17:05:11.867098187 +0000 UTC m=+3.431076090 container remove f03367a36ebd0e1c02e22cdca65b6fee15b2539a65743d2a1172bde6381b2f1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 13:05:11 np0005464891 systemd[1]: libpod-conmon-f03367a36ebd0e1c02e22cdca65b6fee15b2539a65743d2a1172bde6381b2f1c.scope: Deactivated successfully.
Oct  1 13:05:12 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Oct  1 13:05:12 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:12.009751) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 13:05:12 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Oct  1 13:05:12 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338312009790, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 921, "num_deletes": 253, "total_data_size": 1225906, "memory_usage": 1243792, "flush_reason": "Manual Compaction"}
Oct  1 13:05:12 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Oct  1 13:05:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_17:05:12
Oct  1 13:05:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 13:05:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 13:05:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'default.rgw.control', 'backups', 'images', '.rgw.root', 'vms', 'cephfs.cephfs.meta']
Oct  1 13:05:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:05:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:05:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:05:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:05:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:05:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:05:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 13:05:12 np0005464891 podman[306331]: 2025-10-01 17:05:12.084303418 +0000 UTC m=+0.041780155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:05:12 np0005464891 nova_compute[259907]: 2025-10-01 17:05:12.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:12.467 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:05:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:12.468 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:05:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:12.468 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:05:12 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338312494063, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 779025, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38919, "largest_seqno": 39839, "table_properties": {"data_size": 775217, "index_size": 1460, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10247, "raw_average_key_size": 20, "raw_value_size": 767020, "raw_average_value_size": 1568, "num_data_blocks": 66, "num_entries": 489, "num_filter_entries": 489, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759338232, "oldest_key_time": 1759338232, "file_creation_time": 1759338312, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Oct  1 13:05:12 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 484388 microseconds, and 3070 cpu microseconds.
Oct  1 13:05:12 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 13:05:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 13:05:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:05:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 13:05:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:05:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:05:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:05:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:05:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:05:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:05:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:05:12 np0005464891 podman[306331]: 2025-10-01 17:05:12.69357038 +0000 UTC m=+0.651047067 container create 8fe23198dcf0fdcfc59a3f17169060c463eb81806f957dbd08a70b59b1f79ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_northcutt, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:05:12 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:12.494131) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 779025 bytes OK
Oct  1 13:05:12 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:12.494161) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Oct  1 13:05:12 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:12.887408) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Oct  1 13:05:12 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:12.887504) EVENT_LOG_v1 {"time_micros": 1759338312887442, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 13:05:12 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:12.887531) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 13:05:12 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 1221402, prev total WAL file size 1222695, number of live WAL files 2.
Oct  1 13:05:12 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:05:12 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:12.888516) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323533' seq:72057594037927935, type:22 .. '6D6772737461740031353036' seq:0, type:0; will stop at (end)
Oct  1 13:05:12 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 13:05:12 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(760KB)], [80(11MB)]
Oct  1 13:05:12 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338312888571, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 12865082, "oldest_snapshot_seqno": -1}
Oct  1 13:05:12 np0005464891 systemd[1]: Started libpod-conmon-8fe23198dcf0fdcfc59a3f17169060c463eb81806f957dbd08a70b59b1f79ade.scope.
Oct  1 13:05:13 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:05:13 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2326c81a5f37b3f9dc3e10d6d1282fdb1426ec4d45756293d4049f641e1c117a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:05:13 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2326c81a5f37b3f9dc3e10d6d1282fdb1426ec4d45756293d4049f641e1c117a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:05:13 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2326c81a5f37b3f9dc3e10d6d1282fdb1426ec4d45756293d4049f641e1c117a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:05:13 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2326c81a5f37b3f9dc3e10d6d1282fdb1426ec4d45756293d4049f641e1c117a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:05:13 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2326c81a5f37b3f9dc3e10d6d1282fdb1426ec4d45756293d4049f641e1c117a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 13:05:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1981: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 8.5 KiB/s rd, 255 B/s wr, 11 op/s
Oct  1 13:05:14 np0005464891 podman[306331]: 2025-10-01 17:05:14.087610982 +0000 UTC m=+2.045087679 container init 8fe23198dcf0fdcfc59a3f17169060c463eb81806f957dbd08a70b59b1f79ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_northcutt, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:05:14 np0005464891 podman[306331]: 2025-10-01 17:05:14.099550779 +0000 UTC m=+2.057027486 container start 8fe23198dcf0fdcfc59a3f17169060c463eb81806f957dbd08a70b59b1f79ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_northcutt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 13:05:14 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 7046 keys, 9967347 bytes, temperature: kUnknown
Oct  1 13:05:14 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338314198716, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 9967347, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9917203, "index_size": 31408, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17669, "raw_key_size": 177821, "raw_average_key_size": 25, "raw_value_size": 9787822, "raw_average_value_size": 1389, "num_data_blocks": 1250, "num_entries": 7046, "num_filter_entries": 7046, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759338312, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct  1 13:05:14 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 13:05:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:14.199789) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 9967347 bytes
Oct  1 13:05:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:14.481052) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 9.8 rd, 7.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 11.5 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(29.3) write-amplify(12.8) OK, records in: 7536, records dropped: 490 output_compression: NoCompression
Oct  1 13:05:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:14.481093) EVENT_LOG_v1 {"time_micros": 1759338314481079, "job": 46, "event": "compaction_finished", "compaction_time_micros": 1310262, "compaction_time_cpu_micros": 50805, "output_level": 6, "num_output_files": 1, "total_output_size": 9967347, "num_input_records": 7536, "num_output_records": 7046, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 13:05:14 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:05:14 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338314481700, "job": 46, "event": "table_file_deletion", "file_number": 82}
Oct  1 13:05:14 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:05:14 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338314483992, "job": 46, "event": "table_file_deletion", "file_number": 80}
Oct  1 13:05:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:12.888388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:05:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:14.484095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:05:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:14.484100) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:05:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:14.484102) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:05:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:14.484104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:05:14 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:14.484106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:05:14 np0005464891 nova_compute[259907]: 2025-10-01 17:05:14.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:14 np0005464891 podman[306331]: 2025-10-01 17:05:14.520179843 +0000 UTC m=+2.477656530 container attach 8fe23198dcf0fdcfc59a3f17169060c463eb81806f957dbd08a70b59b1f79ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 13:05:14 np0005464891 podman[306353]: 2025-10-01 17:05:14.986617902 +0000 UTC m=+0.084173627 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  1 13:05:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1982: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 9.4 KiB/s rd, 597 B/s wr, 13 op/s
Oct  1 13:05:15 np0005464891 hopeful_northcutt[306347]: --> passed data devices: 0 physical, 3 LVM
Oct  1 13:05:15 np0005464891 hopeful_northcutt[306347]: --> relative data size: 1.0
Oct  1 13:05:15 np0005464891 hopeful_northcutt[306347]: --> All data devices are unavailable
Oct  1 13:05:15 np0005464891 systemd[1]: libpod-8fe23198dcf0fdcfc59a3f17169060c463eb81806f957dbd08a70b59b1f79ade.scope: Deactivated successfully.
Oct  1 13:05:15 np0005464891 systemd[1]: libpod-8fe23198dcf0fdcfc59a3f17169060c463eb81806f957dbd08a70b59b1f79ade.scope: Consumed 1.152s CPU time.
Oct  1 13:05:15 np0005464891 podman[306331]: 2025-10-01 17:05:15.661349438 +0000 UTC m=+3.618826165 container died 8fe23198dcf0fdcfc59a3f17169060c463eb81806f957dbd08a70b59b1f79ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_northcutt, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 13:05:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e460 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:05:16 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 13:05:16 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 28K writes, 99K keys, 28K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s#012Cumulative WAL: 28K writes, 10K syncs, 2.78 writes per sync, written: 0.07 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5962 writes, 23K keys, 5962 commit groups, 1.0 writes per commit group, ingest: 24.03 MB, 0.04 MB/s#012Interval WAL: 5962 writes, 2275 syncs, 2.62 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 13:05:16 np0005464891 systemd[1]: var-lib-containers-storage-overlay-2326c81a5f37b3f9dc3e10d6d1282fdb1426ec4d45756293d4049f641e1c117a-merged.mount: Deactivated successfully.
Oct  1 13:05:17 np0005464891 nova_compute[259907]: 2025-10-01 17:05:17.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1983: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 9.1 KiB/s rd, 597 B/s wr, 13 op/s
Oct  1 13:05:17 np0005464891 podman[306331]: 2025-10-01 17:05:17.851797789 +0000 UTC m=+5.809274496 container remove 8fe23198dcf0fdcfc59a3f17169060c463eb81806f957dbd08a70b59b1f79ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_northcutt, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:05:17 np0005464891 systemd[1]: libpod-conmon-8fe23198dcf0fdcfc59a3f17169060c463eb81806f957dbd08a70b59b1f79ade.scope: Deactivated successfully.
Oct  1 13:05:18 np0005464891 podman[306557]: 2025-10-01 17:05:18.645159514 +0000 UTC m=+0.039522064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:05:18 np0005464891 podman[306557]: 2025-10-01 17:05:18.833600956 +0000 UTC m=+0.227963526 container create 1b1de57a19fb9f616fdd4904fcff09fcd48efdd1ea5811b454290b43ca7416b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gates, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 13:05:19 np0005464891 systemd[1]: Started libpod-conmon-1b1de57a19fb9f616fdd4904fcff09fcd48efdd1ea5811b454290b43ca7416b5.scope.
Oct  1 13:05:19 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:05:19 np0005464891 podman[306557]: 2025-10-01 17:05:19.311433297 +0000 UTC m=+0.705795907 container init 1b1de57a19fb9f616fdd4904fcff09fcd48efdd1ea5811b454290b43ca7416b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 13:05:19 np0005464891 podman[306557]: 2025-10-01 17:05:19.31957036 +0000 UTC m=+0.713932890 container start 1b1de57a19fb9f616fdd4904fcff09fcd48efdd1ea5811b454290b43ca7416b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:05:19 np0005464891 adoring_gates[306574]: 167 167
Oct  1 13:05:19 np0005464891 systemd[1]: libpod-1b1de57a19fb9f616fdd4904fcff09fcd48efdd1ea5811b454290b43ca7416b5.scope: Deactivated successfully.
Oct  1 13:05:19 np0005464891 nova_compute[259907]: 2025-10-01 17:05:19.350 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759338304.349227, 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 13:05:19 np0005464891 nova_compute[259907]: 2025-10-01 17:05:19.351 2 INFO nova.compute.manager [-] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] VM Stopped (Lifecycle Event)#033[00m
Oct  1 13:05:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1984: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 14 KiB/s rd, 597 B/s wr, 18 op/s
Oct  1 13:05:19 np0005464891 podman[306557]: 2025-10-01 17:05:19.543146925 +0000 UTC m=+0.937509535 container attach 1b1de57a19fb9f616fdd4904fcff09fcd48efdd1ea5811b454290b43ca7416b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 13:05:19 np0005464891 podman[306557]: 2025-10-01 17:05:19.545074178 +0000 UTC m=+0.939436748 container died 1b1de57a19fb9f616fdd4904fcff09fcd48efdd1ea5811b454290b43ca7416b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:05:19 np0005464891 nova_compute[259907]: 2025-10-01 17:05:19.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:19 np0005464891 nova_compute[259907]: 2025-10-01 17:05:19.578 2 DEBUG nova.compute.manager [None req-167b174c-9f1b-40e8-a67b-577f869f44ed - - - - - -] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 13:05:19 np0005464891 nova_compute[259907]: 2025-10-01 17:05:19.583 2 DEBUG nova.compute.manager [None req-167b174c-9f1b-40e8-a67b-577f869f44ed - - - - - -] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 13:05:19 np0005464891 nova_compute[259907]: 2025-10-01 17:05:19.778 2 INFO nova.compute.manager [None req-167b174c-9f1b-40e8-a67b-577f869f44ed - - - - - -] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Oct  1 13:05:20 np0005464891 systemd[1]: var-lib-containers-storage-overlay-b60a552e44f7991f80d4f7a5d07219d600a9ce38f0aeac15cae06e1534a64600-merged.mount: Deactivated successfully.
Oct  1 13:05:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e460 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:05:21 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 13:05:21 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 25K writes, 94K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 25K writes, 9178 syncs, 2.82 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4490 writes, 17K keys, 4490 commit groups, 1.0 writes per commit group, ingest: 13.91 MB, 0.02 MB/s#012Interval WAL: 4491 writes, 1813 syncs, 2.48 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 13:05:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1985: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 9.2 KiB/s rd, 511 B/s wr, 13 op/s
Oct  1 13:05:22 np0005464891 podman[306557]: 2025-10-01 17:05:22.151631799 +0000 UTC m=+3.545994369 container remove 1b1de57a19fb9f616fdd4904fcff09fcd48efdd1ea5811b454290b43ca7416b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  1 13:05:22 np0005464891 systemd[1]: libpod-conmon-1b1de57a19fb9f616fdd4904fcff09fcd48efdd1ea5811b454290b43ca7416b5.scope: Deactivated successfully.
Oct  1 13:05:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 13:05:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:05:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 13:05:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:05:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.797991144103666e-06 of space, bias 1.0, pg target 0.0008393973432310998 quantized to 32 (current 32)
Oct  1 13:05:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:05:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.03699351273035098 of space, bias 1.0, pg target 11.098053819105294 quantized to 32 (current 32)
Oct  1 13:05:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:05:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0003461242226671876 of space, bias 1.0, pg target 0.10002990035081721 quantized to 32 (current 32)
Oct  1 13:05:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:05:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1925249377319789 quantized to 32 (current 32)
Oct  1 13:05:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:05:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0005880868659243341 quantized to 16 (current 32)
Oct  1 13:05:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:05:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:05:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:05:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.351085824054176e-05 quantized to 32 (current 32)
Oct  1 13:05:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:05:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006248422950446051 quantized to 32 (current 32)
Oct  1 13:05:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:05:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:05:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:05:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00014702171648108353 quantized to 32 (current 32)
Oct  1 13:05:22 np0005464891 nova_compute[259907]: 2025-10-01 17:05:22.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:22 np0005464891 podman[306597]: 2025-10-01 17:05:22.367206355 +0000 UTC m=+0.025315694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:05:22 np0005464891 podman[306597]: 2025-10-01 17:05:22.738177758 +0000 UTC m=+0.396287057 container create 9a99e24c1cbab84f56c5c7c43e78c39affafaeb42be3717a0cc439dacdce2344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_northcutt, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:05:22 np0005464891 systemd[1]: Started libpod-conmon-9a99e24c1cbab84f56c5c7c43e78c39affafaeb42be3717a0cc439dacdce2344.scope.
Oct  1 13:05:23 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:05:23 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf90730057a75ad7932949d7d399ed7de74cfa6a4bb6e1780c7b33740383935/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:05:23 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf90730057a75ad7932949d7d399ed7de74cfa6a4bb6e1780c7b33740383935/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:05:23 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf90730057a75ad7932949d7d399ed7de74cfa6a4bb6e1780c7b33740383935/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:05:23 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf90730057a75ad7932949d7d399ed7de74cfa6a4bb6e1780c7b33740383935/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:05:23 np0005464891 podman[306597]: 2025-10-01 17:05:23.408561394 +0000 UTC m=+1.066670753 container init 9a99e24c1cbab84f56c5c7c43e78c39affafaeb42be3717a0cc439dacdce2344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct  1 13:05:23 np0005464891 podman[306597]: 2025-10-01 17:05:23.416586374 +0000 UTC m=+1.074695713 container start 9a99e24c1cbab84f56c5c7c43e78c39affafaeb42be3717a0cc439dacdce2344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  1 13:05:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1986: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 12 KiB/s rd, 1.1 KiB/s wr, 17 op/s
Oct  1 13:05:23 np0005464891 podman[306597]: 2025-10-01 17:05:23.715762121 +0000 UTC m=+1.373871450 container attach 9a99e24c1cbab84f56c5c7c43e78c39affafaeb42be3717a0cc439dacdce2344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]: {
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:    "0": [
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:        {
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "devices": [
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "/dev/loop3"
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            ],
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "lv_name": "ceph_lv0",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "lv_size": "21470642176",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "name": "ceph_lv0",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "tags": {
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.cluster_name": "ceph",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.crush_device_class": "",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.encrypted": "0",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.osd_id": "0",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.type": "block",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.vdo": "0"
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            },
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "type": "block",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "vg_name": "ceph_vg0"
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:        }
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:    ],
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:    "1": [
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:        {
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "devices": [
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "/dev/loop4"
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            ],
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "lv_name": "ceph_lv1",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "lv_size": "21470642176",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "name": "ceph_lv1",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "tags": {
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.cluster_name": "ceph",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.crush_device_class": "",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.encrypted": "0",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.osd_id": "1",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.type": "block",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.vdo": "0"
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            },
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "type": "block",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "vg_name": "ceph_vg1"
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:        }
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:    ],
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:    "2": [
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:        {
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "devices": [
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "/dev/loop5"
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            ],
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "lv_name": "ceph_lv2",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "lv_size": "21470642176",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "name": "ceph_lv2",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "tags": {
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.cluster_name": "ceph",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.crush_device_class": "",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.encrypted": "0",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.osd_id": "2",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.type": "block",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:                "ceph.vdo": "0"
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            },
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "type": "block",
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:            "vg_name": "ceph_vg2"
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:        }
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]:    ]
Oct  1 13:05:24 np0005464891 competent_northcutt[306614]: }
Oct  1 13:05:24 np0005464891 systemd[1]: libpod-9a99e24c1cbab84f56c5c7c43e78c39affafaeb42be3717a0cc439dacdce2344.scope: Deactivated successfully.
Oct  1 13:05:24 np0005464891 podman[306597]: 2025-10-01 17:05:24.416350414 +0000 UTC m=+2.074459723 container died 9a99e24c1cbab84f56c5c7c43e78c39affafaeb42be3717a0cc439dacdce2344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_northcutt, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 13:05:24 np0005464891 nova_compute[259907]: 2025-10-01 17:05:24.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:24 np0005464891 nova_compute[259907]: 2025-10-01 17:05:24.761 2 INFO nova.virt.libvirt.driver [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Deleting instance files /var/lib/nova/instances/28515950-4a2a-4cf3-a0d0-7e1a9ae85a19_del#033[00m
Oct  1 13:05:24 np0005464891 nova_compute[259907]: 2025-10-01 17:05:24.763 2 INFO nova.virt.libvirt.driver [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Deletion of /var/lib/nova/instances/28515950-4a2a-4cf3-a0d0-7e1a9ae85a19_del complete#033[00m
Oct  1 13:05:24 np0005464891 systemd[1]: var-lib-containers-storage-overlay-baf90730057a75ad7932949d7d399ed7de74cfa6a4bb6e1780c7b33740383935-merged.mount: Deactivated successfully.
Oct  1 13:05:24 np0005464891 nova_compute[259907]: 2025-10-01 17:05:24.867 2 INFO nova.compute.manager [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Took 21.76 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 13:05:24 np0005464891 nova_compute[259907]: 2025-10-01 17:05:24.868 2 DEBUG oslo.service.loopingcall [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 13:05:24 np0005464891 nova_compute[259907]: 2025-10-01 17:05:24.869 2 DEBUG nova.compute.manager [-] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 13:05:24 np0005464891 nova_compute[259907]: 2025-10-01 17:05:24.869 2 DEBUG nova.network.neutron [-] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 13:05:25 np0005464891 podman[306597]: 2025-10-01 17:05:25.360087909 +0000 UTC m=+3.018197248 container remove 9a99e24c1cbab84f56c5c7c43e78c39affafaeb42be3717a0cc439dacdce2344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_northcutt, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 13:05:25 np0005464891 systemd[1]: libpod-conmon-9a99e24c1cbab84f56c5c7c43e78c39affafaeb42be3717a0cc439dacdce2344.scope: Deactivated successfully.
Oct  1 13:05:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1987: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 10 KiB/s rd, 938 B/s wr, 14 op/s
Oct  1 13:05:26 np0005464891 podman[306775]: 2025-10-01 17:05:26.137099666 +0000 UTC m=+0.039695928 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:05:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e460 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:05:26 np0005464891 podman[306775]: 2025-10-01 17:05:26.358310346 +0000 UTC m=+0.260906568 container create a5cc1b3b7fccf465b07944875384e3ae6a7dccc8e98f6a37ee87984cf3592b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keldysh, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Oct  1 13:05:26 np0005464891 systemd[1]: Started libpod-conmon-a5cc1b3b7fccf465b07944875384e3ae6a7dccc8e98f6a37ee87984cf3592b1c.scope.
Oct  1 13:05:26 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:05:26 np0005464891 podman[306775]: 2025-10-01 17:05:26.628171481 +0000 UTC m=+0.530767793 container init a5cc1b3b7fccf465b07944875384e3ae6a7dccc8e98f6a37ee87984cf3592b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keldysh, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 13:05:26 np0005464891 podman[306789]: 2025-10-01 17:05:26.631654576 +0000 UTC m=+0.228333267 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  1 13:05:26 np0005464891 podman[306775]: 2025-10-01 17:05:26.643346546 +0000 UTC m=+0.545942798 container start a5cc1b3b7fccf465b07944875384e3ae6a7dccc8e98f6a37ee87984cf3592b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:05:26 np0005464891 keen_keldysh[306808]: 167 167
Oct  1 13:05:26 np0005464891 systemd[1]: libpod-a5cc1b3b7fccf465b07944875384e3ae6a7dccc8e98f6a37ee87984cf3592b1c.scope: Deactivated successfully.
Oct  1 13:05:26 np0005464891 podman[306775]: 2025-10-01 17:05:26.860445484 +0000 UTC m=+0.763041816 container attach a5cc1b3b7fccf465b07944875384e3ae6a7dccc8e98f6a37ee87984cf3592b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  1 13:05:26 np0005464891 podman[306775]: 2025-10-01 17:05:26.861345859 +0000 UTC m=+0.763942101 container died a5cc1b3b7fccf465b07944875384e3ae6a7dccc8e98f6a37ee87984cf3592b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keldysh, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 13:05:26 np0005464891 nova_compute[259907]: 2025-10-01 17:05:26.889 2 DEBUG nova.network.neutron [-] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:05:26 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:26.890 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:05:26 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:26.891 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 13:05:26 np0005464891 nova_compute[259907]: 2025-10-01 17:05:26.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:27 np0005464891 nova_compute[259907]: 2025-10-01 17:05:27.080 2 DEBUG nova.compute.manager [req-0c133d85-6b8d-4fa0-9755-c1752d82d7bb req-e7200aa8-1e00-4deb-bd73-845cdc18a18f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Received event network-vif-deleted-8f9e2444-b32a-47d1-aa86-3140a6eda6eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:05:27 np0005464891 nova_compute[259907]: 2025-10-01 17:05:27.081 2 INFO nova.compute.manager [req-0c133d85-6b8d-4fa0-9755-c1752d82d7bb req-e7200aa8-1e00-4deb-bd73-845cdc18a18f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Neutron deleted interface 8f9e2444-b32a-47d1-aa86-3140a6eda6eb; detaching it from the instance and deleting it from the info cache#033[00m
Oct  1 13:05:27 np0005464891 nova_compute[259907]: 2025-10-01 17:05:27.081 2 DEBUG nova.network.neutron [req-0c133d85-6b8d-4fa0-9755-c1752d82d7bb req-e7200aa8-1e00-4deb-bd73-845cdc18a18f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:05:27 np0005464891 nova_compute[259907]: 2025-10-01 17:05:27.124 2 INFO nova.compute.manager [-] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Took 2.25 seconds to deallocate network for instance.#033[00m
Oct  1 13:05:27 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 13:05:27 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.3 total, 600.0 interval#012Cumulative writes: 21K writes, 82K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 21K writes, 7225 syncs, 2.95 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3209 writes, 14K keys, 3209 commit groups, 1.0 writes per commit group, ingest: 11.63 MB, 0.02 MB/s#012Interval WAL: 3210 writes, 1274 syncs, 2.52 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 13:05:27 np0005464891 nova_compute[259907]: 2025-10-01 17:05:27.215 2 DEBUG nova.compute.manager [req-0c133d85-6b8d-4fa0-9755-c1752d82d7bb req-e7200aa8-1e00-4deb-bd73-845cdc18a18f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Detach interface failed, port_id=8f9e2444-b32a-47d1-aa86-3140a6eda6eb, reason: Instance 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  1 13:05:27 np0005464891 systemd[1]: var-lib-containers-storage-overlay-4a749179de194165df2c4c69698afe6c05963b590fabafdec187ea34b96555db-merged.mount: Deactivated successfully.
Oct  1 13:05:27 np0005464891 nova_compute[259907]: 2025-10-01 17:05:27.323 2 INFO nova.compute.manager [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] [instance: 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19] Took 0.20 seconds to detach 1 volumes for instance.#033[00m
Oct  1 13:05:27 np0005464891 nova_compute[259907]: 2025-10-01 17:05:27.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1988: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 9.2 KiB/s rd, 597 B/s wr, 12 op/s
Oct  1 13:05:27 np0005464891 podman[306775]: 2025-10-01 17:05:27.436836995 +0000 UTC m=+1.339433207 container remove a5cc1b3b7fccf465b07944875384e3ae6a7dccc8e98f6a37ee87984cf3592b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keldysh, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 13:05:27 np0005464891 systemd[1]: libpod-conmon-a5cc1b3b7fccf465b07944875384e3ae6a7dccc8e98f6a37ee87984cf3592b1c.scope: Deactivated successfully.
Oct  1 13:05:27 np0005464891 nova_compute[259907]: 2025-10-01 17:05:27.474 2 DEBUG oslo_concurrency.lockutils [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:05:27 np0005464891 nova_compute[259907]: 2025-10-01 17:05:27.475 2 DEBUG oslo_concurrency.lockutils [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:05:27 np0005464891 nova_compute[259907]: 2025-10-01 17:05:27.525 2 DEBUG oslo_concurrency.processutils [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:05:27 np0005464891 podman[306842]: 2025-10-01 17:05:27.60785315 +0000 UTC m=+0.049511187 container create 9466b5b2e57ab1a9c7ffe5ea88e90a8033323753e3bdeec68f932ed5cd78dc5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:05:27 np0005464891 systemd[1]: Started libpod-conmon-9466b5b2e57ab1a9c7ffe5ea88e90a8033323753e3bdeec68f932ed5cd78dc5a.scope.
Oct  1 13:05:27 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:05:27 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e4be894180a2f05bfdff56a4381c055fe7796ab71b9451c7d71909f48ebb4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:05:27 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e4be894180a2f05bfdff56a4381c055fe7796ab71b9451c7d71909f48ebb4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:05:27 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e4be894180a2f05bfdff56a4381c055fe7796ab71b9451c7d71909f48ebb4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:05:27 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e4be894180a2f05bfdff56a4381c055fe7796ab71b9451c7d71909f48ebb4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:05:27 np0005464891 podman[306842]: 2025-10-01 17:05:27.679342809 +0000 UTC m=+0.121000846 container init 9466b5b2e57ab1a9c7ffe5ea88e90a8033323753e3bdeec68f932ed5cd78dc5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hawking, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 13:05:27 np0005464891 podman[306842]: 2025-10-01 17:05:27.583706149 +0000 UTC m=+0.025364206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:05:27 np0005464891 podman[306842]: 2025-10-01 17:05:27.689192118 +0000 UTC m=+0.130850135 container start 9466b5b2e57ab1a9c7ffe5ea88e90a8033323753e3bdeec68f932ed5cd78dc5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 13:05:27 np0005464891 podman[306842]: 2025-10-01 17:05:27.699062469 +0000 UTC m=+0.140720486 container attach 9466b5b2e57ab1a9c7ffe5ea88e90a8033323753e3bdeec68f932ed5cd78dc5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hawking, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:05:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:05:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/138519917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:05:27 np0005464891 nova_compute[259907]: 2025-10-01 17:05:27.963 2 DEBUG oslo_concurrency.processutils [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:05:27 np0005464891 nova_compute[259907]: 2025-10-01 17:05:27.970 2 DEBUG nova.compute.provider_tree [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:05:27 np0005464891 nova_compute[259907]: 2025-10-01 17:05:27.988 2 DEBUG nova.scheduler.client.report [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:05:28 np0005464891 nova_compute[259907]: 2025-10-01 17:05:28.014 2 DEBUG oslo_concurrency.lockutils [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:05:28 np0005464891 ceph-mgr[74592]: [devicehealth INFO root] Check health
Oct  1 13:05:28 np0005464891 nova_compute[259907]: 2025-10-01 17:05:28.050 2 INFO nova.scheduler.client.report [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Deleted allocations for instance 28515950-4a2a-4cf3-a0d0-7e1a9ae85a19#033[00m
Oct  1 13:05:28 np0005464891 nova_compute[259907]: 2025-10-01 17:05:28.138 2 DEBUG oslo_concurrency.lockutils [None req-27faa793-2f10-4b5c-9f6d-0fa03055f790 1ccfcc45229e4430886117b04439c667 2284b811c3654566ae3ff36625740c71 - - default default] Lock "28515950-4a2a-4cf3-a0d0-7e1a9ae85a19" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 25.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:05:28 np0005464891 busy_hawking[306878]: {
Oct  1 13:05:28 np0005464891 busy_hawking[306878]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 13:05:28 np0005464891 busy_hawking[306878]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:05:28 np0005464891 busy_hawking[306878]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 13:05:28 np0005464891 busy_hawking[306878]:        "osd_id": 2,
Oct  1 13:05:28 np0005464891 busy_hawking[306878]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:05:28 np0005464891 busy_hawking[306878]:        "type": "bluestore"
Oct  1 13:05:28 np0005464891 busy_hawking[306878]:    },
Oct  1 13:05:28 np0005464891 busy_hawking[306878]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 13:05:28 np0005464891 busy_hawking[306878]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:05:28 np0005464891 busy_hawking[306878]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 13:05:28 np0005464891 busy_hawking[306878]:        "osd_id": 0,
Oct  1 13:05:28 np0005464891 busy_hawking[306878]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:05:28 np0005464891 busy_hawking[306878]:        "type": "bluestore"
Oct  1 13:05:28 np0005464891 busy_hawking[306878]:    },
Oct  1 13:05:28 np0005464891 busy_hawking[306878]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 13:05:28 np0005464891 busy_hawking[306878]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:05:28 np0005464891 busy_hawking[306878]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 13:05:28 np0005464891 busy_hawking[306878]:        "osd_id": 1,
Oct  1 13:05:28 np0005464891 busy_hawking[306878]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:05:28 np0005464891 busy_hawking[306878]:        "type": "bluestore"
Oct  1 13:05:28 np0005464891 busy_hawking[306878]:    }
Oct  1 13:05:28 np0005464891 busy_hawking[306878]: }
Oct  1 13:05:28 np0005464891 systemd[1]: libpod-9466b5b2e57ab1a9c7ffe5ea88e90a8033323753e3bdeec68f932ed5cd78dc5a.scope: Deactivated successfully.
Oct  1 13:05:28 np0005464891 podman[306842]: 2025-10-01 17:05:28.704977338 +0000 UTC m=+1.146635375 container died 9466b5b2e57ab1a9c7ffe5ea88e90a8033323753e3bdeec68f932ed5cd78dc5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hawking, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:05:28 np0005464891 systemd[1]: libpod-9466b5b2e57ab1a9c7ffe5ea88e90a8033323753e3bdeec68f932ed5cd78dc5a.scope: Consumed 1.021s CPU time.
Oct  1 13:05:28 np0005464891 systemd[1]: var-lib-containers-storage-overlay-39e4be894180a2f05bfdff56a4381c055fe7796ab71b9451c7d71909f48ebb4f-merged.mount: Deactivated successfully.
Oct  1 13:05:28 np0005464891 podman[306842]: 2025-10-01 17:05:28.759541232 +0000 UTC m=+1.201199249 container remove 9466b5b2e57ab1a9c7ffe5ea88e90a8033323753e3bdeec68f932ed5cd78dc5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hawking, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Oct  1 13:05:28 np0005464891 systemd[1]: libpod-conmon-9466b5b2e57ab1a9c7ffe5ea88e90a8033323753e3bdeec68f932ed5cd78dc5a.scope: Deactivated successfully.
Oct  1 13:05:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 13:05:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:05:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2279522569' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:05:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:05:28 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 13:05:28 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:05:28 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 90de1043-dbb4-4616-8164-930214d06e06 does not exist
Oct  1 13:05:28 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 4504d970-5716-4753-9919-30f2cd56914a does not exist
Oct  1 13:05:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1989: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 9.4 KiB/s rd, 597 B/s wr, 12 op/s
Oct  1 13:05:29 np0005464891 nova_compute[259907]: 2025-10-01 17:05:29.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:29 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:05:29 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:05:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e460 do_prune osdmap full prune enabled
Oct  1 13:05:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e461 e461: 3 total, 3 up, 3 in
Oct  1 13:05:30 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e461: 3 total, 3 up, 3 in
Oct  1 13:05:30 np0005464891 podman[306973]: 2025-10-01 17:05:30.947469821 +0000 UTC m=+0.057708994 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  1 13:05:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e461 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:05:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1991: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 8.4 KiB/s rd, 716 B/s wr, 11 op/s
Oct  1 13:05:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e461 do_prune osdmap full prune enabled
Oct  1 13:05:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e462 e462: 3 total, 3 up, 3 in
Oct  1 13:05:31 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e462: 3 total, 3 up, 3 in
Oct  1 13:05:32 np0005464891 nova_compute[259907]: 2025-10-01 17:05:32.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e462 do_prune osdmap full prune enabled
Oct  1 13:05:32 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e463 e463: 3 total, 3 up, 3 in
Oct  1 13:05:32 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e463: 3 total, 3 up, 3 in
Oct  1 13:05:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1994: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.0 MiB/s wr, 107 op/s
Oct  1 13:05:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:05:33.894 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:05:33 np0005464891 podman[306995]: 2025-10-01 17:05:33.933426526 +0000 UTC m=+0.050596668 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 13:05:34 np0005464891 nova_compute[259907]: 2025-10-01 17:05:34.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e463 do_prune osdmap full prune enabled
Oct  1 13:05:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e464 e464: 3 total, 3 up, 3 in
Oct  1 13:05:34 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e464: 3 total, 3 up, 3 in
Oct  1 13:05:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1996: 321 pgs: 12 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 305 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 4.9 MiB/s rd, 4.9 MiB/s wr, 143 op/s
Oct  1 13:05:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e464 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:05:36 np0005464891 nova_compute[259907]: 2025-10-01 17:05:36.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:05:37 np0005464891 nova_compute[259907]: 2025-10-01 17:05:37.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:05:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/183374613' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:05:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:05:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/183374613' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:05:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1997: 321 pgs: 12 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 305 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.2 MiB/s wr, 124 op/s
Oct  1 13:05:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e464 do_prune osdmap full prune enabled
Oct  1 13:05:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e465 e465: 3 total, 3 up, 3 in
Oct  1 13:05:38 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e465: 3 total, 3 up, 3 in
Oct  1 13:05:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v1999: 321 pgs: 9 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 310 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.8 MiB/s wr, 155 op/s
Oct  1 13:05:39 np0005464891 nova_compute[259907]: 2025-10-01 17:05:39.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:05:39 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2734136246' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:05:40 np0005464891 nova_compute[259907]: 2025-10-01 17:05:40.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:05:40 np0005464891 nova_compute[259907]: 2025-10-01 17:05:40.830 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:05:40 np0005464891 nova_compute[259907]: 2025-10-01 17:05:40.831 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:05:40 np0005464891 nova_compute[259907]: 2025-10-01 17:05:40.831 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:05:40 np0005464891 nova_compute[259907]: 2025-10-01 17:05:40.831 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 13:05:40 np0005464891 nova_compute[259907]: 2025-10-01 17:05:40.831 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:05:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:05:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2457994998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:05:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e465 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:05:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e465 do_prune osdmap full prune enabled
Oct  1 13:05:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e466 e466: 3 total, 3 up, 3 in
Oct  1 13:05:41 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e466: 3 total, 3 up, 3 in
Oct  1 13:05:41 np0005464891 nova_compute[259907]: 2025-10-01 17:05:41.316 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:05:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2001: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 982 KiB/s rd, 7.3 MiB/s wr, 124 op/s
Oct  1 13:05:41 np0005464891 nova_compute[259907]: 2025-10-01 17:05:41.511 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 13:05:41 np0005464891 nova_compute[259907]: 2025-10-01 17:05:41.512 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4333MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 13:05:41 np0005464891 nova_compute[259907]: 2025-10-01 17:05:41.512 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:05:41 np0005464891 nova_compute[259907]: 2025-10-01 17:05:41.513 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:05:41 np0005464891 nova_compute[259907]: 2025-10-01 17:05:41.671 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 13:05:41 np0005464891 nova_compute[259907]: 2025-10-01 17:05:41.672 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 13:05:41 np0005464891 nova_compute[259907]: 2025-10-01 17:05:41.755 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:05:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:05:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:05:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:05:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:05:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:05:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:05:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:05:42 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4169569816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:05:42 np0005464891 nova_compute[259907]: 2025-10-01 17:05:42.163 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:05:42 np0005464891 nova_compute[259907]: 2025-10-01 17:05:42.170 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:05:42 np0005464891 nova_compute[259907]: 2025-10-01 17:05:42.210 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:05:42 np0005464891 nova_compute[259907]: 2025-10-01 17:05:42.285 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 13:05:42 np0005464891 nova_compute[259907]: 2025-10-01 17:05:42.285 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:05:42 np0005464891 nova_compute[259907]: 2025-10-01 17:05:42.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:43 np0005464891 nova_compute[259907]: 2025-10-01 17:05:43.287 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:05:43 np0005464891 nova_compute[259907]: 2025-10-01 17:05:43.287 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:05:43 np0005464891 nova_compute[259907]: 2025-10-01 17:05:43.364 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:05:43 np0005464891 nova_compute[259907]: 2025-10-01 17:05:43.364 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 13:05:43 np0005464891 nova_compute[259907]: 2025-10-01 17:05:43.364 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 13:05:43 np0005464891 nova_compute[259907]: 2025-10-01 17:05:43.396 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 13:05:43 np0005464891 nova_compute[259907]: 2025-10-01 17:05:43.396 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:05:43 np0005464891 nova_compute[259907]: 2025-10-01 17:05:43.397 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 13:05:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2002: 321 pgs: 321 active+clean; 2.6 GiB data, 2.8 GiB used, 57 GiB / 60 GiB avail; 72 KiB/s rd, 33 MiB/s wr, 115 op/s
Oct  1 13:05:43 np0005464891 nova_compute[259907]: 2025-10-01 17:05:43.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:05:44 np0005464891 nova_compute[259907]: 2025-10-01 17:05:44.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2003: 321 pgs: 321 active+clean; 2.6 GiB data, 2.8 GiB used, 57 GiB / 60 GiB avail; 72 KiB/s rd, 33 MiB/s wr, 115 op/s
Oct  1 13:05:45 np0005464891 podman[307063]: 2025-10-01 17:05:45.946332423 +0000 UTC m=+0.061720467 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  1 13:05:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:05:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e466 do_prune osdmap full prune enabled
Oct  1 13:05:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e467 e467: 3 total, 3 up, 3 in
Oct  1 13:05:46 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e467: 3 total, 3 up, 3 in
Oct  1 13:05:46 np0005464891 nova_compute[259907]: 2025-10-01 17:05:46.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:05:46 np0005464891 nova_compute[259907]: 2025-10-01 17:05:46.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:05:47 np0005464891 nova_compute[259907]: 2025-10-01 17:05:47.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2005: 321 pgs: 321 active+clean; 2.8 GiB data, 3.0 GiB used, 57 GiB / 60 GiB avail; 119 KiB/s rd, 58 MiB/s wr, 199 op/s
Oct  1 13:05:47 np0005464891 nova_compute[259907]: 2025-10-01 17:05:47.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:05:49 np0005464891 nova_compute[259907]: 2025-10-01 17:05:49.092 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:05:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2006: 321 pgs: 321 active+clean; 3.1 GiB data, 3.3 GiB used, 57 GiB / 60 GiB avail; 90 KiB/s rd, 89 MiB/s wr, 159 op/s
Oct  1 13:05:49 np0005464891 nova_compute[259907]: 2025-10-01 17:05:49.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e467 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:05:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2007: 321 pgs: 321 active+clean; 2.9 GiB data, 3.5 GiB used, 56 GiB / 60 GiB avail; 141 KiB/s rd, 88 MiB/s wr, 250 op/s
Oct  1 13:05:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e467 do_prune osdmap full prune enabled
Oct  1 13:05:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e468 e468: 3 total, 3 up, 3 in
Oct  1 13:05:51 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e468: 3 total, 3 up, 3 in
Oct  1 13:05:51 np0005464891 nova_compute[259907]: 2025-10-01 17:05:51.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:05:52 np0005464891 nova_compute[259907]: 2025-10-01 17:05:52.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:05:52 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3500410753' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:05:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:05:52 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3500410753' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:05:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2009: 321 pgs: 321 active+clean; 2.4 GiB data, 3.0 GiB used, 57 GiB / 60 GiB avail; 201 KiB/s rd, 95 MiB/s wr, 358 op/s
Oct  1 13:05:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e468 do_prune osdmap full prune enabled
Oct  1 13:05:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e469 e469: 3 total, 3 up, 3 in
Oct  1 13:05:53 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e469: 3 total, 3 up, 3 in
Oct  1 13:05:54 np0005464891 nova_compute[259907]: 2025-10-01 17:05:54.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e469 do_prune osdmap full prune enabled
Oct  1 13:05:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e470 e470: 3 total, 3 up, 3 in
Oct  1 13:05:54 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e470: 3 total, 3 up, 3 in
Oct  1 13:05:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:05:55 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4273837993' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:05:55 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:05:55 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4273837993' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:05:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2012: 321 pgs: 321 active+clean; 2.4 GiB data, 3.0 GiB used, 57 GiB / 60 GiB avail; 173 KiB/s rd, 43 MiB/s wr, 305 op/s
Oct  1 13:05:55 np0005464891 nova_compute[259907]: 2025-10-01 17:05:55.815 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:05:55 np0005464891 nova_compute[259907]: 2025-10-01 17:05:55.816 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  1 13:05:55 np0005464891 nova_compute[259907]: 2025-10-01 17:05:55.833 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e470 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:56.358181) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338356358217, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 762, "num_deletes": 259, "total_data_size": 878494, "memory_usage": 893264, "flush_reason": "Manual Compaction"}
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338356365589, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 868788, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39840, "largest_seqno": 40601, "table_properties": {"data_size": 864708, "index_size": 1796, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9020, "raw_average_key_size": 19, "raw_value_size": 856541, "raw_average_value_size": 1853, "num_data_blocks": 79, "num_entries": 462, "num_filter_entries": 462, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759338312, "oldest_key_time": 1759338312, "file_creation_time": 1759338356, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 7449 microseconds, and 4058 cpu microseconds.
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:56.365628) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 868788 bytes OK
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:56.365645) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:56.366992) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:56.367006) EVENT_LOG_v1 {"time_micros": 1759338356367002, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:56.367023) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 874533, prev total WAL file size 874533, number of live WAL files 2.
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:56.367604) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323536' seq:72057594037927935, type:22 .. '6C6F676D0031353037' seq:0, type:0; will stop at (end)
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(848KB)], [83(9733KB)]
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338356367635, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 10836135, "oldest_snapshot_seqno": -1}
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6978 keys, 10678678 bytes, temperature: kUnknown
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338356448371, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 10678678, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10627642, "index_size": 32492, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17477, "raw_key_size": 177439, "raw_average_key_size": 25, "raw_value_size": 10498079, "raw_average_value_size": 1504, "num_data_blocks": 1292, "num_entries": 6978, "num_filter_entries": 6978, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759338356, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:56.449122) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 10678678 bytes
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:56.451108) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.0 rd, 132.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.5 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(24.8) write-amplify(12.3) OK, records in: 7508, records dropped: 530 output_compression: NoCompression
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:56.451140) EVENT_LOG_v1 {"time_micros": 1759338356451126, "job": 48, "event": "compaction_finished", "compaction_time_micros": 80868, "compaction_time_cpu_micros": 30810, "output_level": 6, "num_output_files": 1, "total_output_size": 10678678, "num_input_records": 7508, "num_output_records": 6978, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338356451601, "job": 48, "event": "table_file_deletion", "file_number": 85}
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338356454720, "job": 48, "event": "table_file_deletion", "file_number": 83}
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:56.367432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:56.454822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:56.454829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:56.454831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:56.454833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:05:56.454835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:05:56 np0005464891 ovn_controller[152409]: 2025-10-01T17:05:56Z|00257|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e470 do_prune osdmap full prune enabled
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e471 e471: 3 total, 3 up, 3 in
Oct  1 13:05:56 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e471: 3 total, 3 up, 3 in
Oct  1 13:05:56 np0005464891 podman[307082]: 2025-10-01 17:05:56.998510999 +0000 UTC m=+0.095622542 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 13:05:57 np0005464891 nova_compute[259907]: 2025-10-01 17:05:57.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2014: 321 pgs: 321 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 95 KiB/s rd, 17 MiB/s wr, 156 op/s
Oct  1 13:05:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e471 do_prune osdmap full prune enabled
Oct  1 13:05:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e472 e472: 3 total, 3 up, 3 in
Oct  1 13:05:57 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e472: 3 total, 3 up, 3 in
Oct  1 13:05:58 np0005464891 nova_compute[259907]: 2025-10-01 17:05:58.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:05:58 np0005464891 nova_compute[259907]: 2025-10-01 17:05:58.805 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  1 13:05:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:05:58 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3114283907' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:05:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:05:58 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3114283907' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:05:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2016: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 2.1 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 72 KiB/s rd, 4.9 KiB/s wr, 110 op/s
Oct  1 13:05:59 np0005464891 nova_compute[259907]: 2025-10-01 17:05:59.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:05:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e472 do_prune osdmap full prune enabled
Oct  1 13:05:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e473 e473: 3 total, 3 up, 3 in
Oct  1 13:05:59 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e473: 3 total, 3 up, 3 in
Oct  1 13:06:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:06:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e473 do_prune osdmap full prune enabled
Oct  1 13:06:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e474 e474: 3 total, 3 up, 3 in
Oct  1 13:06:01 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e474: 3 total, 3 up, 3 in
Oct  1 13:06:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2019: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 1.7 GiB data, 2.2 GiB used, 58 GiB / 60 GiB avail; 84 KiB/s rd, 5.8 KiB/s wr, 151 op/s
Oct  1 13:06:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:06:01 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1717020407' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:06:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:06:01 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1717020407' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:06:01 np0005464891 podman[307108]: 2025-10-01 17:06:01.959636112 +0000 UTC m=+0.071004732 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  1 13:06:02 np0005464891 nova_compute[259907]: 2025-10-01 17:06:02.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2020: 321 pgs: 321 active+clean; 651 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 4.5 KiB/s wr, 164 op/s
Oct  1 13:06:04 np0005464891 nova_compute[259907]: 2025-10-01 17:06:04.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:04 np0005464891 podman[307126]: 2025-10-01 17:06:04.954140173 +0000 UTC m=+0.056756909 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct  1 13:06:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2021: 321 pgs: 321 active+clean; 651 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.5 KiB/s wr, 129 op/s
Oct  1 13:06:06 np0005464891 nova_compute[259907]: 2025-10-01 17:06:06.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:06 np0005464891 nova_compute[259907]: 2025-10-01 17:06:06.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e474 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:06:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e474 do_prune osdmap full prune enabled
Oct  1 13:06:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e475 e475: 3 total, 3 up, 3 in
Oct  1 13:06:06 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e475: 3 total, 3 up, 3 in
Oct  1 13:06:07 np0005464891 nova_compute[259907]: 2025-10-01 17:06:07.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2023: 321 pgs: 321 active+clean; 271 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.4 KiB/s wr, 100 op/s
Oct  1 13:06:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2024: 321 pgs: 321 active+clean; 271 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s rd, 507 B/s wr, 51 op/s
Oct  1 13:06:09 np0005464891 nova_compute[259907]: 2025-10-01 17:06:09.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e475 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:06:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2025: 321 pgs: 321 active+clean; 271 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 409 B/s wr, 41 op/s
Oct  1 13:06:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:06:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:06:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:06:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:06:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:06:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:06:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_17:06:12
Oct  1 13:06:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 13:06:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 13:06:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'images']
Oct  1 13:06:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 13:06:12 np0005464891 nova_compute[259907]: 2025-10-01 17:06:12.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:06:12.467 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:06:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:06:12.468 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:06:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:06:12.468 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:06:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 13:06:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:06:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 13:06:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:06:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:06:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:06:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:06:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:06:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:06:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:06:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2026: 321 pgs: 321 active+clean; 271 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 614 B/s wr, 13 op/s
Oct  1 13:06:13 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:06:13 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1391588899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:06:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e475 do_prune osdmap full prune enabled
Oct  1 13:06:14 np0005464891 nova_compute[259907]: 2025-10-01 17:06:14.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e476 e476: 3 total, 3 up, 3 in
Oct  1 13:06:15 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e476: 3 total, 3 up, 3 in
Oct  1 13:06:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2028: 321 pgs: 321 active+clean; 271 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 451 B/s rd, 225 B/s wr, 0 op/s
Oct  1 13:06:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e476 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:06:16 np0005464891 podman[307148]: 2025-10-01 17:06:16.942537852 +0000 UTC m=+0.060054289 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:06:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e476 do_prune osdmap full prune enabled
Oct  1 13:06:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e477 e477: 3 total, 3 up, 3 in
Oct  1 13:06:17 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e477: 3 total, 3 up, 3 in
Oct  1 13:06:17 np0005464891 nova_compute[259907]: 2025-10-01 17:06:17.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2030: 321 pgs: 321 active+clean; 271 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 3.4 KiB/s rd, 511 B/s wr, 5 op/s
Oct  1 13:06:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2031: 321 pgs: 321 active+clean; 271 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct  1 13:06:19 np0005464891 nova_compute[259907]: 2025-10-01 17:06:19.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:06:19 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3119264429' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:06:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e477 do_prune osdmap full prune enabled
Oct  1 13:06:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e478 e478: 3 total, 3 up, 3 in
Oct  1 13:06:20 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e478: 3 total, 3 up, 3 in
Oct  1 13:06:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e478 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:06:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2033: 321 pgs: 321 active+clean; 271 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.7 KiB/s wr, 36 op/s
Oct  1 13:06:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e478 do_prune osdmap full prune enabled
Oct  1 13:06:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e479 e479: 3 total, 3 up, 3 in
Oct  1 13:06:21 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e479: 3 total, 3 up, 3 in
Oct  1 13:06:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 13:06:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:06:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 13:06:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:06:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 13:06:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:06:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0028946490199908835 of space, bias 1.0, pg target 0.8683947059972651 quantized to 32 (current 32)
Oct  1 13:06:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:06:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  1 13:06:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:06:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct  1 13:06:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:06:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 13:06:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:06:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:06:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:06:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 13:06:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:06:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 13:06:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:06:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:06:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:06:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 13:06:22 np0005464891 nova_compute[259907]: 2025-10-01 17:06:22.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e479 do_prune osdmap full prune enabled
Oct  1 13:06:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e480 e480: 3 total, 3 up, 3 in
Oct  1 13:06:22 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e480: 3 total, 3 up, 3 in
Oct  1 13:06:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2036: 321 pgs: 321 active+clean; 271 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 4.7 KiB/s wr, 57 op/s
Oct  1 13:06:24 np0005464891 nova_compute[259907]: 2025-10-01 17:06:24.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:06:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1214912297' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:06:25 np0005464891 nova_compute[259907]: 2025-10-01 17:06:25.084 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:06:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2037: 321 pgs: 321 active+clean; 271 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 3.2 KiB/s wr, 45 op/s
Oct  1 13:06:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e480 do_prune osdmap full prune enabled
Oct  1 13:06:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e481 e481: 3 total, 3 up, 3 in
Oct  1 13:06:26 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e481: 3 total, 3 up, 3 in
Oct  1 13:06:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e481 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:06:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e481 do_prune osdmap full prune enabled
Oct  1 13:06:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e482 e482: 3 total, 3 up, 3 in
Oct  1 13:06:27 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e482: 3 total, 3 up, 3 in
Oct  1 13:06:27 np0005464891 nova_compute[259907]: 2025-10-01 17:06:27.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2040: 321 pgs: 321 active+clean; 271 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 KiB/s wr, 53 op/s
Oct  1 13:06:27 np0005464891 podman[307167]: 2025-10-01 17:06:27.979233539 +0000 UTC m=+0.094168662 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  1 13:06:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e482 do_prune osdmap full prune enabled
Oct  1 13:06:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e483 e483: 3 total, 3 up, 3 in
Oct  1 13:06:29 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e483: 3 total, 3 up, 3 in
Oct  1 13:06:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2042: 321 pgs: 321 active+clean; 271 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.2 KiB/s wr, 51 op/s
Oct  1 13:06:29 np0005464891 nova_compute[259907]: 2025-10-01 17:06:29.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:06:29 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:06:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 13:06:29 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:06:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 13:06:29 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:06:29 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 2ef38443-f968-4d3a-94a3-55d91f866b02 does not exist
Oct  1 13:06:29 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 048922a4-5aab-4672-b194-663f6919748e does not exist
Oct  1 13:06:29 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 65758b01-b4ee-4005-971a-c519f2d711fa does not exist
Oct  1 13:06:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 13:06:29 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 13:06:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 13:06:29 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:06:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:06:29 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:06:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:06:30 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3691042673' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:06:30 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:06:30 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3691042673' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:06:30 np0005464891 podman[307465]: 2025-10-01 17:06:30.427642357 +0000 UTC m=+0.039531832 container create 99d77242c867171a76dee7de059df93181d99ba25d9801ac02b149ccf34f63c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 13:06:30 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:06:30 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:06:30 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:06:30 np0005464891 systemd[1]: Started libpod-conmon-99d77242c867171a76dee7de059df93181d99ba25d9801ac02b149ccf34f63c1.scope.
Oct  1 13:06:30 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:06:30 np0005464891 podman[307465]: 2025-10-01 17:06:30.409047503 +0000 UTC m=+0.020936998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:06:30 np0005464891 podman[307465]: 2025-10-01 17:06:30.517594332 +0000 UTC m=+0.129483827 container init 99d77242c867171a76dee7de059df93181d99ba25d9801ac02b149ccf34f63c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_nobel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 13:06:30 np0005464891 podman[307465]: 2025-10-01 17:06:30.528576735 +0000 UTC m=+0.140466200 container start 99d77242c867171a76dee7de059df93181d99ba25d9801ac02b149ccf34f63c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_nobel, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:06:30 np0005464891 elastic_nobel[307481]: 167 167
Oct  1 13:06:30 np0005464891 systemd[1]: libpod-99d77242c867171a76dee7de059df93181d99ba25d9801ac02b149ccf34f63c1.scope: Deactivated successfully.
Oct  1 13:06:30 np0005464891 podman[307465]: 2025-10-01 17:06:30.537001628 +0000 UTC m=+0.148891123 container attach 99d77242c867171a76dee7de059df93181d99ba25d9801ac02b149ccf34f63c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_nobel, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:06:30 np0005464891 podman[307465]: 2025-10-01 17:06:30.537582843 +0000 UTC m=+0.149472318 container died 99d77242c867171a76dee7de059df93181d99ba25d9801ac02b149ccf34f63c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_nobel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:06:30 np0005464891 systemd[1]: var-lib-containers-storage-overlay-fc02b9929020054bd4e867b6df0233c0890d10b67f0c0e463d6df50fe469ff62-merged.mount: Deactivated successfully.
Oct  1 13:06:30 np0005464891 podman[307465]: 2025-10-01 17:06:30.59430527 +0000 UTC m=+0.206194745 container remove 99d77242c867171a76dee7de059df93181d99ba25d9801ac02b149ccf34f63c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  1 13:06:30 np0005464891 systemd[1]: libpod-conmon-99d77242c867171a76dee7de059df93181d99ba25d9801ac02b149ccf34f63c1.scope: Deactivated successfully.
Oct  1 13:06:30 np0005464891 podman[307503]: 2025-10-01 17:06:30.760896461 +0000 UTC m=+0.042879235 container create 66cad78d7beb48508e98626e8f7eb10a361f368233ffdf7bf33a5617d889e5e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 13:06:30 np0005464891 systemd[1]: Started libpod-conmon-66cad78d7beb48508e98626e8f7eb10a361f368233ffdf7bf33a5617d889e5e8.scope.
Oct  1 13:06:30 np0005464891 podman[307503]: 2025-10-01 17:06:30.741634649 +0000 UTC m=+0.023617443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:06:30 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:06:30 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed638124076c2f9e0b5e357ec20df0cc7e137ec30c8dec62057f92e5d2a6c06b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:06:30 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed638124076c2f9e0b5e357ec20df0cc7e137ec30c8dec62057f92e5d2a6c06b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:06:30 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed638124076c2f9e0b5e357ec20df0cc7e137ec30c8dec62057f92e5d2a6c06b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:06:30 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed638124076c2f9e0b5e357ec20df0cc7e137ec30c8dec62057f92e5d2a6c06b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:06:30 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed638124076c2f9e0b5e357ec20df0cc7e137ec30c8dec62057f92e5d2a6c06b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 13:06:30 np0005464891 podman[307503]: 2025-10-01 17:06:30.871935107 +0000 UTC m=+0.153917911 container init 66cad78d7beb48508e98626e8f7eb10a361f368233ffdf7bf33a5617d889e5e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 13:06:30 np0005464891 podman[307503]: 2025-10-01 17:06:30.881763599 +0000 UTC m=+0.163746373 container start 66cad78d7beb48508e98626e8f7eb10a361f368233ffdf7bf33a5617d889e5e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:06:30 np0005464891 podman[307503]: 2025-10-01 17:06:30.88686932 +0000 UTC m=+0.168852114 container attach 66cad78d7beb48508e98626e8f7eb10a361f368233ffdf7bf33a5617d889e5e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  1 13:06:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e483 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:06:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e483 do_prune osdmap full prune enabled
Oct  1 13:06:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e484 e484: 3 total, 3 up, 3 in
Oct  1 13:06:31 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e484: 3 total, 3 up, 3 in
Oct  1 13:06:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2044: 321 pgs: 321 active+clean; 271 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 4.9 KiB/s wr, 114 op/s
Oct  1 13:06:31 np0005464891 heuristic_burnell[307519]: --> passed data devices: 0 physical, 3 LVM
Oct  1 13:06:31 np0005464891 heuristic_burnell[307519]: --> relative data size: 1.0
Oct  1 13:06:31 np0005464891 heuristic_burnell[307519]: --> All data devices are unavailable
Oct  1 13:06:31 np0005464891 systemd[1]: libpod-66cad78d7beb48508e98626e8f7eb10a361f368233ffdf7bf33a5617d889e5e8.scope: Deactivated successfully.
Oct  1 13:06:31 np0005464891 podman[307503]: 2025-10-01 17:06:31.927064188 +0000 UTC m=+1.209046982 container died 66cad78d7beb48508e98626e8f7eb10a361f368233ffdf7bf33a5617d889e5e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 13:06:31 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ed638124076c2f9e0b5e357ec20df0cc7e137ec30c8dec62057f92e5d2a6c06b-merged.mount: Deactivated successfully.
Oct  1 13:06:31 np0005464891 podman[307503]: 2025-10-01 17:06:31.991793525 +0000 UTC m=+1.273776299 container remove 66cad78d7beb48508e98626e8f7eb10a361f368233ffdf7bf33a5617d889e5e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:06:31 np0005464891 systemd[1]: libpod-conmon-66cad78d7beb48508e98626e8f7eb10a361f368233ffdf7bf33a5617d889e5e8.scope: Deactivated successfully.
Oct  1 13:06:32 np0005464891 podman[307562]: 2025-10-01 17:06:32.096351324 +0000 UTC m=+0.067814025 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  1 13:06:32 np0005464891 nova_compute[259907]: 2025-10-01 17:06:32.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:32 np0005464891 podman[307721]: 2025-10-01 17:06:32.594030387 +0000 UTC m=+0.026275566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:06:32 np0005464891 podman[307721]: 2025-10-01 17:06:32.71216107 +0000 UTC m=+0.144406239 container create 332cadf3f03e0dca540e0122bcba6717543de7bebbe148d29ac9e7b1b30a75e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_darwin, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  1 13:06:32 np0005464891 systemd[1]: Started libpod-conmon-332cadf3f03e0dca540e0122bcba6717543de7bebbe148d29ac9e7b1b30a75e6.scope.
Oct  1 13:06:32 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:06:33 np0005464891 podman[307721]: 2025-10-01 17:06:33.132617051 +0000 UTC m=+0.564862230 container init 332cadf3f03e0dca540e0122bcba6717543de7bebbe148d29ac9e7b1b30a75e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_darwin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  1 13:06:33 np0005464891 podman[307721]: 2025-10-01 17:06:33.139438531 +0000 UTC m=+0.571683690 container start 332cadf3f03e0dca540e0122bcba6717543de7bebbe148d29ac9e7b1b30a75e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  1 13:06:33 np0005464891 systemd[1]: libpod-332cadf3f03e0dca540e0122bcba6717543de7bebbe148d29ac9e7b1b30a75e6.scope: Deactivated successfully.
Oct  1 13:06:33 np0005464891 sad_darwin[307737]: 167 167
Oct  1 13:06:33 np0005464891 conmon[307737]: conmon 332cadf3f03e0dca540e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-332cadf3f03e0dca540e0122bcba6717543de7bebbe148d29ac9e7b1b30a75e6.scope/container/memory.events
Oct  1 13:06:33 np0005464891 podman[307721]: 2025-10-01 17:06:33.257257144 +0000 UTC m=+0.689502363 container attach 332cadf3f03e0dca540e0122bcba6717543de7bebbe148d29ac9e7b1b30a75e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:06:33 np0005464891 podman[307721]: 2025-10-01 17:06:33.257786929 +0000 UTC m=+0.690032098 container died 332cadf3f03e0dca540e0122bcba6717543de7bebbe148d29ac9e7b1b30a75e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_darwin, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  1 13:06:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2045: 321 pgs: 321 active+clean; 271 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 4.4 KiB/s wr, 111 op/s
Oct  1 13:06:33 np0005464891 systemd[1]: var-lib-containers-storage-overlay-cdedeea24d22afe658058361729ceb493753ac189df4238982d8e12ea71cff29-merged.mount: Deactivated successfully.
Oct  1 13:06:33 np0005464891 podman[307721]: 2025-10-01 17:06:33.519036964 +0000 UTC m=+0.951282143 container remove 332cadf3f03e0dca540e0122bcba6717543de7bebbe148d29ac9e7b1b30a75e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_darwin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:06:33 np0005464891 systemd[1]: libpod-conmon-332cadf3f03e0dca540e0122bcba6717543de7bebbe148d29ac9e7b1b30a75e6.scope: Deactivated successfully.
Oct  1 13:06:33 np0005464891 podman[307761]: 2025-10-01 17:06:33.721071983 +0000 UTC m=+0.040798517 container create 86e84c226bb828789e67738d3d3fba888f935912bdd4c8c15ed2bfc55bc09534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 13:06:33 np0005464891 systemd[1]: Started libpod-conmon-86e84c226bb828789e67738d3d3fba888f935912bdd4c8c15ed2bfc55bc09534.scope.
Oct  1 13:06:33 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:06:33 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5471ad208275a9c07aea23270c12de0ffa369ef1104f57138fe4191c7c5de068/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:06:33 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5471ad208275a9c07aea23270c12de0ffa369ef1104f57138fe4191c7c5de068/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:06:33 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5471ad208275a9c07aea23270c12de0ffa369ef1104f57138fe4191c7c5de068/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:06:33 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5471ad208275a9c07aea23270c12de0ffa369ef1104f57138fe4191c7c5de068/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:06:33 np0005464891 podman[307761]: 2025-10-01 17:06:33.702620154 +0000 UTC m=+0.022346708 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:06:33 np0005464891 podman[307761]: 2025-10-01 17:06:33.799881771 +0000 UTC m=+0.119608335 container init 86e84c226bb828789e67738d3d3fba888f935912bdd4c8c15ed2bfc55bc09534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_taussig, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:06:33 np0005464891 podman[307761]: 2025-10-01 17:06:33.811148162 +0000 UTC m=+0.130874696 container start 86e84c226bb828789e67738d3d3fba888f935912bdd4c8c15ed2bfc55bc09534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_taussig, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:06:33 np0005464891 podman[307761]: 2025-10-01 17:06:33.815346157 +0000 UTC m=+0.135072711 container attach 86e84c226bb828789e67738d3d3fba888f935912bdd4c8c15ed2bfc55bc09534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_taussig, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:06:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:06:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1618383924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:06:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e484 do_prune osdmap full prune enabled
Oct  1 13:06:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e485 e485: 3 total, 3 up, 3 in
Oct  1 13:06:34 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e485: 3 total, 3 up, 3 in
Oct  1 13:06:34 np0005464891 elated_taussig[307778]: {
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:    "0": [
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:        {
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "devices": [
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "/dev/loop3"
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            ],
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "lv_name": "ceph_lv0",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "lv_size": "21470642176",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "name": "ceph_lv0",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "tags": {
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.cluster_name": "ceph",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.crush_device_class": "",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.encrypted": "0",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.osd_id": "0",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.type": "block",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.vdo": "0"
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            },
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "type": "block",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "vg_name": "ceph_vg0"
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:        }
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:    ],
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:    "1": [
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:        {
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "devices": [
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "/dev/loop4"
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            ],
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "lv_name": "ceph_lv1",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "lv_size": "21470642176",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "name": "ceph_lv1",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "tags": {
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.cluster_name": "ceph",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.crush_device_class": "",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.encrypted": "0",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.osd_id": "1",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.type": "block",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.vdo": "0"
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            },
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "type": "block",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "vg_name": "ceph_vg1"
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:        }
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:    ],
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:    "2": [
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:        {
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "devices": [
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "/dev/loop5"
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            ],
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "lv_name": "ceph_lv2",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "lv_size": "21470642176",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "name": "ceph_lv2",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "tags": {
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.cluster_name": "ceph",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.crush_device_class": "",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.encrypted": "0",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.osd_id": "2",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.type": "block",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:                "ceph.vdo": "0"
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            },
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "type": "block",
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:            "vg_name": "ceph_vg2"
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:        }
Oct  1 13:06:34 np0005464891 elated_taussig[307778]:    ]
Oct  1 13:06:34 np0005464891 elated_taussig[307778]: }
Oct  1 13:06:34 np0005464891 systemd[1]: libpod-86e84c226bb828789e67738d3d3fba888f935912bdd4c8c15ed2bfc55bc09534.scope: Deactivated successfully.
Oct  1 13:06:34 np0005464891 podman[307761]: 2025-10-01 17:06:34.615632739 +0000 UTC m=+0.935359273 container died 86e84c226bb828789e67738d3d3fba888f935912bdd4c8c15ed2bfc55bc09534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_taussig, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:06:34 np0005464891 systemd[1]: var-lib-containers-storage-overlay-5471ad208275a9c07aea23270c12de0ffa369ef1104f57138fe4191c7c5de068-merged.mount: Deactivated successfully.
Oct  1 13:06:34 np0005464891 nova_compute[259907]: 2025-10-01 17:06:34.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:34 np0005464891 podman[307761]: 2025-10-01 17:06:34.692048529 +0000 UTC m=+1.011775063 container remove 86e84c226bb828789e67738d3d3fba888f935912bdd4c8c15ed2bfc55bc09534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_taussig, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 13:06:34 np0005464891 systemd[1]: libpod-conmon-86e84c226bb828789e67738d3d3fba888f935912bdd4c8c15ed2bfc55bc09534.scope: Deactivated successfully.
Oct  1 13:06:35 np0005464891 podman[307901]: 2025-10-01 17:06:35.078468411 +0000 UTC m=+0.063052642 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  1 13:06:35 np0005464891 podman[307961]: 2025-10-01 17:06:35.336381164 +0000 UTC m=+0.044543201 container create 3535b61ed11452f59241083b1ab3c3b077fff4bc0da38e1099b0a8729e118ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:06:35 np0005464891 systemd[1]: Started libpod-conmon-3535b61ed11452f59241083b1ab3c3b077fff4bc0da38e1099b0a8729e118ecf.scope.
Oct  1 13:06:35 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:06:35 np0005464891 podman[307961]: 2025-10-01 17:06:35.313150603 +0000 UTC m=+0.021312620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:06:35 np0005464891 podman[307961]: 2025-10-01 17:06:35.426185864 +0000 UTC m=+0.134347951 container init 3535b61ed11452f59241083b1ab3c3b077fff4bc0da38e1099b0a8729e118ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_brattain, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:06:35 np0005464891 podman[307961]: 2025-10-01 17:06:35.434699289 +0000 UTC m=+0.142861296 container start 3535b61ed11452f59241083b1ab3c3b077fff4bc0da38e1099b0a8729e118ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 13:06:35 np0005464891 podman[307961]: 2025-10-01 17:06:35.438966207 +0000 UTC m=+0.147128284 container attach 3535b61ed11452f59241083b1ab3c3b077fff4bc0da38e1099b0a8729e118ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 13:06:35 np0005464891 gallant_brattain[307977]: 167 167
Oct  1 13:06:35 np0005464891 systemd[1]: libpod-3535b61ed11452f59241083b1ab3c3b077fff4bc0da38e1099b0a8729e118ecf.scope: Deactivated successfully.
Oct  1 13:06:35 np0005464891 podman[307961]: 2025-10-01 17:06:35.444437869 +0000 UTC m=+0.152599906 container died 3535b61ed11452f59241083b1ab3c3b077fff4bc0da38e1099b0a8729e118ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_brattain, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:06:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2047: 321 pgs: 321 active+clean; 271 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 3.7 KiB/s wr, 92 op/s
Oct  1 13:06:35 np0005464891 systemd[1]: var-lib-containers-storage-overlay-57891d5942fa6846be6add319c94dbd5ca6f4421d69bb920da5470331b4e23c0-merged.mount: Deactivated successfully.
Oct  1 13:06:35 np0005464891 podman[307961]: 2025-10-01 17:06:35.506703448 +0000 UTC m=+0.214865445 container remove 3535b61ed11452f59241083b1ab3c3b077fff4bc0da38e1099b0a8729e118ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_brattain, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 13:06:35 np0005464891 systemd[1]: libpod-conmon-3535b61ed11452f59241083b1ab3c3b077fff4bc0da38e1099b0a8729e118ecf.scope: Deactivated successfully.
Oct  1 13:06:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e485 do_prune osdmap full prune enabled
Oct  1 13:06:35 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e486 e486: 3 total, 3 up, 3 in
Oct  1 13:06:35 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e486: 3 total, 3 up, 3 in
Oct  1 13:06:35 np0005464891 podman[308001]: 2025-10-01 17:06:35.676065115 +0000 UTC m=+0.043100161 container create 9eb9b8d04b943a4cdad301940cd1ec9ba028e97ca31e0d9267fe197d61e26533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:06:35 np0005464891 systemd[1]: Started libpod-conmon-9eb9b8d04b943a4cdad301940cd1ec9ba028e97ca31e0d9267fe197d61e26533.scope.
Oct  1 13:06:35 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:06:35 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ed9e9ecbaeae4c880d033a4004fc21a6fe25cb2a1742ca7bdb482de16e936f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:06:35 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ed9e9ecbaeae4c880d033a4004fc21a6fe25cb2a1742ca7bdb482de16e936f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:06:35 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ed9e9ecbaeae4c880d033a4004fc21a6fe25cb2a1742ca7bdb482de16e936f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:06:35 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ed9e9ecbaeae4c880d033a4004fc21a6fe25cb2a1742ca7bdb482de16e936f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:06:35 np0005464891 podman[308001]: 2025-10-01 17:06:35.655735384 +0000 UTC m=+0.022770450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:06:35 np0005464891 podman[308001]: 2025-10-01 17:06:35.758200734 +0000 UTC m=+0.125235810 container init 9eb9b8d04b943a4cdad301940cd1ec9ba028e97ca31e0d9267fe197d61e26533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 13:06:35 np0005464891 podman[308001]: 2025-10-01 17:06:35.766616236 +0000 UTC m=+0.133651302 container start 9eb9b8d04b943a4cdad301940cd1ec9ba028e97ca31e0d9267fe197d61e26533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:06:35 np0005464891 podman[308001]: 2025-10-01 17:06:35.770123063 +0000 UTC m=+0.137158129 container attach 9eb9b8d04b943a4cdad301940cd1ec9ba028e97ca31e0d9267fe197d61e26533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:06:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e486 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:06:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e486 do_prune osdmap full prune enabled
Oct  1 13:06:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e487 e487: 3 total, 3 up, 3 in
Oct  1 13:06:36 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e487: 3 total, 3 up, 3 in
Oct  1 13:06:36 np0005464891 eager_kalam[308018]: {
Oct  1 13:06:36 np0005464891 eager_kalam[308018]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 13:06:36 np0005464891 eager_kalam[308018]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:06:36 np0005464891 eager_kalam[308018]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 13:06:36 np0005464891 eager_kalam[308018]:        "osd_id": 2,
Oct  1 13:06:36 np0005464891 eager_kalam[308018]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:06:36 np0005464891 eager_kalam[308018]:        "type": "bluestore"
Oct  1 13:06:36 np0005464891 eager_kalam[308018]:    },
Oct  1 13:06:36 np0005464891 eager_kalam[308018]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 13:06:36 np0005464891 eager_kalam[308018]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:06:36 np0005464891 eager_kalam[308018]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 13:06:36 np0005464891 eager_kalam[308018]:        "osd_id": 0,
Oct  1 13:06:36 np0005464891 eager_kalam[308018]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:06:36 np0005464891 eager_kalam[308018]:        "type": "bluestore"
Oct  1 13:06:36 np0005464891 eager_kalam[308018]:    },
Oct  1 13:06:36 np0005464891 eager_kalam[308018]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 13:06:36 np0005464891 eager_kalam[308018]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:06:36 np0005464891 eager_kalam[308018]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 13:06:36 np0005464891 eager_kalam[308018]:        "osd_id": 1,
Oct  1 13:06:36 np0005464891 eager_kalam[308018]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:06:36 np0005464891 eager_kalam[308018]:        "type": "bluestore"
Oct  1 13:06:36 np0005464891 eager_kalam[308018]:    }
Oct  1 13:06:36 np0005464891 eager_kalam[308018]: }
Oct  1 13:06:36 np0005464891 systemd[1]: libpod-9eb9b8d04b943a4cdad301940cd1ec9ba028e97ca31e0d9267fe197d61e26533.scope: Deactivated successfully.
Oct  1 13:06:36 np0005464891 systemd[1]: libpod-9eb9b8d04b943a4cdad301940cd1ec9ba028e97ca31e0d9267fe197d61e26533.scope: Consumed 1.069s CPU time.
Oct  1 13:06:36 np0005464891 conmon[308018]: conmon 9eb9b8d04b943a4cdad3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9eb9b8d04b943a4cdad301940cd1ec9ba028e97ca31e0d9267fe197d61e26533.scope/container/memory.events
Oct  1 13:06:36 np0005464891 podman[308001]: 2025-10-01 17:06:36.832083901 +0000 UTC m=+1.199118937 container died 9eb9b8d04b943a4cdad301940cd1ec9ba028e97ca31e0d9267fe197d61e26533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  1 13:06:36 np0005464891 systemd[1]: var-lib-containers-storage-overlay-c1ed9e9ecbaeae4c880d033a4004fc21a6fe25cb2a1742ca7bdb482de16e936f-merged.mount: Deactivated successfully.
Oct  1 13:06:36 np0005464891 nova_compute[259907]: 2025-10-01 17:06:36.883 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:06:36 np0005464891 podman[308001]: 2025-10-01 17:06:36.888110509 +0000 UTC m=+1.255145555 container remove 9eb9b8d04b943a4cdad301940cd1ec9ba028e97ca31e0d9267fe197d61e26533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:06:36 np0005464891 systemd[1]: libpod-conmon-9eb9b8d04b943a4cdad301940cd1ec9ba028e97ca31e0d9267fe197d61e26533.scope: Deactivated successfully.
Oct  1 13:06:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 13:06:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:06:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 13:06:36 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:06:36 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 25301f71-d5c9-4294-bea5-f1ccb05f5460 does not exist
Oct  1 13:06:36 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev ed2d0200-bd4b-4c14-b892-28e5686f2b52 does not exist
Oct  1 13:06:37 np0005464891 ovn_controller[152409]: 2025-10-01T17:06:37Z|00258|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Oct  1 13:06:37 np0005464891 nova_compute[259907]: 2025-10-01 17:06:37.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:06:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1038131871' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:06:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:06:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1038131871' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:06:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2050: 321 pgs: 321 active+clean; 271 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.2 KiB/s wr, 43 op/s
Oct  1 13:06:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e487 do_prune osdmap full prune enabled
Oct  1 13:06:37 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:06:37 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:06:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e488 e488: 3 total, 3 up, 3 in
Oct  1 13:06:38 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e488: 3 total, 3 up, 3 in
Oct  1 13:06:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:06:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3390575358' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:06:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:06:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3390575358' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:06:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2052: 321 pgs: 321 active+clean; 271 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.8 KiB/s wr, 39 op/s
Oct  1 13:06:39 np0005464891 nova_compute[259907]: 2025-10-01 17:06:39.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e488 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:06:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2053: 321 pgs: 321 active+clean; 271 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 3.8 KiB/s wr, 66 op/s
Oct  1 13:06:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:06:41 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/547083872' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:06:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:06:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:06:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:06:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:06:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:06:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:06:42 np0005464891 nova_compute[259907]: 2025-10-01 17:06:42.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e488 do_prune osdmap full prune enabled
Oct  1 13:06:42 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e489 e489: 3 total, 3 up, 3 in
Oct  1 13:06:42 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e489: 3 total, 3 up, 3 in
Oct  1 13:06:42 np0005464891 nova_compute[259907]: 2025-10-01 17:06:42.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:06:42 np0005464891 nova_compute[259907]: 2025-10-01 17:06:42.805 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 13:06:42 np0005464891 nova_compute[259907]: 2025-10-01 17:06:42.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:06:42 np0005464891 nova_compute[259907]: 2025-10-01 17:06:42.837 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:06:42 np0005464891 nova_compute[259907]: 2025-10-01 17:06:42.837 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:06:42 np0005464891 nova_compute[259907]: 2025-10-01 17:06:42.837 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:06:42 np0005464891 nova_compute[259907]: 2025-10-01 17:06:42.838 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 13:06:42 np0005464891 nova_compute[259907]: 2025-10-01 17:06:42.838 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:06:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:06:43 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2392784505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:06:43 np0005464891 nova_compute[259907]: 2025-10-01 17:06:43.291 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:06:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2055: 321 pgs: 321 active+clean; 271 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.0 KiB/s wr, 52 op/s
Oct  1 13:06:43 np0005464891 nova_compute[259907]: 2025-10-01 17:06:43.462 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 13:06:43 np0005464891 nova_compute[259907]: 2025-10-01 17:06:43.464 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4391MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 13:06:43 np0005464891 nova_compute[259907]: 2025-10-01 17:06:43.464 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:06:43 np0005464891 nova_compute[259907]: 2025-10-01 17:06:43.464 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:06:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e489 do_prune osdmap full prune enabled
Oct  1 13:06:43 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e490 e490: 3 total, 3 up, 3 in
Oct  1 13:06:43 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e490: 3 total, 3 up, 3 in
Oct  1 13:06:43 np0005464891 nova_compute[259907]: 2025-10-01 17:06:43.553 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 13:06:43 np0005464891 nova_compute[259907]: 2025-10-01 17:06:43.554 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 13:06:43 np0005464891 nova_compute[259907]: 2025-10-01 17:06:43.586 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:06:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:06:44 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/120452878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:06:44 np0005464891 nova_compute[259907]: 2025-10-01 17:06:44.099 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:06:44 np0005464891 nova_compute[259907]: 2025-10-01 17:06:44.108 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:06:44 np0005464891 nova_compute[259907]: 2025-10-01 17:06:44.156 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:06:44 np0005464891 nova_compute[259907]: 2025-10-01 17:06:44.330 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 13:06:44 np0005464891 nova_compute[259907]: 2025-10-01 17:06:44.330 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:06:44 np0005464891 nova_compute[259907]: 2025-10-01 17:06:44.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:45 np0005464891 nova_compute[259907]: 2025-10-01 17:06:45.331 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:06:45 np0005464891 nova_compute[259907]: 2025-10-01 17:06:45.331 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:06:45 np0005464891 nova_compute[259907]: 2025-10-01 17:06:45.332 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 13:06:45 np0005464891 nova_compute[259907]: 2025-10-01 17:06:45.332 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 13:06:45 np0005464891 nova_compute[259907]: 2025-10-01 17:06:45.350 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 13:06:45 np0005464891 nova_compute[259907]: 2025-10-01 17:06:45.350 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:06:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2057: 321 pgs: 321 active+clean; 271 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.6 KiB/s wr, 48 op/s
Oct  1 13:06:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e490 do_prune osdmap full prune enabled
Oct  1 13:06:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e491 e491: 3 total, 3 up, 3 in
Oct  1 13:06:45 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e491: 3 total, 3 up, 3 in
Oct  1 13:06:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:06:46 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3920080365' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:06:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:06:46 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3920080365' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:06:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e491 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:06:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e491 do_prune osdmap full prune enabled
Oct  1 13:06:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e492 e492: 3 total, 3 up, 3 in
Oct  1 13:06:46 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e492: 3 total, 3 up, 3 in
Oct  1 13:06:47 np0005464891 nova_compute[259907]: 2025-10-01 17:06:47.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2060: 321 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 317 active+clean; 271 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.4 KiB/s wr, 48 op/s
Oct  1 13:06:47 np0005464891 nova_compute[259907]: 2025-10-01 17:06:47.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:06:47 np0005464891 nova_compute[259907]: 2025-10-01 17:06:47.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:06:48 np0005464891 podman[308159]: 2025-10-01 17:06:48.001602213 +0000 UTC m=+0.096999300 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  1 13:06:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:06:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3352574913' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:06:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e492 do_prune osdmap full prune enabled
Oct  1 13:06:48 np0005464891 nova_compute[259907]: 2025-10-01 17:06:48.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:06:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e493 e493: 3 total, 3 up, 3 in
Oct  1 13:06:49 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e493: 3 total, 3 up, 3 in
Oct  1 13:06:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2062: 321 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 317 active+clean; 271 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 86 KiB/s rd, 2.2 KiB/s wr, 134 op/s
Oct  1 13:06:49 np0005464891 nova_compute[259907]: 2025-10-01 17:06:49.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e493 do_prune osdmap full prune enabled
Oct  1 13:06:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e494 e494: 3 total, 3 up, 3 in
Oct  1 13:06:51 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e494: 3 total, 3 up, 3 in
Oct  1 13:06:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e494 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:06:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2064: 321 pgs: 321 active+clean; 271 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 109 KiB/s rd, 2.7 KiB/s wr, 172 op/s
Oct  1 13:06:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:06:51 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2893813961' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:06:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:06:51 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2893813961' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:06:52 np0005464891 nova_compute[259907]: 2025-10-01 17:06:52.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e494 do_prune osdmap full prune enabled
Oct  1 13:06:53 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e495 e495: 3 total, 3 up, 3 in
Oct  1 13:06:53 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e495: 3 total, 3 up, 3 in
Oct  1 13:06:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2066: 321 pgs: 321 active+clean; 271 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 162 KiB/s rd, 3.8 KiB/s wr, 234 op/s
Oct  1 13:06:54 np0005464891 nova_compute[259907]: 2025-10-01 17:06:54.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2067: 321 pgs: 321 active+clean; 271 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 2.0 KiB/s wr, 113 op/s
Oct  1 13:06:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:06:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e495 do_prune osdmap full prune enabled
Oct  1 13:06:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e496 e496: 3 total, 3 up, 3 in
Oct  1 13:06:56 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e496: 3 total, 3 up, 3 in
Oct  1 13:06:57 np0005464891 nova_compute[259907]: 2025-10-01 17:06:57.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:06:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2069: 321 pgs: 321 active+clean; 271 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 2.5 KiB/s wr, 83 op/s
Oct  1 13:06:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e496 do_prune osdmap full prune enabled
Oct  1 13:06:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e497 e497: 3 total, 3 up, 3 in
Oct  1 13:06:58 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e497: 3 total, 3 up, 3 in
Oct  1 13:06:58 np0005464891 podman[308177]: 2025-10-01 17:06:58.983117604 +0000 UTC m=+0.094721917 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 13:06:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2071: 321 pgs: 321 active+clean; 271 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Oct  1 13:06:59 np0005464891 nova_compute[259907]: 2025-10-01 17:06:59.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e497 do_prune osdmap full prune enabled
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:07:00.669228) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338420669268, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1125, "num_deletes": 260, "total_data_size": 1431671, "memory_usage": 1462696, "flush_reason": "Manual Compaction"}
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e498 e498: 3 total, 3 up, 3 in
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338420689934, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 1412841, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40602, "largest_seqno": 41726, "table_properties": {"data_size": 1407130, "index_size": 3106, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12754, "raw_average_key_size": 20, "raw_value_size": 1395572, "raw_average_value_size": 2295, "num_data_blocks": 135, "num_entries": 608, "num_filter_entries": 608, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759338356, "oldest_key_time": 1759338356, "file_creation_time": 1759338420, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 20748 microseconds, and 4264 cpu microseconds.
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e498: 3 total, 3 up, 3 in
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:07:00.689975) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 1412841 bytes OK
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:07:00.689994) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:07:00.695169) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:07:00.695191) EVENT_LOG_v1 {"time_micros": 1759338420695185, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:07:00.695211) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 1426212, prev total WAL file size 1426253, number of live WAL files 2.
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:07:00.696018) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(1379KB)], [86(10MB)]
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338420696071, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 12091519, "oldest_snapshot_seqno": -1}
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 7057 keys, 10405856 bytes, temperature: kUnknown
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338420794532, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 10405856, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10354193, "index_size": 32927, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17669, "raw_key_size": 180141, "raw_average_key_size": 25, "raw_value_size": 10223060, "raw_average_value_size": 1448, "num_data_blocks": 1301, "num_entries": 7057, "num_filter_entries": 7057, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759338420, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:07:00.794779) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 10405856 bytes
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:07:00.798911) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.7 rd, 105.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 10.2 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(15.9) write-amplify(7.4) OK, records in: 7586, records dropped: 529 output_compression: NoCompression
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:07:00.798938) EVENT_LOG_v1 {"time_micros": 1759338420798924, "job": 50, "event": "compaction_finished", "compaction_time_micros": 98546, "compaction_time_cpu_micros": 54076, "output_level": 6, "num_output_files": 1, "total_output_size": 10405856, "num_input_records": 7586, "num_output_records": 7057, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338420799603, "job": 50, "event": "table_file_deletion", "file_number": 88}
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338420801960, "job": 50, "event": "table_file_deletion", "file_number": 86}
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:07:00.695877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:07:00.802128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:07:00.802137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:07:00.802140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:07:00.802144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:07:00 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:07:00.802147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:07:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e498 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:07:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e498 do_prune osdmap full prune enabled
Oct  1 13:07:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e499 e499: 3 total, 3 up, 3 in
Oct  1 13:07:01 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e499: 3 total, 3 up, 3 in
Oct  1 13:07:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2074: 321 pgs: 321 active+clean; 271 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.6 KiB/s wr, 73 op/s
Oct  1 13:07:02 np0005464891 nova_compute[259907]: 2025-10-01 17:07:02.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e499 do_prune osdmap full prune enabled
Oct  1 13:07:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e500 e500: 3 total, 3 up, 3 in
Oct  1 13:07:02 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e500: 3 total, 3 up, 3 in
Oct  1 13:07:02 np0005464891 podman[308203]: 2025-10-01 17:07:02.968547911 +0000 UTC m=+0.072993537 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  1 13:07:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2076: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 2.3 KiB/s wr, 90 op/s
Oct  1 13:07:03 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e500 do_prune osdmap full prune enabled
Oct  1 13:07:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e501 e501: 3 total, 3 up, 3 in
Oct  1 13:07:04 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e501: 3 total, 3 up, 3 in
Oct  1 13:07:04 np0005464891 nova_compute[259907]: 2025-10-01 17:07:04.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2078: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 2.3 KiB/s wr, 92 op/s
Oct  1 13:07:05 np0005464891 podman[308223]: 2025-10-01 17:07:05.947584204 +0000 UTC m=+0.061854949 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  1 13:07:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:07:05 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/736631793' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:07:05 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:07:05 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/736631793' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:07:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e501 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:07:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e501 do_prune osdmap full prune enabled
Oct  1 13:07:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e502 e502: 3 total, 3 up, 3 in
Oct  1 13:07:06 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e502: 3 total, 3 up, 3 in
Oct  1 13:07:07 np0005464891 nova_compute[259907]: 2025-10-01 17:07:07.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2080: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 2.7 KiB/s wr, 73 op/s
Oct  1 13:07:08 np0005464891 nova_compute[259907]: 2025-10-01 17:07:08.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:07:08.760 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:07:08 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:07:08.762 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 13:07:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2081: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 2.4 KiB/s wr, 67 op/s
Oct  1 13:07:09 np0005464891 nova_compute[259907]: 2025-10-01 17:07:09.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e502 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:07:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e502 do_prune osdmap full prune enabled
Oct  1 13:07:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2082: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.7 KiB/s wr, 33 op/s
Oct  1 13:07:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 e503: 3 total, 3 up, 3 in
Oct  1 13:07:11 np0005464891 ceph-mon[74303]: log_channel(cluster) log [DBG] : osdmap e503: 3 total, 3 up, 3 in
Oct  1 13:07:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:07:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:07:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:07:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:07:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:07:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:07:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_17:07:12
Oct  1 13:07:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 13:07:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 13:07:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['default.rgw.control', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'backups', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'default.rgw.log', 'images']
Oct  1 13:07:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 13:07:12 np0005464891 nova_compute[259907]: 2025-10-01 17:07:12.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:07:12.468 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:07:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:07:12.469 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:07:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:07:12.469 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:07:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 13:07:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:07:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 13:07:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:07:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:07:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:07:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:07:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:07:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:07:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:07:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2084: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.7 KiB/s wr, 33 op/s
Oct  1 13:07:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:07:13.764 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:07:14 np0005464891 nova_compute[259907]: 2025-10-01 17:07:14.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2085: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 226 B/s wr, 6 op/s
Oct  1 13:07:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:07:17 np0005464891 nova_compute[259907]: 2025-10-01 17:07:17.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2086: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 4.5 KiB/s rd, 204 B/s wr, 6 op/s
Oct  1 13:07:18 np0005464891 podman[308243]: 2025-10-01 17:07:18.928334104 +0000 UTC m=+0.047959405 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent)
Oct  1 13:07:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2087: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 3.4 KiB/s rd, 204 B/s wr, 4 op/s
Oct  1 13:07:19 np0005464891 nova_compute[259907]: 2025-10-01 17:07:19.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:07:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2088: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:07:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 13:07:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:07:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 13:07:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:07:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 13:07:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:07:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894458247867422 of space, bias 1.0, pg target 0.8683374743602266 quantized to 32 (current 32)
Oct  1 13:07:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:07:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 13:07:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:07:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct  1 13:07:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:07:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 13:07:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:07:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:07:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:07:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 13:07:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:07:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 13:07:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:07:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:07:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:07:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 13:07:22 np0005464891 nova_compute[259907]: 2025-10-01 17:07:22.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2089: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:07:24 np0005464891 nova_compute[259907]: 2025-10-01 17:07:24.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2090: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:07:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:07:27 np0005464891 nova_compute[259907]: 2025-10-01 17:07:27.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2091: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:07:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2092: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:07:29 np0005464891 nova_compute[259907]: 2025-10-01 17:07:29.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:29 np0005464891 podman[308260]: 2025-10-01 17:07:29.966282542 +0000 UTC m=+0.084011441 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:07:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:07:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2093: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:07:32 np0005464891 nova_compute[259907]: 2025-10-01 17:07:32.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2094: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:07:33 np0005464891 podman[308286]: 2025-10-01 17:07:33.953723475 +0000 UTC m=+0.068438851 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct  1 13:07:34 np0005464891 nova_compute[259907]: 2025-10-01 17:07:34.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2095: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:07:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:07:36 np0005464891 podman[308307]: 2025-10-01 17:07:36.538692895 +0000 UTC m=+0.063688849 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  1 13:07:37 np0005464891 nova_compute[259907]: 2025-10-01 17:07:37.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:07:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2572993106' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:07:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:07:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2572993106' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:07:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2096: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:07:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 13:07:37 np0005464891 nova_compute[259907]: 2025-10-01 17:07:37.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:07:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:07:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 13:07:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:07:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:07:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:07:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 13:07:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:07:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 13:07:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:07:38 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 56262024-9169-4029-a030-f97337bbffa7 does not exist
Oct  1 13:07:38 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 256f32c6-8cd1-444b-96de-fda22f82bae9 does not exist
Oct  1 13:07:38 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 1233a2ff-5f25-4e55-ba7c-d40e3da814a1 does not exist
Oct  1 13:07:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 13:07:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 13:07:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 13:07:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:07:38 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:07:38 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:07:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:07:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:07:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:07:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:07:39 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:07:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2097: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:07:39 np0005464891 podman[308718]: 2025-10-01 17:07:39.608969789 +0000 UTC m=+0.029358552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:07:39 np0005464891 podman[308718]: 2025-10-01 17:07:39.800575031 +0000 UTC m=+0.220963804 container create b33a9f3e1af3cf3302a13519b8a4151309d33b98e7282441c52e7dfc2d9c25e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:07:39 np0005464891 nova_compute[259907]: 2025-10-01 17:07:39.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:39 np0005464891 systemd[1]: Started libpod-conmon-b33a9f3e1af3cf3302a13519b8a4151309d33b98e7282441c52e7dfc2d9c25e3.scope.
Oct  1 13:07:39 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:07:40 np0005464891 podman[308718]: 2025-10-01 17:07:40.347935677 +0000 UTC m=+0.768324440 container init b33a9f3e1af3cf3302a13519b8a4151309d33b98e7282441c52e7dfc2d9c25e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sutherland, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:07:40 np0005464891 podman[308718]: 2025-10-01 17:07:40.357683127 +0000 UTC m=+0.778071880 container start b33a9f3e1af3cf3302a13519b8a4151309d33b98e7282441c52e7dfc2d9c25e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sutherland, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:07:40 np0005464891 systemd[1]: libpod-b33a9f3e1af3cf3302a13519b8a4151309d33b98e7282441c52e7dfc2d9c25e3.scope: Deactivated successfully.
Oct  1 13:07:40 np0005464891 charming_sutherland[308734]: 167 167
Oct  1 13:07:40 np0005464891 conmon[308734]: conmon b33a9f3e1af3cf3302a1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b33a9f3e1af3cf3302a13519b8a4151309d33b98e7282441c52e7dfc2d9c25e3.scope/container/memory.events
Oct  1 13:07:40 np0005464891 podman[308718]: 2025-10-01 17:07:40.510715042 +0000 UTC m=+0.931103845 container attach b33a9f3e1af3cf3302a13519b8a4151309d33b98e7282441c52e7dfc2d9c25e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sutherland, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 13:07:40 np0005464891 podman[308718]: 2025-10-01 17:07:40.511653728 +0000 UTC m=+0.932042481 container died b33a9f3e1af3cf3302a13519b8a4151309d33b98e7282441c52e7dfc2d9c25e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 13:07:41 np0005464891 systemd[1]: var-lib-containers-storage-overlay-c89bce5304e2e1933f96dee1f925322498f2c0c8fd264670cd9a70aa2073b0ee-merged.mount: Deactivated successfully.
Oct  1 13:07:41 np0005464891 podman[308718]: 2025-10-01 17:07:41.209309057 +0000 UTC m=+1.629697800 container remove b33a9f3e1af3cf3302a13519b8a4151309d33b98e7282441c52e7dfc2d9c25e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sutherland, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:07:41 np0005464891 systemd[1]: libpod-conmon-b33a9f3e1af3cf3302a13519b8a4151309d33b98e7282441c52e7dfc2d9c25e3.scope: Deactivated successfully.
Oct  1 13:07:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:07:41 np0005464891 podman[308757]: 2025-10-01 17:07:41.394365308 +0000 UTC m=+0.036341585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:07:41 np0005464891 podman[308757]: 2025-10-01 17:07:41.521248462 +0000 UTC m=+0.163224729 container create 51b43356fce4a87463eed94c392e27fc9a438795dbfe67b5b6d19129d200eedc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_blackburn, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  1 13:07:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2098: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:07:41 np0005464891 systemd[1]: Started libpod-conmon-51b43356fce4a87463eed94c392e27fc9a438795dbfe67b5b6d19129d200eedc.scope.
Oct  1 13:07:41 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:07:41 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93f0b3bded584aa3a2866219ba7bcab514d9256b0ade3129afffbd4c656d758d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:07:41 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93f0b3bded584aa3a2866219ba7bcab514d9256b0ade3129afffbd4c656d758d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:07:41 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93f0b3bded584aa3a2866219ba7bcab514d9256b0ade3129afffbd4c656d758d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:07:41 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93f0b3bded584aa3a2866219ba7bcab514d9256b0ade3129afffbd4c656d758d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:07:41 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93f0b3bded584aa3a2866219ba7bcab514d9256b0ade3129afffbd4c656d758d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 13:07:41 np0005464891 podman[308757]: 2025-10-01 17:07:41.659045558 +0000 UTC m=+0.301021865 container init 51b43356fce4a87463eed94c392e27fc9a438795dbfe67b5b6d19129d200eedc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 13:07:41 np0005464891 podman[308757]: 2025-10-01 17:07:41.665637229 +0000 UTC m=+0.307613496 container start 51b43356fce4a87463eed94c392e27fc9a438795dbfe67b5b6d19129d200eedc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_blackburn, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 13:07:41 np0005464891 podman[308757]: 2025-10-01 17:07:41.681661302 +0000 UTC m=+0.323637529 container attach 51b43356fce4a87463eed94c392e27fc9a438795dbfe67b5b6d19129d200eedc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 13:07:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:07:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:07:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:07:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:07:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:07:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:07:42 np0005464891 nova_compute[259907]: 2025-10-01 17:07:42.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:42 np0005464891 angry_blackburn[308773]: --> passed data devices: 0 physical, 3 LVM
Oct  1 13:07:42 np0005464891 angry_blackburn[308773]: --> relative data size: 1.0
Oct  1 13:07:42 np0005464891 angry_blackburn[308773]: --> All data devices are unavailable
Oct  1 13:07:42 np0005464891 systemd[1]: libpod-51b43356fce4a87463eed94c392e27fc9a438795dbfe67b5b6d19129d200eedc.scope: Deactivated successfully.
Oct  1 13:07:42 np0005464891 systemd[1]: libpod-51b43356fce4a87463eed94c392e27fc9a438795dbfe67b5b6d19129d200eedc.scope: Consumed 1.051s CPU time.
Oct  1 13:07:42 np0005464891 podman[308757]: 2025-10-01 17:07:42.80187318 +0000 UTC m=+1.443849437 container died 51b43356fce4a87463eed94c392e27fc9a438795dbfe67b5b6d19129d200eedc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:07:42 np0005464891 systemd[1]: var-lib-containers-storage-overlay-93f0b3bded584aa3a2866219ba7bcab514d9256b0ade3129afffbd4c656d758d-merged.mount: Deactivated successfully.
Oct  1 13:07:42 np0005464891 podman[308757]: 2025-10-01 17:07:42.858178325 +0000 UTC m=+1.500154552 container remove 51b43356fce4a87463eed94c392e27fc9a438795dbfe67b5b6d19129d200eedc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_blackburn, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:07:42 np0005464891 systemd[1]: libpod-conmon-51b43356fce4a87463eed94c392e27fc9a438795dbfe67b5b6d19129d200eedc.scope: Deactivated successfully.
Oct  1 13:07:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2099: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:07:43 np0005464891 podman[308954]: 2025-10-01 17:07:43.556055908 +0000 UTC m=+0.042868425 container create 13138b3650eeddd82a65bd500259c63c4efe2024b57dde4d8b5dcd7818385425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mirzakhani, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 13:07:43 np0005464891 systemd[1]: Started libpod-conmon-13138b3650eeddd82a65bd500259c63c4efe2024b57dde4d8b5dcd7818385425.scope.
Oct  1 13:07:43 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:07:43 np0005464891 podman[308954]: 2025-10-01 17:07:43.538064232 +0000 UTC m=+0.024876779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:07:43 np0005464891 podman[308954]: 2025-10-01 17:07:43.64339451 +0000 UTC m=+0.130207057 container init 13138b3650eeddd82a65bd500259c63c4efe2024b57dde4d8b5dcd7818385425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:07:43 np0005464891 podman[308954]: 2025-10-01 17:07:43.652372779 +0000 UTC m=+0.139185296 container start 13138b3650eeddd82a65bd500259c63c4efe2024b57dde4d8b5dcd7818385425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 13:07:43 np0005464891 podman[308954]: 2025-10-01 17:07:43.656008399 +0000 UTC m=+0.142820976 container attach 13138b3650eeddd82a65bd500259c63c4efe2024b57dde4d8b5dcd7818385425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  1 13:07:43 np0005464891 interesting_mirzakhani[308971]: 167 167
Oct  1 13:07:43 np0005464891 systemd[1]: libpod-13138b3650eeddd82a65bd500259c63c4efe2024b57dde4d8b5dcd7818385425.scope: Deactivated successfully.
Oct  1 13:07:43 np0005464891 podman[308954]: 2025-10-01 17:07:43.658385615 +0000 UTC m=+0.145198142 container died 13138b3650eeddd82a65bd500259c63c4efe2024b57dde4d8b5dcd7818385425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:07:43 np0005464891 systemd[1]: var-lib-containers-storage-overlay-9d3602c2fc61623c5a1f780386e39d8468c7cd9f36cb29894f9a0636c1c74010-merged.mount: Deactivated successfully.
Oct  1 13:07:43 np0005464891 podman[308954]: 2025-10-01 17:07:43.69985536 +0000 UTC m=+0.186667877 container remove 13138b3650eeddd82a65bd500259c63c4efe2024b57dde4d8b5dcd7818385425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:07:43 np0005464891 systemd[1]: libpod-conmon-13138b3650eeddd82a65bd500259c63c4efe2024b57dde4d8b5dcd7818385425.scope: Deactivated successfully.
Oct  1 13:07:43 np0005464891 podman[308995]: 2025-10-01 17:07:43.858917632 +0000 UTC m=+0.044324574 container create a7c0a583a3f4a47489432e55c490f90435068e1f18b98d32ff50d92d1786fb0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_morse, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:07:43 np0005464891 systemd[1]: Started libpod-conmon-a7c0a583a3f4a47489432e55c490f90435068e1f18b98d32ff50d92d1786fb0a.scope.
Oct  1 13:07:43 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:07:43 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/699a7d7d0eafcd493fc617c078cd60cc9bb2c0b05e8f905e413844b2ed82332a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:07:43 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/699a7d7d0eafcd493fc617c078cd60cc9bb2c0b05e8f905e413844b2ed82332a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:07:43 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/699a7d7d0eafcd493fc617c078cd60cc9bb2c0b05e8f905e413844b2ed82332a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:07:43 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/699a7d7d0eafcd493fc617c078cd60cc9bb2c0b05e8f905e413844b2ed82332a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:07:43 np0005464891 podman[308995]: 2025-10-01 17:07:43.838040226 +0000 UTC m=+0.023447178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:07:43 np0005464891 podman[308995]: 2025-10-01 17:07:43.93305376 +0000 UTC m=+0.118460722 container init a7c0a583a3f4a47489432e55c490f90435068e1f18b98d32ff50d92d1786fb0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_morse, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:07:43 np0005464891 podman[308995]: 2025-10-01 17:07:43.945094403 +0000 UTC m=+0.130501335 container start a7c0a583a3f4a47489432e55c490f90435068e1f18b98d32ff50d92d1786fb0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_morse, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 13:07:43 np0005464891 podman[308995]: 2025-10-01 17:07:43.948224489 +0000 UTC m=+0.133631501 container attach a7c0a583a3f4a47489432e55c490f90435068e1f18b98d32ff50d92d1786fb0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_morse, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 13:07:44 np0005464891 competent_morse[309011]: {
Oct  1 13:07:44 np0005464891 competent_morse[309011]:    "0": [
Oct  1 13:07:44 np0005464891 competent_morse[309011]:        {
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "devices": [
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "/dev/loop3"
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            ],
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "lv_name": "ceph_lv0",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "lv_size": "21470642176",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "name": "ceph_lv0",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "tags": {
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.cluster_name": "ceph",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.crush_device_class": "",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.encrypted": "0",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.osd_id": "0",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.type": "block",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.vdo": "0"
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            },
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "type": "block",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "vg_name": "ceph_vg0"
Oct  1 13:07:44 np0005464891 competent_morse[309011]:        }
Oct  1 13:07:44 np0005464891 competent_morse[309011]:    ],
Oct  1 13:07:44 np0005464891 competent_morse[309011]:    "1": [
Oct  1 13:07:44 np0005464891 competent_morse[309011]:        {
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "devices": [
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "/dev/loop4"
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            ],
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "lv_name": "ceph_lv1",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "lv_size": "21470642176",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "name": "ceph_lv1",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "tags": {
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.cluster_name": "ceph",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.crush_device_class": "",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.encrypted": "0",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.osd_id": "1",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.type": "block",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.vdo": "0"
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            },
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "type": "block",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "vg_name": "ceph_vg1"
Oct  1 13:07:44 np0005464891 competent_morse[309011]:        }
Oct  1 13:07:44 np0005464891 competent_morse[309011]:    ],
Oct  1 13:07:44 np0005464891 competent_morse[309011]:    "2": [
Oct  1 13:07:44 np0005464891 competent_morse[309011]:        {
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "devices": [
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "/dev/loop5"
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            ],
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "lv_name": "ceph_lv2",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "lv_size": "21470642176",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "name": "ceph_lv2",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "tags": {
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.cluster_name": "ceph",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.crush_device_class": "",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.encrypted": "0",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.osd_id": "2",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.type": "block",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:                "ceph.vdo": "0"
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            },
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "type": "block",
Oct  1 13:07:44 np0005464891 competent_morse[309011]:            "vg_name": "ceph_vg2"
Oct  1 13:07:44 np0005464891 competent_morse[309011]:        }
Oct  1 13:07:44 np0005464891 competent_morse[309011]:    ]
Oct  1 13:07:44 np0005464891 competent_morse[309011]: }
Oct  1 13:07:44 np0005464891 systemd[1]: libpod-a7c0a583a3f4a47489432e55c490f90435068e1f18b98d32ff50d92d1786fb0a.scope: Deactivated successfully.
Oct  1 13:07:44 np0005464891 podman[309020]: 2025-10-01 17:07:44.769732657 +0000 UTC m=+0.028632032 container died a7c0a583a3f4a47489432e55c490f90435068e1f18b98d32ff50d92d1786fb0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:07:44 np0005464891 nova_compute[259907]: 2025-10-01 17:07:44.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:07:44 np0005464891 nova_compute[259907]: 2025-10-01 17:07:44.806 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 13:07:44 np0005464891 nova_compute[259907]: 2025-10-01 17:07:44.806 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:07:44 np0005464891 nova_compute[259907]: 2025-10-01 17:07:44.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:44 np0005464891 nova_compute[259907]: 2025-10-01 17:07:44.830 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:07:44 np0005464891 nova_compute[259907]: 2025-10-01 17:07:44.830 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:07:44 np0005464891 nova_compute[259907]: 2025-10-01 17:07:44.831 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:07:44 np0005464891 nova_compute[259907]: 2025-10-01 17:07:44.831 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 13:07:44 np0005464891 nova_compute[259907]: 2025-10-01 17:07:44.831 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:07:44 np0005464891 systemd[1]: var-lib-containers-storage-overlay-699a7d7d0eafcd493fc617c078cd60cc9bb2c0b05e8f905e413844b2ed82332a-merged.mount: Deactivated successfully.
Oct  1 13:07:44 np0005464891 podman[309020]: 2025-10-01 17:07:44.882243045 +0000 UTC m=+0.141142370 container remove a7c0a583a3f4a47489432e55c490f90435068e1f18b98d32ff50d92d1786fb0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_morse, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:07:44 np0005464891 systemd[1]: libpod-conmon-a7c0a583a3f4a47489432e55c490f90435068e1f18b98d32ff50d92d1786fb0a.scope: Deactivated successfully.
Oct  1 13:07:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:07:45 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2289726924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:07:45 np0005464891 nova_compute[259907]: 2025-10-01 17:07:45.318 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:07:45 np0005464891 nova_compute[259907]: 2025-10-01 17:07:45.466 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 13:07:45 np0005464891 nova_compute[259907]: 2025-10-01 17:07:45.466 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4386MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 13:07:45 np0005464891 nova_compute[259907]: 2025-10-01 17:07:45.467 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:07:45 np0005464891 nova_compute[259907]: 2025-10-01 17:07:45.467 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:07:45 np0005464891 podman[309196]: 2025-10-01 17:07:45.520545202 +0000 UTC m=+0.076904044 container create 121069be1a606370f0d7c389abcde8745ec3969523de1e07df6a0d825e79cc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_galileo, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:07:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2100: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:07:45 np0005464891 systemd[1]: Started libpod-conmon-121069be1a606370f0d7c389abcde8745ec3969523de1e07df6a0d825e79cc35.scope.
Oct  1 13:07:45 np0005464891 podman[309196]: 2025-10-01 17:07:45.467185569 +0000 UTC m=+0.023544431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:07:45 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:07:45 np0005464891 podman[309196]: 2025-10-01 17:07:45.61424474 +0000 UTC m=+0.170603612 container init 121069be1a606370f0d7c389abcde8745ec3969523de1e07df6a0d825e79cc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_galileo, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 13:07:45 np0005464891 podman[309196]: 2025-10-01 17:07:45.621082649 +0000 UTC m=+0.177441491 container start 121069be1a606370f0d7c389abcde8745ec3969523de1e07df6a0d825e79cc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_galileo, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 13:07:45 np0005464891 vigorous_galileo[309211]: 167 167
Oct  1 13:07:45 np0005464891 systemd[1]: libpod-121069be1a606370f0d7c389abcde8745ec3969523de1e07df6a0d825e79cc35.scope: Deactivated successfully.
Oct  1 13:07:45 np0005464891 conmon[309211]: conmon 121069be1a606370f0d7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-121069be1a606370f0d7c389abcde8745ec3969523de1e07df6a0d825e79cc35.scope/container/memory.events
Oct  1 13:07:45 np0005464891 podman[309196]: 2025-10-01 17:07:45.675390009 +0000 UTC m=+0.231748881 container attach 121069be1a606370f0d7c389abcde8745ec3969523de1e07df6a0d825e79cc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_galileo, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:07:45 np0005464891 podman[309196]: 2025-10-01 17:07:45.676040477 +0000 UTC m=+0.232399319 container died 121069be1a606370f0d7c389abcde8745ec3969523de1e07df6a0d825e79cc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_galileo, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  1 13:07:45 np0005464891 nova_compute[259907]: 2025-10-01 17:07:45.721 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 13:07:45 np0005464891 nova_compute[259907]: 2025-10-01 17:07:45.723 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 13:07:45 np0005464891 systemd[1]: var-lib-containers-storage-overlay-aca026b3ac74f77f3b6166b4d8748109ca8e5498a6e410738e41ba2fbfbeea90-merged.mount: Deactivated successfully.
Oct  1 13:07:45 np0005464891 nova_compute[259907]: 2025-10-01 17:07:45.891 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Refreshing inventories for resource provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  1 13:07:45 np0005464891 podman[309196]: 2025-10-01 17:07:45.984086025 +0000 UTC m=+0.540444867 container remove 121069be1a606370f0d7c389abcde8745ec3969523de1e07df6a0d825e79cc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_galileo, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:07:45 np0005464891 systemd[1]: libpod-conmon-121069be1a606370f0d7c389abcde8745ec3969523de1e07df6a0d825e79cc35.scope: Deactivated successfully.
Oct  1 13:07:46 np0005464891 nova_compute[259907]: 2025-10-01 17:07:46.072 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Updating ProviderTree inventory for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  1 13:07:46 np0005464891 nova_compute[259907]: 2025-10-01 17:07:46.073 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Updating inventory in ProviderTree for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  1 13:07:46 np0005464891 nova_compute[259907]: 2025-10-01 17:07:46.086 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Refreshing aggregate associations for resource provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  1 13:07:46 np0005464891 nova_compute[259907]: 2025-10-01 17:07:46.106 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Refreshing trait associations for resource provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8, traits: HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSSE3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_AVX2,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  1 13:07:46 np0005464891 nova_compute[259907]: 2025-10-01 17:07:46.120 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:07:46 np0005464891 podman[309235]: 2025-10-01 17:07:46.160329052 +0000 UTC m=+0.046283930 container create 77c7cf9cde6c1811935622729f69b0794a359d8024ab76fe6170913eadb54013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_kalam, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 13:07:46 np0005464891 systemd[1]: Started libpod-conmon-77c7cf9cde6c1811935622729f69b0794a359d8024ab76fe6170913eadb54013.scope.
Oct  1 13:07:46 np0005464891 podman[309235]: 2025-10-01 17:07:46.138661784 +0000 UTC m=+0.024616692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:07:46 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:07:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b28168dda00f8e1b858d1737797ed436e81d1b4355d4f67eb51c7da36dd8721/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:07:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b28168dda00f8e1b858d1737797ed436e81d1b4355d4f67eb51c7da36dd8721/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:07:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b28168dda00f8e1b858d1737797ed436e81d1b4355d4f67eb51c7da36dd8721/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:07:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b28168dda00f8e1b858d1737797ed436e81d1b4355d4f67eb51c7da36dd8721/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:07:46 np0005464891 podman[309235]: 2025-10-01 17:07:46.261860936 +0000 UTC m=+0.147815834 container init 77c7cf9cde6c1811935622729f69b0794a359d8024ab76fe6170913eadb54013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_kalam, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 13:07:46 np0005464891 podman[309235]: 2025-10-01 17:07:46.271206374 +0000 UTC m=+0.157161252 container start 77c7cf9cde6c1811935622729f69b0794a359d8024ab76fe6170913eadb54013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_kalam, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 13:07:46 np0005464891 podman[309235]: 2025-10-01 17:07:46.27502074 +0000 UTC m=+0.160975628 container attach 77c7cf9cde6c1811935622729f69b0794a359d8024ab76fe6170913eadb54013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_kalam, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:07:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:07:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:07:46 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3008440279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:07:46 np0005464891 nova_compute[259907]: 2025-10-01 17:07:46.549 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:07:46 np0005464891 nova_compute[259907]: 2025-10-01 17:07:46.557 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:07:46 np0005464891 nova_compute[259907]: 2025-10-01 17:07:46.573 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:07:46 np0005464891 nova_compute[259907]: 2025-10-01 17:07:46.574 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 13:07:46 np0005464891 nova_compute[259907]: 2025-10-01 17:07:46.574 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:07:47 np0005464891 pedantic_kalam[309251]: {
Oct  1 13:07:47 np0005464891 pedantic_kalam[309251]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 13:07:47 np0005464891 pedantic_kalam[309251]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:07:47 np0005464891 pedantic_kalam[309251]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 13:07:47 np0005464891 pedantic_kalam[309251]:        "osd_id": 2,
Oct  1 13:07:47 np0005464891 pedantic_kalam[309251]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:07:47 np0005464891 pedantic_kalam[309251]:        "type": "bluestore"
Oct  1 13:07:47 np0005464891 pedantic_kalam[309251]:    },
Oct  1 13:07:47 np0005464891 pedantic_kalam[309251]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 13:07:47 np0005464891 pedantic_kalam[309251]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:07:47 np0005464891 pedantic_kalam[309251]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 13:07:47 np0005464891 pedantic_kalam[309251]:        "osd_id": 0,
Oct  1 13:07:47 np0005464891 pedantic_kalam[309251]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:07:47 np0005464891 pedantic_kalam[309251]:        "type": "bluestore"
Oct  1 13:07:47 np0005464891 pedantic_kalam[309251]:    },
Oct  1 13:07:47 np0005464891 pedantic_kalam[309251]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 13:07:47 np0005464891 pedantic_kalam[309251]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:07:47 np0005464891 pedantic_kalam[309251]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 13:07:47 np0005464891 pedantic_kalam[309251]:        "osd_id": 1,
Oct  1 13:07:47 np0005464891 pedantic_kalam[309251]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:07:47 np0005464891 pedantic_kalam[309251]:        "type": "bluestore"
Oct  1 13:07:47 np0005464891 pedantic_kalam[309251]:    }
Oct  1 13:07:47 np0005464891 pedantic_kalam[309251]: }
Oct  1 13:07:47 np0005464891 systemd[1]: libpod-77c7cf9cde6c1811935622729f69b0794a359d8024ab76fe6170913eadb54013.scope: Deactivated successfully.
Oct  1 13:07:47 np0005464891 podman[309235]: 2025-10-01 17:07:47.253964716 +0000 UTC m=+1.139919594 container died 77c7cf9cde6c1811935622729f69b0794a359d8024ab76fe6170913eadb54013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_kalam, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:07:47 np0005464891 systemd[1]: var-lib-containers-storage-overlay-4b28168dda00f8e1b858d1737797ed436e81d1b4355d4f67eb51c7da36dd8721-merged.mount: Deactivated successfully.
Oct  1 13:07:47 np0005464891 podman[309235]: 2025-10-01 17:07:47.309299734 +0000 UTC m=+1.195254612 container remove 77c7cf9cde6c1811935622729f69b0794a359d8024ab76fe6170913eadb54013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_kalam, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:07:47 np0005464891 systemd[1]: libpod-conmon-77c7cf9cde6c1811935622729f69b0794a359d8024ab76fe6170913eadb54013.scope: Deactivated successfully.
Oct  1 13:07:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 13:07:47 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:07:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 13:07:47 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:07:47 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 9570d5a4-d1ab-478a-89b2-4769dc04c724 does not exist
Oct  1 13:07:47 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev c6f8d7f5-3bb7-42a9-b1cb-49f6875aa7e6 does not exist
Oct  1 13:07:47 np0005464891 nova_compute[259907]: 2025-10-01 17:07:47.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2101: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:07:47 np0005464891 nova_compute[259907]: 2025-10-01 17:07:47.569 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:07:47 np0005464891 nova_compute[259907]: 2025-10-01 17:07:47.570 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:07:47 np0005464891 nova_compute[259907]: 2025-10-01 17:07:47.570 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 13:07:47 np0005464891 nova_compute[259907]: 2025-10-01 17:07:47.570 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 13:07:47 np0005464891 nova_compute[259907]: 2025-10-01 17:07:47.588 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 13:07:47 np0005464891 nova_compute[259907]: 2025-10-01 17:07:47.588 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:07:47 np0005464891 nova_compute[259907]: 2025-10-01 17:07:47.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:07:47 np0005464891 nova_compute[259907]: 2025-10-01 17:07:47.983 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:07:47 np0005464891 nova_compute[259907]: 2025-10-01 17:07:47.983 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:07:48 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:07:48 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:07:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2102: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:07:49 np0005464891 nova_compute[259907]: 2025-10-01 17:07:49.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:07:49 np0005464891 nova_compute[259907]: 2025-10-01 17:07:49.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:49 np0005464891 podman[309370]: 2025-10-01 17:07:49.950154929 +0000 UTC m=+0.064271596 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Oct  1 13:07:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:07:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2103: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:07:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:07:51 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2715828051' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:07:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:07:51 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2715828051' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:07:52 np0005464891 nova_compute[259907]: 2025-10-01 17:07:52.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2104: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 682 B/s wr, 14 op/s
Oct  1 13:07:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:07:54 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1402057240' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:07:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:07:54 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1402057240' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:07:54 np0005464891 nova_compute[259907]: 2025-10-01 17:07:54.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2105: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 682 B/s wr, 14 op/s
Oct  1 13:07:56 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:07:57 np0005464891 nova_compute[259907]: 2025-10-01 17:07:57.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:07:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2106: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 16 op/s
Oct  1 13:07:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2107: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct  1 13:07:59 np0005464891 nova_compute[259907]: 2025-10-01 17:07:59.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:00 np0005464891 podman[309389]: 2025-10-01 17:08:00.988705457 +0000 UTC m=+0.097487114 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Oct  1 13:08:01 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:08:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2108: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct  1 13:08:02 np0005464891 nova_compute[259907]: 2025-10-01 17:08:02.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2109: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct  1 13:08:04 np0005464891 nova_compute[259907]: 2025-10-01 17:08:04.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:04 np0005464891 podman[309416]: 2025-10-01 17:08:04.940537978 +0000 UTC m=+0.061006706 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  1 13:08:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2110: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 511 B/s wr, 14 op/s
Oct  1 13:08:06 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:08:06 np0005464891 podman[309437]: 2025-10-01 17:08:06.997619878 +0000 UTC m=+0.102044889 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  1 13:08:07 np0005464891 nova_compute[259907]: 2025-10-01 17:08:07.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2111: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 511 B/s wr, 14 op/s
Oct  1 13:08:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:08:09.343 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:08:09 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:08:09.343 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 13:08:09 np0005464891 nova_compute[259907]: 2025-10-01 17:08:09.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2112: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 12 op/s
Oct  1 13:08:09 np0005464891 nova_compute[259907]: 2025-10-01 17:08:09.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:11 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:08:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2113: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:08:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:08:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:08:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:08:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:08:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:08:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:08:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_17:08:12
Oct  1 13:08:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 13:08:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 13:08:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', '.mgr', 'volumes', 'images', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'default.rgw.meta']
Oct  1 13:08:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 13:08:12 np0005464891 nova_compute[259907]: 2025-10-01 17:08:12.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:08:12.469 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:08:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:08:12.469 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:08:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:08:12.469 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:08:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 13:08:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:08:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 13:08:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:08:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:08:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:08:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:08:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:08:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:08:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:08:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2114: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:08:14 np0005464891 nova_compute[259907]: 2025-10-01 17:08:14.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2115: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:08:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:08:17 np0005464891 nova_compute[259907]: 2025-10-01 17:08:17.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2116: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:08:18 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:08:18.346 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:08:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2117: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:08:19 np0005464891 nova_compute[259907]: 2025-10-01 17:08:19.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:21 np0005464891 podman[309457]: 2025-10-01 17:08:21.003502626 +0000 UTC m=+0.107902482 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 13:08:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:08:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2118: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:08:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 13:08:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:08:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 13:08:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:08:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 13:08:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:08:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894458247867422 of space, bias 1.0, pg target 0.8683374743602266 quantized to 32 (current 32)
Oct  1 13:08:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:08:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 13:08:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:08:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct  1 13:08:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:08:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 13:08:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:08:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:08:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:08:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 13:08:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:08:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 13:08:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:08:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:08:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:08:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 13:08:22 np0005464891 nova_compute[259907]: 2025-10-01 17:08:22.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2119: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:08:24 np0005464891 nova_compute[259907]: 2025-10-01 17:08:24.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2120: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:08:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:08:27 np0005464891 nova_compute[259907]: 2025-10-01 17:08:27.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2121: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:08:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2122: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:08:29 np0005464891 nova_compute[259907]: 2025-10-01 17:08:29.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:08:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2123: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:08:32 np0005464891 podman[309477]: 2025-10-01 17:08:32.010237645 +0000 UTC m=+0.112309773 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  1 13:08:32 np0005464891 nova_compute[259907]: 2025-10-01 17:08:32.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2124: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 85 B/s wr, 6 op/s
Oct  1 13:08:34 np0005464891 nova_compute[259907]: 2025-10-01 17:08:34.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2125: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 85 B/s wr, 6 op/s
Oct  1 13:08:35 np0005464891 podman[309504]: 2025-10-01 17:08:35.958899936 +0000 UTC m=+0.068187694 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Oct  1 13:08:36 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:08:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:08:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2998949888' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:08:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:08:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2998949888' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:08:37 np0005464891 nova_compute[259907]: 2025-10-01 17:08:37.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2126: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct  1 13:08:37 np0005464891 podman[309524]: 2025-10-01 17:08:37.976423345 +0000 UTC m=+0.084372301 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 13:08:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2127: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct  1 13:08:39 np0005464891 ovn_controller[152409]: 2025-10-01T17:08:39Z|00259|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Oct  1 13:08:39 np0005464891 nova_compute[259907]: 2025-10-01 17:08:39.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:08:39 np0005464891 nova_compute[259907]: 2025-10-01 17:08:39.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:41 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:08:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2128: 321 pgs: 321 active+clean; 271 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct  1 13:08:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:08:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:08:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:08:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:08:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:08:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:08:42 np0005464891 nova_compute[259907]: 2025-10-01 17:08:42.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2129: 321 pgs: 321 active+clean; 271 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Oct  1 13:08:44 np0005464891 nova_compute[259907]: 2025-10-01 17:08:44.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2130: 321 pgs: 321 active+clean; 271 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 381 KiB/s rd, 22 KiB/s wr, 5 op/s
Oct  1 13:08:45 np0005464891 nova_compute[259907]: 2025-10-01 17:08:45.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:08:45 np0005464891 nova_compute[259907]: 2025-10-01 17:08:45.804 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 13:08:45 np0005464891 nova_compute[259907]: 2025-10-01 17:08:45.805 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 13:08:45 np0005464891 nova_compute[259907]: 2025-10-01 17:08:45.823 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 13:08:45 np0005464891 nova_compute[259907]: 2025-10-01 17:08:45.824 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:08:45 np0005464891 nova_compute[259907]: 2025-10-01 17:08:45.824 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 13:08:45 np0005464891 nova_compute[259907]: 2025-10-01 17:08:45.824 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:08:45 np0005464891 nova_compute[259907]: 2025-10-01 17:08:45.848 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:08:45 np0005464891 nova_compute[259907]: 2025-10-01 17:08:45.849 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:08:45 np0005464891 nova_compute[259907]: 2025-10-01 17:08:45.849 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:08:45 np0005464891 nova_compute[259907]: 2025-10-01 17:08:45.849 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 13:08:45 np0005464891 nova_compute[259907]: 2025-10-01 17:08:45.849 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:08:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:08:46 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/833929955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:08:46 np0005464891 nova_compute[259907]: 2025-10-01 17:08:46.263 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:08:46 np0005464891 nova_compute[259907]: 2025-10-01 17:08:46.441 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 13:08:46 np0005464891 nova_compute[259907]: 2025-10-01 17:08:46.442 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4402MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 13:08:46 np0005464891 nova_compute[259907]: 2025-10-01 17:08:46.443 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:08:46 np0005464891 nova_compute[259907]: 2025-10-01 17:08:46.443 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:08:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:08:46 np0005464891 nova_compute[259907]: 2025-10-01 17:08:46.511 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 13:08:46 np0005464891 nova_compute[259907]: 2025-10-01 17:08:46.511 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 13:08:46 np0005464891 nova_compute[259907]: 2025-10-01 17:08:46.529 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:08:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:08:46 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4096884022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:08:46 np0005464891 nova_compute[259907]: 2025-10-01 17:08:46.939 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:08:46 np0005464891 nova_compute[259907]: 2025-10-01 17:08:46.948 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:08:46 np0005464891 nova_compute[259907]: 2025-10-01 17:08:46.970 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:08:46 np0005464891 nova_compute[259907]: 2025-10-01 17:08:46.973 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 13:08:46 np0005464891 nova_compute[259907]: 2025-10-01 17:08:46.974 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:08:47 np0005464891 nova_compute[259907]: 2025-10-01 17:08:47.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2131: 321 pgs: 321 active+clean; 271 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 381 KiB/s rd, 22 KiB/s wr, 5 op/s
Oct  1 13:08:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  1 13:08:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 13:08:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:08:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:08:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 13:08:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:08:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 13:08:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:08:48 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 3406be0a-db9c-4e19-991c-9f4a30e6addf does not exist
Oct  1 13:08:48 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 820ecb0c-d9b9-48ba-b47d-3d6cbc655a9d does not exist
Oct  1 13:08:48 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 311f5e8f-9fbc-4d4b-9eb0-e96b23bfeac3 does not exist
Oct  1 13:08:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 13:08:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 13:08:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 13:08:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:08:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:08:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:08:48 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 13:08:48 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:08:48 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:08:48 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:08:48 np0005464891 nova_compute[259907]: 2025-10-01 17:08:48.956 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:08:48 np0005464891 nova_compute[259907]: 2025-10-01 17:08:48.957 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:08:48 np0005464891 nova_compute[259907]: 2025-10-01 17:08:48.958 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:08:49 np0005464891 podman[309858]: 2025-10-01 17:08:49.079787491 +0000 UTC m=+0.111239884 container create 93a1b9c13dbc9797d830b8d305601188b290d53ff7efd8b2c0223ed575bea427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 13:08:49 np0005464891 podman[309858]: 2025-10-01 17:08:49.00370995 +0000 UTC m=+0.035162433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:08:49 np0005464891 systemd[1]: Started libpod-conmon-93a1b9c13dbc9797d830b8d305601188b290d53ff7efd8b2c0223ed575bea427.scope.
Oct  1 13:08:49 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:08:49 np0005464891 podman[309858]: 2025-10-01 17:08:49.226386649 +0000 UTC m=+0.257839142 container init 93a1b9c13dbc9797d830b8d305601188b290d53ff7efd8b2c0223ed575bea427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tharp, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 13:08:49 np0005464891 podman[309858]: 2025-10-01 17:08:49.237506706 +0000 UTC m=+0.268959099 container start 93a1b9c13dbc9797d830b8d305601188b290d53ff7efd8b2c0223ed575bea427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:08:49 np0005464891 inspiring_tharp[309875]: 167 167
Oct  1 13:08:49 np0005464891 systemd[1]: libpod-93a1b9c13dbc9797d830b8d305601188b290d53ff7efd8b2c0223ed575bea427.scope: Deactivated successfully.
Oct  1 13:08:49 np0005464891 conmon[309875]: conmon 93a1b9c13dbc9797d830 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-93a1b9c13dbc9797d830b8d305601188b290d53ff7efd8b2c0223ed575bea427.scope/container/memory.events
Oct  1 13:08:49 np0005464891 podman[309858]: 2025-10-01 17:08:49.258335092 +0000 UTC m=+0.289787525 container attach 93a1b9c13dbc9797d830b8d305601188b290d53ff7efd8b2c0223ed575bea427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tharp, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:08:49 np0005464891 podman[309858]: 2025-10-01 17:08:49.260878683 +0000 UTC m=+0.292331106 container died 93a1b9c13dbc9797d830b8d305601188b290d53ff7efd8b2c0223ed575bea427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tharp, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  1 13:08:49 np0005464891 systemd[1]: var-lib-containers-storage-overlay-7ee9f016c44a2fb96a32118991dc80cdef69d50c197fd22a3010475fac3688d0-merged.mount: Deactivated successfully.
Oct  1 13:08:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2132: 321 pgs: 321 active+clean; 271 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct  1 13:08:49 np0005464891 nova_compute[259907]: 2025-10-01 17:08:49.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:08:49 np0005464891 podman[309858]: 2025-10-01 17:08:49.870870838 +0000 UTC m=+0.902323321 container remove 93a1b9c13dbc9797d830b8d305601188b290d53ff7efd8b2c0223ed575bea427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  1 13:08:49 np0005464891 systemd[1]: libpod-conmon-93a1b9c13dbc9797d830b8d305601188b290d53ff7efd8b2c0223ed575bea427.scope: Deactivated successfully.
Oct  1 13:08:49 np0005464891 nova_compute[259907]: 2025-10-01 17:08:49.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:50 np0005464891 podman[309901]: 2025-10-01 17:08:50.032559984 +0000 UTC m=+0.027123210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:08:50 np0005464891 nova_compute[259907]: 2025-10-01 17:08:50.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:08:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2133: 321 pgs: 321 active+clean; 271 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct  1 13:08:52 np0005464891 nova_compute[259907]: 2025-10-01 17:08:52.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2134: 321 pgs: 321 active+clean; 271 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct  1 13:08:54 np0005464891 ceph-mds[100500]: mds.beacon.cephfs.compute-0.dnoypt missed beacon ack from the monitors
Oct  1 13:08:54 np0005464891 nova_compute[259907]: 2025-10-01 17:08:54.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2135: 321 pgs: 321 active+clean; 271 MiB data, 635 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:08:57 np0005464891 nova_compute[259907]: 2025-10-01 17:08:57.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:08:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2136: 321 pgs: 321 active+clean; 271 MiB data, 635 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:08:58 np0005464891 ceph-mds[100500]: mds.beacon.cephfs.compute-0.dnoypt missed beacon ack from the monitors
Oct  1 13:08:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 12.7036 seconds
Oct  1 13:08:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:08:59 np0005464891 podman[309901]: 2025-10-01 17:08:59.279014689 +0000 UTC m=+9.273577915 container create 4afff33f36d65ff531e945f0ca3f6123b19efbf6f7dac836bb98ae966e500327 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:08:59 np0005464891 systemd[1]: Started libpod-conmon-4afff33f36d65ff531e945f0ca3f6123b19efbf6f7dac836bb98ae966e500327.scope.
Oct  1 13:08:59 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:08:59 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa3ca330bb22fda44292b7137fb28674f8d7457fab0c61d4fd78904534b8f35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:08:59 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa3ca330bb22fda44292b7137fb28674f8d7457fab0c61d4fd78904534b8f35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:08:59 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa3ca330bb22fda44292b7137fb28674f8d7457fab0c61d4fd78904534b8f35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:08:59 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa3ca330bb22fda44292b7137fb28674f8d7457fab0c61d4fd78904534b8f35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:08:59 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa3ca330bb22fda44292b7137fb28674f8d7457fab0c61d4fd78904534b8f35/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 13:08:59 np0005464891 podman[309917]: 2025-10-01 17:08:59.537699523 +0000 UTC m=+7.636133344 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  1 13:08:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2137: 321 pgs: 321 active+clean; 271 MiB data, 635 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:08:59 np0005464891 podman[309901]: 2025-10-01 17:08:59.689946928 +0000 UTC m=+9.684510154 container init 4afff33f36d65ff531e945f0ca3f6123b19efbf6f7dac836bb98ae966e500327 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  1 13:08:59 np0005464891 podman[309901]: 2025-10-01 17:08:59.702810614 +0000 UTC m=+9.697373820 container start 4afff33f36d65ff531e945f0ca3f6123b19efbf6f7dac836bb98ae966e500327 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 13:08:59 np0005464891 podman[309901]: 2025-10-01 17:08:59.817059899 +0000 UTC m=+9.811623125 container attach 4afff33f36d65ff531e945f0ca3f6123b19efbf6f7dac836bb98ae966e500327 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khorana, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:08:59 np0005464891 nova_compute[259907]: 2025-10-01 17:08:59.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:00 np0005464891 pensive_khorana[309933]: --> passed data devices: 0 physical, 3 LVM
Oct  1 13:09:00 np0005464891 pensive_khorana[309933]: --> relative data size: 1.0
Oct  1 13:09:00 np0005464891 pensive_khorana[309933]: --> All data devices are unavailable
Oct  1 13:09:00 np0005464891 systemd[1]: libpod-4afff33f36d65ff531e945f0ca3f6123b19efbf6f7dac836bb98ae966e500327.scope: Deactivated successfully.
Oct  1 13:09:00 np0005464891 podman[309901]: 2025-10-01 17:09:00.961788913 +0000 UTC m=+10.956352129 container died 4afff33f36d65ff531e945f0ca3f6123b19efbf6f7dac836bb98ae966e500327 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khorana, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:09:00 np0005464891 systemd[1]: libpod-4afff33f36d65ff531e945f0ca3f6123b19efbf6f7dac836bb98ae966e500327.scope: Consumed 1.184s CPU time.
Oct  1 13:09:01 np0005464891 systemd[1]: var-lib-containers-storage-overlay-5fa3ca330bb22fda44292b7137fb28674f8d7457fab0c61d4fd78904534b8f35-merged.mount: Deactivated successfully.
Oct  1 13:09:01 np0005464891 podman[309901]: 2025-10-01 17:09:01.186560882 +0000 UTC m=+11.181124078 container remove 4afff33f36d65ff531e945f0ca3f6123b19efbf6f7dac836bb98ae966e500327 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khorana, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  1 13:09:01 np0005464891 systemd[1]: libpod-conmon-4afff33f36d65ff531e945f0ca3f6123b19efbf6f7dac836bb98ae966e500327.scope: Deactivated successfully.
Oct  1 13:09:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2138: 321 pgs: 321 active+clean; 271 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 170 B/s wr, 0 op/s
Oct  1 13:09:02 np0005464891 podman[310129]: 2025-10-01 17:09:02.012619355 +0000 UTC m=+0.131893414 container create e8ee18a00e6fa86ae5ea09dfb8d5497053918f71a7785a231baba9a81ea352a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_turing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 13:09:02 np0005464891 podman[310129]: 2025-10-01 17:09:01.92521136 +0000 UTC m=+0.044485389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:09:02 np0005464891 systemd[1]: Started libpod-conmon-e8ee18a00e6fa86ae5ea09dfb8d5497053918f71a7785a231baba9a81ea352a3.scope.
Oct  1 13:09:02 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:09:02 np0005464891 podman[310129]: 2025-10-01 17:09:02.396033533 +0000 UTC m=+0.515307652 container init e8ee18a00e6fa86ae5ea09dfb8d5497053918f71a7785a231baba9a81ea352a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 13:09:02 np0005464891 podman[310129]: 2025-10-01 17:09:02.408046645 +0000 UTC m=+0.527320704 container start e8ee18a00e6fa86ae5ea09dfb8d5497053918f71a7785a231baba9a81ea352a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  1 13:09:02 np0005464891 serene_turing[310161]: 167 167
Oct  1 13:09:02 np0005464891 systemd[1]: libpod-e8ee18a00e6fa86ae5ea09dfb8d5497053918f71a7785a231baba9a81ea352a3.scope: Deactivated successfully.
Oct  1 13:09:02 np0005464891 nova_compute[259907]: 2025-10-01 17:09:02.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:02 np0005464891 podman[310129]: 2025-10-01 17:09:02.583128639 +0000 UTC m=+0.702402738 container attach e8ee18a00e6fa86ae5ea09dfb8d5497053918f71a7785a231baba9a81ea352a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_turing, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  1 13:09:02 np0005464891 podman[310129]: 2025-10-01 17:09:02.583827878 +0000 UTC m=+0.703101897 container died e8ee18a00e6fa86ae5ea09dfb8d5497053918f71a7785a231baba9a81ea352a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:09:02 np0005464891 systemd[1]: var-lib-containers-storage-overlay-e8dbed15212bb9b8e1482dbe208c04972a920723464c8504a782061eb6d7bdcc-merged.mount: Deactivated successfully.
Oct  1 13:09:03 np0005464891 podman[310129]: 2025-10-01 17:09:03.003358155 +0000 UTC m=+1.122632194 container remove e8ee18a00e6fa86ae5ea09dfb8d5497053918f71a7785a231baba9a81ea352a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_turing, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  1 13:09:03 np0005464891 systemd[1]: libpod-conmon-e8ee18a00e6fa86ae5ea09dfb8d5497053918f71a7785a231baba9a81ea352a3.scope: Deactivated successfully.
Oct  1 13:09:03 np0005464891 podman[310143]: 2025-10-01 17:09:03.046759785 +0000 UTC m=+0.977319342 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  1 13:09:03 np0005464891 podman[310195]: 2025-10-01 17:09:03.278659319 +0000 UTC m=+0.109906026 container create 26340c7b01a52f20876dd232a501f1d1257d68d56856368a58352f57184823bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:09:03 np0005464891 podman[310195]: 2025-10-01 17:09:03.21136319 +0000 UTC m=+0.042609947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:09:03 np0005464891 systemd[1]: Started libpod-conmon-26340c7b01a52f20876dd232a501f1d1257d68d56856368a58352f57184823bc.scope.
Oct  1 13:09:03 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:09:03 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1db7c986cb651d2eaad509ab29ec08a77e0e20192754f9ede3d90629d3363a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:09:03 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1db7c986cb651d2eaad509ab29ec08a77e0e20192754f9ede3d90629d3363a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:09:03 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1db7c986cb651d2eaad509ab29ec08a77e0e20192754f9ede3d90629d3363a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:09:03 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1db7c986cb651d2eaad509ab29ec08a77e0e20192754f9ede3d90629d3363a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:09:03 np0005464891 podman[310195]: 2025-10-01 17:09:03.447198664 +0000 UTC m=+0.278445411 container init 26340c7b01a52f20876dd232a501f1d1257d68d56856368a58352f57184823bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_joliot, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:09:03 np0005464891 podman[310195]: 2025-10-01 17:09:03.457522488 +0000 UTC m=+0.288769195 container start 26340c7b01a52f20876dd232a501f1d1257d68d56856368a58352f57184823bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_joliot, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:09:03 np0005464891 podman[310195]: 2025-10-01 17:09:03.493920083 +0000 UTC m=+0.325166800 container attach 26340c7b01a52f20876dd232a501f1d1257d68d56856368a58352f57184823bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_joliot, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:09:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2139: 321 pgs: 321 active+clean; 339 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 5.5 MiB/s wr, 34 op/s
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]: {
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:    "0": [
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:        {
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "devices": [
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "/dev/loop3"
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            ],
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "lv_name": "ceph_lv0",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "lv_size": "21470642176",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "name": "ceph_lv0",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "tags": {
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.cluster_name": "ceph",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.crush_device_class": "",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.encrypted": "0",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.osd_id": "0",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.type": "block",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.vdo": "0"
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            },
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "type": "block",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "vg_name": "ceph_vg0"
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:        }
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:    ],
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:    "1": [
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:        {
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "devices": [
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "/dev/loop4"
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            ],
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "lv_name": "ceph_lv1",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "lv_size": "21470642176",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "name": "ceph_lv1",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "tags": {
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.cluster_name": "ceph",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.crush_device_class": "",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.encrypted": "0",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.osd_id": "1",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.type": "block",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.vdo": "0"
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            },
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "type": "block",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "vg_name": "ceph_vg1"
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:        }
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:    ],
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:    "2": [
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:        {
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "devices": [
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "/dev/loop5"
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            ],
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "lv_name": "ceph_lv2",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "lv_size": "21470642176",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "name": "ceph_lv2",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "tags": {
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.cluster_name": "ceph",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.crush_device_class": "",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.encrypted": "0",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.osd_id": "2",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.type": "block",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:                "ceph.vdo": "0"
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            },
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "type": "block",
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:            "vg_name": "ceph_vg2"
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:        }
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]:    ]
Oct  1 13:09:04 np0005464891 awesome_joliot[310212]: }
Oct  1 13:09:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:09:04 np0005464891 systemd[1]: libpod-26340c7b01a52f20876dd232a501f1d1257d68d56856368a58352f57184823bc.scope: Deactivated successfully.
Oct  1 13:09:04 np0005464891 podman[310195]: 2025-10-01 17:09:04.174627863 +0000 UTC m=+1.005874560 container died 26340c7b01a52f20876dd232a501f1d1257d68d56856368a58352f57184823bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_joliot, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:09:04 np0005464891 systemd[1]: var-lib-containers-storage-overlay-d1db7c986cb651d2eaad509ab29ec08a77e0e20192754f9ede3d90629d3363a5-merged.mount: Deactivated successfully.
Oct  1 13:09:04 np0005464891 podman[310195]: 2025-10-01 17:09:04.325790088 +0000 UTC m=+1.157036755 container remove 26340c7b01a52f20876dd232a501f1d1257d68d56856368a58352f57184823bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 13:09:04 np0005464891 systemd[1]: libpod-conmon-26340c7b01a52f20876dd232a501f1d1257d68d56856368a58352f57184823bc.scope: Deactivated successfully.
Oct  1 13:09:04 np0005464891 nova_compute[259907]: 2025-10-01 17:09:04.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:05 np0005464891 podman[310372]: 2025-10-01 17:09:05.12138978 +0000 UTC m=+0.097094102 container create 3abbd92a8b6809d170e8df0fc24514b890a4ab0c0c49139ea5aa7c12b042b6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 13:09:05 np0005464891 podman[310372]: 2025-10-01 17:09:05.060760956 +0000 UTC m=+0.036465298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:09:05 np0005464891 systemd[1]: Started libpod-conmon-3abbd92a8b6809d170e8df0fc24514b890a4ab0c0c49139ea5aa7c12b042b6a2.scope.
Oct  1 13:09:05 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:09:05 np0005464891 podman[310372]: 2025-10-01 17:09:05.279323893 +0000 UTC m=+0.255028325 container init 3abbd92a8b6809d170e8df0fc24514b890a4ab0c0c49139ea5aa7c12b042b6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_allen, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 13:09:05 np0005464891 podman[310372]: 2025-10-01 17:09:05.290224614 +0000 UTC m=+0.265928976 container start 3abbd92a8b6809d170e8df0fc24514b890a4ab0c0c49139ea5aa7c12b042b6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_allen, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 13:09:05 np0005464891 exciting_allen[310388]: 167 167
Oct  1 13:09:05 np0005464891 systemd[1]: libpod-3abbd92a8b6809d170e8df0fc24514b890a4ab0c0c49139ea5aa7c12b042b6a2.scope: Deactivated successfully.
Oct  1 13:09:05 np0005464891 conmon[310388]: conmon 3abbd92a8b6809d170e8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3abbd92a8b6809d170e8df0fc24514b890a4ab0c0c49139ea5aa7c12b042b6a2.scope/container/memory.events
Oct  1 13:09:05 np0005464891 podman[310372]: 2025-10-01 17:09:05.307942133 +0000 UTC m=+0.283646505 container attach 3abbd92a8b6809d170e8df0fc24514b890a4ab0c0c49139ea5aa7c12b042b6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_allen, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 13:09:05 np0005464891 podman[310372]: 2025-10-01 17:09:05.309711541 +0000 UTC m=+0.285415873 container died 3abbd92a8b6809d170e8df0fc24514b890a4ab0c0c49139ea5aa7c12b042b6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  1 13:09:05 np0005464891 systemd[1]: var-lib-containers-storage-overlay-9417f0440ef52e724b1189f5eb3529cc5baebc36a02aca1110e5383a3e81d6aa-merged.mount: Deactivated successfully.
Oct  1 13:09:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2140: 321 pgs: 321 active+clean; 339 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 5.5 MiB/s wr, 34 op/s
Oct  1 13:09:05 np0005464891 podman[310372]: 2025-10-01 17:09:05.70902414 +0000 UTC m=+0.684728472 container remove 3abbd92a8b6809d170e8df0fc24514b890a4ab0c0c49139ea5aa7c12b042b6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_allen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:09:05 np0005464891 systemd[1]: libpod-conmon-3abbd92a8b6809d170e8df0fc24514b890a4ab0c0c49139ea5aa7c12b042b6a2.scope: Deactivated successfully.
Oct  1 13:09:05 np0005464891 podman[310415]: 2025-10-01 17:09:05.992631492 +0000 UTC m=+0.083984140 container create 802419425e1b4b0f52a7b2683fffe9264c50c69d4eddd4330ce54e9a738e22df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:09:06 np0005464891 podman[310415]: 2025-10-01 17:09:05.944679938 +0000 UTC m=+0.036032636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:09:06 np0005464891 systemd[1]: Started libpod-conmon-802419425e1b4b0f52a7b2683fffe9264c50c69d4eddd4330ce54e9a738e22df.scope.
Oct  1 13:09:06 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:09:06 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a609828aa74ff5ab3c76cfa2467714c7f489054e87cdf19754b0378afeff202/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:09:06 np0005464891 podman[310429]: 2025-10-01 17:09:06.109359226 +0000 UTC m=+0.073316696 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  1 13:09:06 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a609828aa74ff5ab3c76cfa2467714c7f489054e87cdf19754b0378afeff202/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:09:06 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a609828aa74ff5ab3c76cfa2467714c7f489054e87cdf19754b0378afeff202/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:09:06 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a609828aa74ff5ab3c76cfa2467714c7f489054e87cdf19754b0378afeff202/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:09:06 np0005464891 podman[310415]: 2025-10-01 17:09:06.163330696 +0000 UTC m=+0.254683364 container init 802419425e1b4b0f52a7b2683fffe9264c50c69d4eddd4330ce54e9a738e22df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bartik, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  1 13:09:06 np0005464891 podman[310415]: 2025-10-01 17:09:06.171853082 +0000 UTC m=+0.263205730 container start 802419425e1b4b0f52a7b2683fffe9264c50c69d4eddd4330ce54e9a738e22df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:09:06 np0005464891 podman[310415]: 2025-10-01 17:09:06.198748505 +0000 UTC m=+0.290101153 container attach 802419425e1b4b0f52a7b2683fffe9264c50c69d4eddd4330ce54e9a738e22df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:09:07 np0005464891 amazing_bartik[310444]: {
Oct  1 13:09:07 np0005464891 amazing_bartik[310444]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 13:09:07 np0005464891 amazing_bartik[310444]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:09:07 np0005464891 amazing_bartik[310444]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 13:09:07 np0005464891 amazing_bartik[310444]:        "osd_id": 2,
Oct  1 13:09:07 np0005464891 amazing_bartik[310444]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:09:07 np0005464891 amazing_bartik[310444]:        "type": "bluestore"
Oct  1 13:09:07 np0005464891 amazing_bartik[310444]:    },
Oct  1 13:09:07 np0005464891 amazing_bartik[310444]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 13:09:07 np0005464891 amazing_bartik[310444]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:09:07 np0005464891 amazing_bartik[310444]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 13:09:07 np0005464891 amazing_bartik[310444]:        "osd_id": 0,
Oct  1 13:09:07 np0005464891 amazing_bartik[310444]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:09:07 np0005464891 amazing_bartik[310444]:        "type": "bluestore"
Oct  1 13:09:07 np0005464891 amazing_bartik[310444]:    },
Oct  1 13:09:07 np0005464891 amazing_bartik[310444]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 13:09:07 np0005464891 amazing_bartik[310444]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:09:07 np0005464891 amazing_bartik[310444]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 13:09:07 np0005464891 amazing_bartik[310444]:        "osd_id": 1,
Oct  1 13:09:07 np0005464891 amazing_bartik[310444]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:09:07 np0005464891 amazing_bartik[310444]:        "type": "bluestore"
Oct  1 13:09:07 np0005464891 amazing_bartik[310444]:    }
Oct  1 13:09:07 np0005464891 amazing_bartik[310444]: }
Oct  1 13:09:07 np0005464891 systemd[1]: libpod-802419425e1b4b0f52a7b2683fffe9264c50c69d4eddd4330ce54e9a738e22df.scope: Deactivated successfully.
Oct  1 13:09:07 np0005464891 systemd[1]: libpod-802419425e1b4b0f52a7b2683fffe9264c50c69d4eddd4330ce54e9a738e22df.scope: Consumed 1.034s CPU time.
Oct  1 13:09:07 np0005464891 podman[310415]: 2025-10-01 17:09:07.203995877 +0000 UTC m=+1.295348495 container died 802419425e1b4b0f52a7b2683fffe9264c50c69d4eddd4330ce54e9a738e22df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  1 13:09:07 np0005464891 systemd[1]: var-lib-containers-storage-overlay-5a609828aa74ff5ab3c76cfa2467714c7f489054e87cdf19754b0378afeff202-merged.mount: Deactivated successfully.
Oct  1 13:09:07 np0005464891 podman[310415]: 2025-10-01 17:09:07.267170591 +0000 UTC m=+1.358523199 container remove 802419425e1b4b0f52a7b2683fffe9264c50c69d4eddd4330ce54e9a738e22df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct  1 13:09:07 np0005464891 systemd[1]: libpod-conmon-802419425e1b4b0f52a7b2683fffe9264c50c69d4eddd4330ce54e9a738e22df.scope: Deactivated successfully.
Oct  1 13:09:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 13:09:07 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:09:07 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 13:09:07 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:09:07 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev d28dffce-621b-4b98-9b08-466816af6cb7 does not exist
Oct  1 13:09:07 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 1ddfdf0b-e6b4-471d-9a71-c42ebaef9f9f does not exist
Oct  1 13:09:07 np0005464891 nova_compute[259907]: 2025-10-01 17:09:07.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2141: 321 pgs: 321 active+clean; 381 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 9.0 MiB/s wr, 40 op/s
Oct  1 13:09:08 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:09:08 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:09:08 np0005464891 podman[310549]: 2025-10-01 17:09:08.995212415 +0000 UTC m=+0.096239099 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  1 13:09:09 np0005464891 nova_compute[259907]: 2025-10-01 17:09:09.098 2 DEBUG oslo_concurrency.lockutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "f58b995c-9c33-443c-9c3c-715eb493032f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:09:09 np0005464891 nova_compute[259907]: 2025-10-01 17:09:09.098 2 DEBUG oslo_concurrency.lockutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "f58b995c-9c33-443c-9c3c-715eb493032f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:09:09 np0005464891 nova_compute[259907]: 2025-10-01 17:09:09.114 2 DEBUG nova.compute.manager [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 13:09:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:09:09 np0005464891 nova_compute[259907]: 2025-10-01 17:09:09.184 2 DEBUG oslo_concurrency.lockutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:09:09 np0005464891 nova_compute[259907]: 2025-10-01 17:09:09.184 2 DEBUG oslo_concurrency.lockutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:09:09 np0005464891 nova_compute[259907]: 2025-10-01 17:09:09.193 2 DEBUG nova.virt.hardware [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 13:09:09 np0005464891 nova_compute[259907]: 2025-10-01 17:09:09.194 2 INFO nova.compute.claims [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 13:09:09 np0005464891 nova_compute[259907]: 2025-10-01 17:09:09.308 2 DEBUG oslo_concurrency.processutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:09:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2142: 321 pgs: 321 active+clean; 385 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct  1 13:09:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:09:09 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3802891127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:09:09 np0005464891 nova_compute[259907]: 2025-10-01 17:09:09.736 2 DEBUG oslo_concurrency.processutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:09:09 np0005464891 nova_compute[259907]: 2025-10-01 17:09:09.744 2 DEBUG nova.compute.provider_tree [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:09:09 np0005464891 nova_compute[259907]: 2025-10-01 17:09:09.863 2 DEBUG nova.scheduler.client.report [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:09:09 np0005464891 nova_compute[259907]: 2025-10-01 17:09:09.890 2 DEBUG oslo_concurrency.lockutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:09:09 np0005464891 nova_compute[259907]: 2025-10-01 17:09:09.891 2 DEBUG nova.compute.manager [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 13:09:09 np0005464891 nova_compute[259907]: 2025-10-01 17:09:09.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:09 np0005464891 nova_compute[259907]: 2025-10-01 17:09:09.936 2 DEBUG nova.compute.manager [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 13:09:09 np0005464891 nova_compute[259907]: 2025-10-01 17:09:09.937 2 DEBUG nova.network.neutron [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 13:09:09 np0005464891 nova_compute[259907]: 2025-10-01 17:09:09.959 2 INFO nova.virt.libvirt.driver [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 13:09:09 np0005464891 nova_compute[259907]: 2025-10-01 17:09:09.978 2 DEBUG nova.compute.manager [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 13:09:10 np0005464891 nova_compute[259907]: 2025-10-01 17:09:10.023 2 INFO nova.virt.block_device [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Booting with volume 716796d4-34be-42fb-b848-e2b478eb2841 at /dev/vda#033[00m
Oct  1 13:09:10 np0005464891 nova_compute[259907]: 2025-10-01 17:09:10.146 2 DEBUG os_brick.utils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 13:09:10 np0005464891 nova_compute[259907]: 2025-10-01 17:09:10.147 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:09:10 np0005464891 nova_compute[259907]: 2025-10-01 17:09:10.158 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:09:10 np0005464891 nova_compute[259907]: 2025-10-01 17:09:10.159 741 DEBUG oslo.privsep.daemon [-] privsep: reply[59c9fac7-ea12-4def-806f-8767ed28517e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:10 np0005464891 nova_compute[259907]: 2025-10-01 17:09:10.161 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:09:10 np0005464891 nova_compute[259907]: 2025-10-01 17:09:10.173 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:09:10 np0005464891 nova_compute[259907]: 2025-10-01 17:09:10.174 741 DEBUG oslo.privsep.daemon [-] privsep: reply[6ce3f9eb-292f-4cf4-afc0-47c03815600f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:10 np0005464891 nova_compute[259907]: 2025-10-01 17:09:10.176 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:09:10 np0005464891 nova_compute[259907]: 2025-10-01 17:09:10.189 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:09:10 np0005464891 nova_compute[259907]: 2025-10-01 17:09:10.190 741 DEBUG oslo.privsep.daemon [-] privsep: reply[76795c4e-9360-4018-958d-d0c4c7993a82]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:10 np0005464891 nova_compute[259907]: 2025-10-01 17:09:10.192 741 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d1146e-5c4b-4463-924d-c52be1da4877]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:10 np0005464891 nova_compute[259907]: 2025-10-01 17:09:10.193 2 DEBUG oslo_concurrency.processutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:09:10 np0005464891 nova_compute[259907]: 2025-10-01 17:09:10.229 2 DEBUG oslo_concurrency.processutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:09:10 np0005464891 nova_compute[259907]: 2025-10-01 17:09:10.232 2 DEBUG os_brick.initiator.connectors.lightos [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 13:09:10 np0005464891 nova_compute[259907]: 2025-10-01 17:09:10.233 2 DEBUG os_brick.initiator.connectors.lightos [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 13:09:10 np0005464891 nova_compute[259907]: 2025-10-01 17:09:10.233 2 DEBUG os_brick.initiator.connectors.lightos [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 13:09:10 np0005464891 nova_compute[259907]: 2025-10-01 17:09:10.234 2 DEBUG os_brick.utils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] <== get_connector_properties: return (87ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 13:09:10 np0005464891 nova_compute[259907]: 2025-10-01 17:09:10.234 2 DEBUG nova.virt.block_device [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Updating existing volume attachment record: c8c13bbc-c9e5-4453-a109-b07894dedf8f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 13:09:10 np0005464891 nova_compute[259907]: 2025-10-01 17:09:10.775 2 DEBUG nova.policy [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c440275c1a1e4cf09fcf789374345bb2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7101f2ff48f540a08f6ec15b324152c6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 13:09:10 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:09:10 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3203225842' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:09:11 np0005464891 nova_compute[259907]: 2025-10-01 17:09:11.463 2 DEBUG nova.compute.manager [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 13:09:11 np0005464891 nova_compute[259907]: 2025-10-01 17:09:11.465 2 DEBUG nova.virt.libvirt.driver [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 13:09:11 np0005464891 nova_compute[259907]: 2025-10-01 17:09:11.466 2 INFO nova.virt.libvirt.driver [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Creating image(s)#033[00m
Oct  1 13:09:11 np0005464891 nova_compute[259907]: 2025-10-01 17:09:11.466 2 DEBUG nova.virt.libvirt.driver [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  1 13:09:11 np0005464891 nova_compute[259907]: 2025-10-01 17:09:11.467 2 DEBUG nova.virt.libvirt.driver [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Ensure instance console log exists: /var/lib/nova/instances/f58b995c-9c33-443c-9c3c-715eb493032f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 13:09:11 np0005464891 nova_compute[259907]: 2025-10-01 17:09:11.467 2 DEBUG oslo_concurrency.lockutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:09:11 np0005464891 nova_compute[259907]: 2025-10-01 17:09:11.468 2 DEBUG oslo_concurrency.lockutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:09:11 np0005464891 nova_compute[259907]: 2025-10-01 17:09:11.468 2 DEBUG oslo_concurrency.lockutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:09:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2143: 321 pgs: 321 active+clean; 385 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct  1 13:09:11 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:11.633 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:09:11 np0005464891 nova_compute[259907]: 2025-10-01 17:09:11.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:11 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:11.635 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 13:09:11 np0005464891 nova_compute[259907]: 2025-10-01 17:09:11.727 2 DEBUG nova.network.neutron [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Successfully created port: 2f3b7601-86e2-45bc-9d3d-f75a39660a96 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 13:09:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:09:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:09:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:09:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:09:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:09:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:09:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_17:09:12
Oct  1 13:09:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 13:09:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 13:09:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'volumes', 'vms', 'default.rgw.log', '.mgr', '.rgw.root']
Oct  1 13:09:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 13:09:12 np0005464891 nova_compute[259907]: 2025-10-01 17:09:12.368 2 DEBUG nova.network.neutron [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Successfully updated port: 2f3b7601-86e2-45bc-9d3d-f75a39660a96 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 13:09:12 np0005464891 nova_compute[259907]: 2025-10-01 17:09:12.386 2 DEBUG oslo_concurrency.lockutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "refresh_cache-f58b995c-9c33-443c-9c3c-715eb493032f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:09:12 np0005464891 nova_compute[259907]: 2025-10-01 17:09:12.387 2 DEBUG oslo_concurrency.lockutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquired lock "refresh_cache-f58b995c-9c33-443c-9c3c-715eb493032f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:09:12 np0005464891 nova_compute[259907]: 2025-10-01 17:09:12.387 2 DEBUG nova.network.neutron [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 13:09:12 np0005464891 nova_compute[259907]: 2025-10-01 17:09:12.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:12.470 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:09:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:12.470 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:09:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:12.470 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:09:12 np0005464891 nova_compute[259907]: 2025-10-01 17:09:12.472 2 DEBUG nova.compute.manager [req-cdfca070-615b-45e8-8107-ae83150d0dec req-30cb728e-0fca-4e91-b84d-99460af1c179 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Received event network-changed-2f3b7601-86e2-45bc-9d3d-f75a39660a96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:09:12 np0005464891 nova_compute[259907]: 2025-10-01 17:09:12.473 2 DEBUG nova.compute.manager [req-cdfca070-615b-45e8-8107-ae83150d0dec req-30cb728e-0fca-4e91-b84d-99460af1c179 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Refreshing instance network info cache due to event network-changed-2f3b7601-86e2-45bc-9d3d-f75a39660a96. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 13:09:12 np0005464891 nova_compute[259907]: 2025-10-01 17:09:12.473 2 DEBUG oslo_concurrency.lockutils [req-cdfca070-615b-45e8-8107-ae83150d0dec req-30cb728e-0fca-4e91-b84d-99460af1c179 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-f58b995c-9c33-443c-9c3c-715eb493032f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:09:12 np0005464891 nova_compute[259907]: 2025-10-01 17:09:12.517 2 DEBUG nova.network.neutron [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 13:09:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 13:09:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:09:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 13:09:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:09:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:09:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:09:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:09:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:09:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:09:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:09:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2144: 321 pgs: 321 active+clean; 385 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.599 2 DEBUG nova.network.neutron [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Updating instance_info_cache with network_info: [{"id": "2f3b7601-86e2-45bc-9d3d-f75a39660a96", "address": "fa:16:3e:20:21:b5", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f3b7601-86", "ovs_interfaceid": "2f3b7601-86e2-45bc-9d3d-f75a39660a96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.616 2 DEBUG oslo_concurrency.lockutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Releasing lock "refresh_cache-f58b995c-9c33-443c-9c3c-715eb493032f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.616 2 DEBUG nova.compute.manager [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Instance network_info: |[{"id": "2f3b7601-86e2-45bc-9d3d-f75a39660a96", "address": "fa:16:3e:20:21:b5", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f3b7601-86", "ovs_interfaceid": "2f3b7601-86e2-45bc-9d3d-f75a39660a96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.617 2 DEBUG oslo_concurrency.lockutils [req-cdfca070-615b-45e8-8107-ae83150d0dec req-30cb728e-0fca-4e91-b84d-99460af1c179 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-f58b995c-9c33-443c-9c3c-715eb493032f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.618 2 DEBUG nova.network.neutron [req-cdfca070-615b-45e8-8107-ae83150d0dec req-30cb728e-0fca-4e91-b84d-99460af1c179 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Refreshing network info cache for port 2f3b7601-86e2-45bc-9d3d-f75a39660a96 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.624 2 DEBUG nova.virt.libvirt.driver [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Start _get_guest_xml network_info=[{"id": "2f3b7601-86e2-45bc-9d3d-f75a39660a96", "address": "fa:16:3e:20:21:b5", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f3b7601-86", "ovs_interfaceid": "2f3b7601-86e2-45bc-9d3d-f75a39660a96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'mount_device': '/dev/vda', 'attachment_id': 'c8c13bbc-c9e5-4453-a109-b07894dedf8f', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-716796d4-34be-42fb-b848-e2b478eb2841', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '716796d4-34be-42fb-b848-e2b478eb2841', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'f58b995c-9c33-443c-9c3c-715eb493032f', 'attached_at': '', 'detached_at': '', 'volume_id': '716796d4-34be-42fb-b848-e2b478eb2841', 'serial': '716796d4-34be-42fb-b848-e2b478eb2841'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.632 2 WARNING nova.virt.libvirt.driver [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.644 2 DEBUG nova.virt.libvirt.host [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.645 2 DEBUG nova.virt.libvirt.host [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.652 2 DEBUG nova.virt.libvirt.host [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.653 2 DEBUG nova.virt.libvirt.host [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.653 2 DEBUG nova.virt.libvirt.driver [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.654 2 DEBUG nova.virt.hardware [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.655 2 DEBUG nova.virt.hardware [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.655 2 DEBUG nova.virt.hardware [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.655 2 DEBUG nova.virt.hardware [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.656 2 DEBUG nova.virt.hardware [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.656 2 DEBUG nova.virt.hardware [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.657 2 DEBUG nova.virt.hardware [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.657 2 DEBUG nova.virt.hardware [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.658 2 DEBUG nova.virt.hardware [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.658 2 DEBUG nova.virt.hardware [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.659 2 DEBUG nova.virt.hardware [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.696 2 DEBUG nova.storage.rbd_utils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] rbd image f58b995c-9c33-443c-9c3c-715eb493032f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:09:13 np0005464891 nova_compute[259907]: 2025-10-01 17:09:13.700 2 DEBUG oslo_concurrency.processutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:09:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:09:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3845196037' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:09:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.177 2 DEBUG oslo_concurrency.processutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.385 2 DEBUG os_brick.encryptors [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Using volume encryption metadata '{'encryption_key_id': '96bc1574-0517-49ea-b2b8-c9e046f42770', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-716796d4-34be-42fb-b848-e2b478eb2841', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '716796d4-34be-42fb-b848-e2b478eb2841', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'f58b995c-9c33-443c-9c3c-715eb493032f', 'attached_at': '', 'detached_at': '', 'volume_id': '716796d4-34be-42fb-b848-e2b478eb2841', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.387 2 DEBUG barbicanclient.client [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.399 2 DEBUG barbicanclient.v1.secrets [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/96bc1574-0517-49ea-b2b8-c9e046f42770 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.399 2 INFO barbicanclient.base [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/96bc1574-0517-49ea-b2b8-c9e046f42770#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.419 2 DEBUG barbicanclient.client [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.419 2 INFO barbicanclient.base [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/96bc1574-0517-49ea-b2b8-c9e046f42770#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.438 2 DEBUG barbicanclient.client [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.439 2 INFO barbicanclient.base [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/96bc1574-0517-49ea-b2b8-c9e046f42770#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.465 2 DEBUG barbicanclient.client [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.466 2 INFO barbicanclient.base [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/96bc1574-0517-49ea-b2b8-c9e046f42770#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.491 2 DEBUG barbicanclient.client [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.491 2 INFO barbicanclient.base [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/96bc1574-0517-49ea-b2b8-c9e046f42770#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.509 2 DEBUG barbicanclient.client [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.509 2 INFO barbicanclient.base [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/96bc1574-0517-49ea-b2b8-c9e046f42770#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.535 2 DEBUG barbicanclient.client [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.536 2 INFO barbicanclient.base [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/96bc1574-0517-49ea-b2b8-c9e046f42770#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.574 2 DEBUG barbicanclient.client [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.575 2 INFO barbicanclient.base [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/96bc1574-0517-49ea-b2b8-c9e046f42770#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.592 2 DEBUG barbicanclient.client [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.593 2 INFO barbicanclient.base [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/96bc1574-0517-49ea-b2b8-c9e046f42770#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.611 2 DEBUG barbicanclient.client [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.611 2 INFO barbicanclient.base [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/96bc1574-0517-49ea-b2b8-c9e046f42770#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.632 2 DEBUG barbicanclient.client [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.633 2 INFO barbicanclient.base [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/96bc1574-0517-49ea-b2b8-c9e046f42770#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.650 2 DEBUG barbicanclient.client [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.651 2 INFO barbicanclient.base [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/96bc1574-0517-49ea-b2b8-c9e046f42770#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.691 2 DEBUG barbicanclient.client [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.692 2 INFO barbicanclient.base [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/96bc1574-0517-49ea-b2b8-c9e046f42770#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.714 2 DEBUG barbicanclient.client [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.715 2 INFO barbicanclient.base [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/96bc1574-0517-49ea-b2b8-c9e046f42770#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.744 2 DEBUG barbicanclient.client [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.744 2 INFO barbicanclient.base [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/96bc1574-0517-49ea-b2b8-c9e046f42770#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.762 2 DEBUG barbicanclient.client [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.763 2 DEBUG nova.virt.libvirt.host [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct  1 13:09:14 np0005464891 nova_compute[259907]:  <usage type="volume">
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <volume>716796d4-34be-42fb-b848-e2b478eb2841</volume>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:  </usage>
Oct  1 13:09:14 np0005464891 nova_compute[259907]: </secret>
Oct  1 13:09:14 np0005464891 nova_compute[259907]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.791 2 DEBUG nova.virt.libvirt.vif [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T17:09:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1723789885',display_name='tempest-TransferEncryptedVolumeTest-server-1723789885',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1723789885',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDg5DQ7zjtFPcwFlQ5c4RmSqdiymCwHuuIH20+rbjv/O1v35DyytGl6//xUvotUS7Kzw36qLhLq5I09Wysu4SY0CkP602jXi/K2rnz98jI+qtsF54Xtpb6f0pP7J8Fn4NQ==',key_name='tempest-TransferEncryptedVolumeTest-797389342',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7101f2ff48f540a08f6ec15b324152c6',ramdisk_id='',reservation_id='r-gmml47cn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1550217158',owner_user_name='tempest-TransferEncryptedVolumeTest-1550217158-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T17:09:10Z,user_data=None,user_id='c440275c1a1e4cf09fcf789374345bb2',uuid=f58b995c-9c33-443c-9c3c-715eb493032f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2f3b7601-86e2-45bc-9d3d-f75a39660a96", "address": "fa:16:3e:20:21:b5", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f3b7601-86", "ovs_interfaceid": "2f3b7601-86e2-45bc-9d3d-f75a39660a96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.792 2 DEBUG nova.network.os_vif_util [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Converting VIF {"id": "2f3b7601-86e2-45bc-9d3d-f75a39660a96", "address": "fa:16:3e:20:21:b5", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f3b7601-86", "ovs_interfaceid": "2f3b7601-86e2-45bc-9d3d-f75a39660a96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.792 2 DEBUG nova.network.os_vif_util [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:21:b5,bridge_name='br-int',has_traffic_filtering=True,id=2f3b7601-86e2-45bc-9d3d-f75a39660a96,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f3b7601-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.794 2 DEBUG nova.objects.instance [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lazy-loading 'pci_devices' on Instance uuid f58b995c-9c33-443c-9c3c-715eb493032f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.806 2 DEBUG nova.virt.libvirt.driver [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] End _get_guest_xml xml=<domain type="kvm">
Oct  1 13:09:14 np0005464891 nova_compute[259907]:  <uuid>f58b995c-9c33-443c-9c3c-715eb493032f</uuid>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:  <name>instance-0000001b</name>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-1723789885</nova:name>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 17:09:13</nova:creationTime>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 13:09:14 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:        <nova:user uuid="c440275c1a1e4cf09fcf789374345bb2">tempest-TransferEncryptedVolumeTest-1550217158-project-member</nova:user>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:        <nova:project uuid="7101f2ff48f540a08f6ec15b324152c6">tempest-TransferEncryptedVolumeTest-1550217158</nova:project>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:        <nova:port uuid="2f3b7601-86e2-45bc-9d3d-f75a39660a96">
Oct  1 13:09:14 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <system>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <entry name="serial">f58b995c-9c33-443c-9c3c-715eb493032f</entry>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <entry name="uuid">f58b995c-9c33-443c-9c3c-715eb493032f</entry>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    </system>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:  <os>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:  </os>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:  <features>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:  </features>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:  </clock>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:  <devices>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/f58b995c-9c33-443c-9c3c-715eb493032f_disk.config">
Oct  1 13:09:14 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      </source>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 13:09:14 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      </auth>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    </disk>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="volumes/volume-716796d4-34be-42fb-b848-e2b478eb2841">
Oct  1 13:09:14 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      </source>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 13:09:14 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      </auth>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <serial>716796d4-34be-42fb-b848-e2b478eb2841</serial>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <encryption format="luks">
Oct  1 13:09:14 np0005464891 nova_compute[259907]:        <secret type="passphrase" uuid="d5471ea7-5150-4406-a310-2b37bac36435"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      </encryption>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    </disk>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:20:21:b5"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <target dev="tap2f3b7601-86"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    </interface>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/f58b995c-9c33-443c-9c3c-715eb493032f/console.log" append="off"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    </serial>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <video>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    </video>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    </rng>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 13:09:14 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 13:09:14 np0005464891 nova_compute[259907]:  </devices>
Oct  1 13:09:14 np0005464891 nova_compute[259907]: </domain>
Oct  1 13:09:14 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.808 2 DEBUG nova.compute.manager [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Preparing to wait for external event network-vif-plugged-2f3b7601-86e2-45bc-9d3d-f75a39660a96 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.808 2 DEBUG oslo_concurrency.lockutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "f58b995c-9c33-443c-9c3c-715eb493032f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.808 2 DEBUG oslo_concurrency.lockutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "f58b995c-9c33-443c-9c3c-715eb493032f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.808 2 DEBUG oslo_concurrency.lockutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "f58b995c-9c33-443c-9c3c-715eb493032f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.809 2 DEBUG nova.virt.libvirt.vif [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T17:09:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1723789885',display_name='tempest-TransferEncryptedVolumeTest-server-1723789885',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1723789885',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDg5DQ7zjtFPcwFlQ5c4RmSqdiymCwHuuIH20+rbjv/O1v35DyytGl6//xUvotUS7Kzw36qLhLq5I09Wysu4SY0CkP602jXi/K2rnz98jI+qtsF54Xtpb6f0pP7J8Fn4NQ==',key_name='tempest-TransferEncryptedVolumeTest-797389342',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7101f2ff48f540a08f6ec15b324152c6',ramdisk_id='',reservation_id='r-gmml47cn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1550217158',owner_user_name='tempest-TransferEncryptedVolumeTest-1550217158-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T17:09:10Z,user_data=None,user_id='c440275c1a1e4cf09fcf789374345bb2',uuid=f58b995c-9c33-443c-9c3c-715eb493032f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2f3b7601-86e2-45bc-9d3d-f75a39660a96", "address": "fa:16:3e:20:21:b5", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f3b7601-86", "ovs_interfaceid": "2f3b7601-86e2-45bc-9d3d-f75a39660a96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.809 2 DEBUG nova.network.os_vif_util [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Converting VIF {"id": "2f3b7601-86e2-45bc-9d3d-f75a39660a96", "address": "fa:16:3e:20:21:b5", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f3b7601-86", "ovs_interfaceid": "2f3b7601-86e2-45bc-9d3d-f75a39660a96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.810 2 DEBUG nova.network.os_vif_util [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:21:b5,bridge_name='br-int',has_traffic_filtering=True,id=2f3b7601-86e2-45bc-9d3d-f75a39660a96,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f3b7601-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.810 2 DEBUG os_vif [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:21:b5,bridge_name='br-int',has_traffic_filtering=True,id=2f3b7601-86e2-45bc-9d3d-f75a39660a96,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f3b7601-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.811 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.812 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.815 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f3b7601-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.816 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2f3b7601-86, col_values=(('external_ids', {'iface-id': '2f3b7601-86e2-45bc-9d3d-f75a39660a96', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:21:b5', 'vm-uuid': 'f58b995c-9c33-443c-9c3c-715eb493032f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:14 np0005464891 NetworkManager[44940]: <info>  [1759338554.8188] manager: (tap2f3b7601-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.827 2 INFO os_vif [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:21:b5,bridge_name='br-int',has_traffic_filtering=True,id=2f3b7601-86e2-45bc-9d3d-f75a39660a96,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f3b7601-86')#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.875 2 DEBUG nova.virt.libvirt.driver [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.876 2 DEBUG nova.virt.libvirt.driver [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.876 2 DEBUG nova.virt.libvirt.driver [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] No VIF found with MAC fa:16:3e:20:21:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.877 2 INFO nova.virt.libvirt.driver [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Using config drive#033[00m
Oct  1 13:09:14 np0005464891 nova_compute[259907]: 2025-10-01 17:09:14.901 2 DEBUG nova.storage.rbd_utils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] rbd image f58b995c-9c33-443c-9c3c-715eb493032f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:09:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2145: 321 pgs: 321 active+clean; 385 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 3.8 MiB/s wr, 10 op/s
Oct  1 13:09:15 np0005464891 nova_compute[259907]: 2025-10-01 17:09:15.648 2 DEBUG nova.network.neutron [req-cdfca070-615b-45e8-8107-ae83150d0dec req-30cb728e-0fca-4e91-b84d-99460af1c179 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Updated VIF entry in instance network info cache for port 2f3b7601-86e2-45bc-9d3d-f75a39660a96. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 13:09:15 np0005464891 nova_compute[259907]: 2025-10-01 17:09:15.649 2 DEBUG nova.network.neutron [req-cdfca070-615b-45e8-8107-ae83150d0dec req-30cb728e-0fca-4e91-b84d-99460af1c179 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Updating instance_info_cache with network_info: [{"id": "2f3b7601-86e2-45bc-9d3d-f75a39660a96", "address": "fa:16:3e:20:21:b5", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f3b7601-86", "ovs_interfaceid": "2f3b7601-86e2-45bc-9d3d-f75a39660a96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:09:15 np0005464891 nova_compute[259907]: 2025-10-01 17:09:15.667 2 DEBUG oslo_concurrency.lockutils [req-cdfca070-615b-45e8-8107-ae83150d0dec req-30cb728e-0fca-4e91-b84d-99460af1c179 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-f58b995c-9c33-443c-9c3c-715eb493032f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:09:15 np0005464891 nova_compute[259907]: 2025-10-01 17:09:15.795 2 INFO nova.virt.libvirt.driver [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Creating config drive at /var/lib/nova/instances/f58b995c-9c33-443c-9c3c-715eb493032f/disk.config#033[00m
Oct  1 13:09:15 np0005464891 nova_compute[259907]: 2025-10-01 17:09:15.800 2 DEBUG oslo_concurrency.processutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f58b995c-9c33-443c-9c3c-715eb493032f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcnb2nzbz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:09:15 np0005464891 nova_compute[259907]: 2025-10-01 17:09:15.925 2 DEBUG oslo_concurrency.processutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f58b995c-9c33-443c-9c3c-715eb493032f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcnb2nzbz" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:09:15 np0005464891 nova_compute[259907]: 2025-10-01 17:09:15.961 2 DEBUG nova.storage.rbd_utils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] rbd image f58b995c-9c33-443c-9c3c-715eb493032f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:09:15 np0005464891 nova_compute[259907]: 2025-10-01 17:09:15.965 2 DEBUG oslo_concurrency.processutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f58b995c-9c33-443c-9c3c-715eb493032f/disk.config f58b995c-9c33-443c-9c3c-715eb493032f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:09:16 np0005464891 nova_compute[259907]: 2025-10-01 17:09:16.150 2 DEBUG oslo_concurrency.processutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f58b995c-9c33-443c-9c3c-715eb493032f/disk.config f58b995c-9c33-443c-9c3c-715eb493032f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:09:16 np0005464891 nova_compute[259907]: 2025-10-01 17:09:16.151 2 INFO nova.virt.libvirt.driver [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Deleting local config drive /var/lib/nova/instances/f58b995c-9c33-443c-9c3c-715eb493032f/disk.config because it was imported into RBD.#033[00m
Oct  1 13:09:16 np0005464891 kernel: tap2f3b7601-86: entered promiscuous mode
Oct  1 13:09:16 np0005464891 ovn_controller[152409]: 2025-10-01T17:09:16Z|00260|binding|INFO|Claiming lport 2f3b7601-86e2-45bc-9d3d-f75a39660a96 for this chassis.
Oct  1 13:09:16 np0005464891 ovn_controller[152409]: 2025-10-01T17:09:16Z|00261|binding|INFO|2f3b7601-86e2-45bc-9d3d-f75a39660a96: Claiming fa:16:3e:20:21:b5 10.100.0.5
Oct  1 13:09:16 np0005464891 nova_compute[259907]: 2025-10-01 17:09:16.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:16 np0005464891 NetworkManager[44940]: <info>  [1759338556.2296] manager: (tap2f3b7601-86): new Tun device (/org/freedesktop/NetworkManager/Devices/137)
Oct  1 13:09:16 np0005464891 nova_compute[259907]: 2025-10-01 17:09:16.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.260 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:21:b5 10.100.0.5'], port_security=['fa:16:3e:20:21:b5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f58b995c-9c33-443c-9c3c-715eb493032f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d747029d-7cd7-4e92-a356-867cacbb54c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7101f2ff48f540a08f6ec15b324152c6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd4fc8115-d40c-458e-b5f5-c46a5e06662d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ce41f0a-b119-4c05-a337-15f2fa115c91, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=2f3b7601-86e2-45bc-9d3d-f75a39660a96) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.261 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 2f3b7601-86e2-45bc-9d3d-f75a39660a96 in datapath d747029d-7cd7-4e92-a356-867cacbb54c4 bound to our chassis#033[00m
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.263 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d747029d-7cd7-4e92-a356-867cacbb54c4#033[00m
Oct  1 13:09:16 np0005464891 systemd-machined[214891]: New machine qemu-27-instance-0000001b.
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.273 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[20039a79-ab56-47ea-a1ef-10de38c3fd5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.274 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd747029d-71 in ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.276 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd747029d-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.276 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[bc6bc83c-4a6b-43f1-9378-5330a5b829f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.277 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[7cc640a0-e6c1-4589-aa73-ebb9fe54ff62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.291 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[d6b690f8-82a1-4c17-b985-31db4e29636d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:16 np0005464891 systemd[1]: Started Virtual Machine qemu-27-instance-0000001b.
Oct  1 13:09:16 np0005464891 nova_compute[259907]: 2025-10-01 17:09:16.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:16 np0005464891 ovn_controller[152409]: 2025-10-01T17:09:16Z|00262|binding|INFO|Setting lport 2f3b7601-86e2-45bc-9d3d-f75a39660a96 ovn-installed in OVS
Oct  1 13:09:16 np0005464891 ovn_controller[152409]: 2025-10-01T17:09:16Z|00263|binding|INFO|Setting lport 2f3b7601-86e2-45bc-9d3d-f75a39660a96 up in Southbound
Oct  1 13:09:16 np0005464891 nova_compute[259907]: 2025-10-01 17:09:16.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.317 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[ae698e43-f85e-45fd-bab2-be079f9de1ba]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:16 np0005464891 systemd-udevd[310720]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 13:09:16 np0005464891 NetworkManager[44940]: <info>  [1759338556.3329] device (tap2f3b7601-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 13:09:16 np0005464891 NetworkManager[44940]: <info>  [1759338556.3349] device (tap2f3b7601-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.349 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea562cb-6928-4fcd-921f-3503a6814fa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:16 np0005464891 NetworkManager[44940]: <info>  [1759338556.3552] manager: (tapd747029d-70): new Veth device (/org/freedesktop/NetworkManager/Devices/138)
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.355 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[616761df-c673-4581-804a-6f840fed9361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.382 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[5bfdc737-4c0f-47cf-8c47-85bcc19855c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.384 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[ca462fb2-6bc8-433d-bf9b-e73303d74c45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:16 np0005464891 NetworkManager[44940]: <info>  [1759338556.4065] device (tapd747029d-70): carrier: link connected
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.413 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[6eab8430-6627-4da6-b0a0-5548092e7521]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.432 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[822017c0-81b9-44aa-89ce-3ac5571b070c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd747029d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:a1:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550497, 'reachable_time': 30452, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310750, 'error': None, 'target': 'ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.448 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[cda2760f-09c0-444d-a71b-bc2687235ce6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea1:a1a7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550497, 'tstamp': 550497}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310751, 'error': None, 'target': 'ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.466 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa4236f-c83d-4fde-a491-f05a54957820]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd747029d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:a1:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550497, 'reachable_time': 30452, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310752, 'error': None, 'target': 'ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.493 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[5f2c8f75-0fdd-47ae-bfcc-25038a277024]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.566 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[8187105e-d7b5-4234-81a4-18cce1692bcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.568 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd747029d-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.569 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.569 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd747029d-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:09:16 np0005464891 nova_compute[259907]: 2025-10-01 17:09:16.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:16 np0005464891 NetworkManager[44940]: <info>  [1759338556.6193] manager: (tapd747029d-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/139)
Oct  1 13:09:16 np0005464891 kernel: tapd747029d-70: entered promiscuous mode
Oct  1 13:09:16 np0005464891 nova_compute[259907]: 2025-10-01 17:09:16.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.624 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd747029d-70, col_values=(('external_ids', {'iface-id': '3454e5b0-0c54-4314-89c0-47c1b5603195'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:09:16 np0005464891 nova_compute[259907]: 2025-10-01 17:09:16.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:16 np0005464891 ovn_controller[152409]: 2025-10-01T17:09:16Z|00264|binding|INFO|Releasing lport 3454e5b0-0c54-4314-89c0-47c1b5603195 from this chassis (sb_readonly=0)
Oct  1 13:09:16 np0005464891 nova_compute[259907]: 2025-10-01 17:09:16.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.646 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d747029d-7cd7-4e92-a356-867cacbb54c4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d747029d-7cd7-4e92-a356-867cacbb54c4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.647 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[ce180cbe-8b31-488f-a061-6929c8d7173c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.648 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-d747029d-7cd7-4e92-a356-867cacbb54c4
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/d747029d-7cd7-4e92-a356-867cacbb54c4.pid.haproxy
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID d747029d-7cd7-4e92-a356-867cacbb54c4
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 13:09:16 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:16.649 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4', 'env', 'PROCESS_TAG=haproxy-d747029d-7cd7-4e92-a356-867cacbb54c4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d747029d-7cd7-4e92-a356-867cacbb54c4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 13:09:16 np0005464891 nova_compute[259907]: 2025-10-01 17:09:16.693 2 DEBUG nova.compute.manager [req-412d2ea1-ef90-46ac-8ac2-178151d56004 req-f49d3d23-5599-43a3-9a05-315c10e1766c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Received event network-vif-plugged-2f3b7601-86e2-45bc-9d3d-f75a39660a96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:09:16 np0005464891 nova_compute[259907]: 2025-10-01 17:09:16.694 2 DEBUG oslo_concurrency.lockutils [req-412d2ea1-ef90-46ac-8ac2-178151d56004 req-f49d3d23-5599-43a3-9a05-315c10e1766c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "f58b995c-9c33-443c-9c3c-715eb493032f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:09:16 np0005464891 nova_compute[259907]: 2025-10-01 17:09:16.695 2 DEBUG oslo_concurrency.lockutils [req-412d2ea1-ef90-46ac-8ac2-178151d56004 req-f49d3d23-5599-43a3-9a05-315c10e1766c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "f58b995c-9c33-443c-9c3c-715eb493032f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:09:16 np0005464891 nova_compute[259907]: 2025-10-01 17:09:16.695 2 DEBUG oslo_concurrency.lockutils [req-412d2ea1-ef90-46ac-8ac2-178151d56004 req-f49d3d23-5599-43a3-9a05-315c10e1766c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "f58b995c-9c33-443c-9c3c-715eb493032f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:09:16 np0005464891 nova_compute[259907]: 2025-10-01 17:09:16.696 2 DEBUG nova.compute.manager [req-412d2ea1-ef90-46ac-8ac2-178151d56004 req-f49d3d23-5599-43a3-9a05-315c10e1766c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Processing event network-vif-plugged-2f3b7601-86e2-45bc-9d3d-f75a39660a96 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 13:09:17 np0005464891 podman[310782]: 2025-10-01 17:09:17.114731616 +0000 UTC m=+0.101907545 container create 7901cc2a861526ca0dd5a49b1161c1f232ffe4e4f8463a62584087dae7d3801b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct  1 13:09:17 np0005464891 podman[310782]: 2025-10-01 17:09:17.049295899 +0000 UTC m=+0.036471858 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 13:09:17 np0005464891 systemd[1]: Started libpod-conmon-7901cc2a861526ca0dd5a49b1161c1f232ffe4e4f8463a62584087dae7d3801b.scope.
Oct  1 13:09:17 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:09:17 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5222ce1e87915601b2940a4a581c645b64581844b53d5c307a0926d1d3528111/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 13:09:17 np0005464891 podman[310782]: 2025-10-01 17:09:17.210964914 +0000 UTC m=+0.198140883 container init 7901cc2a861526ca0dd5a49b1161c1f232ffe4e4f8463a62584087dae7d3801b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 13:09:17 np0005464891 podman[310782]: 2025-10-01 17:09:17.217911156 +0000 UTC m=+0.205087065 container start 7901cc2a861526ca0dd5a49b1161c1f232ffe4e4f8463a62584087dae7d3801b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 13:09:17 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[310834]: [NOTICE]   (310838) : New worker (310840) forked
Oct  1 13:09:17 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[310834]: [NOTICE]   (310838) : Loading success.
Oct  1 13:09:17 np0005464891 nova_compute[259907]: 2025-10-01 17:09:17.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2146: 321 pgs: 321 active+clean; 385 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 3.8 MiB/s wr, 10 op/s
Oct  1 13:09:18 np0005464891 nova_compute[259907]: 2025-10-01 17:09:18.830 2 DEBUG nova.compute.manager [req-5de53d26-6364-4f38-b7bf-165398535e4f req-89fb7c21-a9da-4838-82cb-cd9c432b9360 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Received event network-vif-plugged-2f3b7601-86e2-45bc-9d3d-f75a39660a96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:09:18 np0005464891 nova_compute[259907]: 2025-10-01 17:09:18.831 2 DEBUG oslo_concurrency.lockutils [req-5de53d26-6364-4f38-b7bf-165398535e4f req-89fb7c21-a9da-4838-82cb-cd9c432b9360 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "f58b995c-9c33-443c-9c3c-715eb493032f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:09:18 np0005464891 nova_compute[259907]: 2025-10-01 17:09:18.831 2 DEBUG oslo_concurrency.lockutils [req-5de53d26-6364-4f38-b7bf-165398535e4f req-89fb7c21-a9da-4838-82cb-cd9c432b9360 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "f58b995c-9c33-443c-9c3c-715eb493032f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:09:18 np0005464891 nova_compute[259907]: 2025-10-01 17:09:18.831 2 DEBUG oslo_concurrency.lockutils [req-5de53d26-6364-4f38-b7bf-165398535e4f req-89fb7c21-a9da-4838-82cb-cd9c432b9360 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "f58b995c-9c33-443c-9c3c-715eb493032f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:09:18 np0005464891 nova_compute[259907]: 2025-10-01 17:09:18.832 2 DEBUG nova.compute.manager [req-5de53d26-6364-4f38-b7bf-165398535e4f req-89fb7c21-a9da-4838-82cb-cd9c432b9360 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] No waiting events found dispatching network-vif-plugged-2f3b7601-86e2-45bc-9d3d-f75a39660a96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 13:09:18 np0005464891 nova_compute[259907]: 2025-10-01 17:09:18.832 2 WARNING nova.compute.manager [req-5de53d26-6364-4f38-b7bf-165398535e4f req-89fb7c21-a9da-4838-82cb-cd9c432b9360 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Received unexpected event network-vif-plugged-2f3b7601-86e2-45bc-9d3d-f75a39660a96 for instance with vm_state building and task_state spawning.#033[00m
Oct  1 13:09:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:09:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2147: 321 pgs: 321 active+clean; 385 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 342 KiB/s wr, 11 op/s
Oct  1 13:09:19 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:19.638 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:09:19 np0005464891 nova_compute[259907]: 2025-10-01 17:09:19.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.227 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759338560.2274146, f58b995c-9c33-443c-9c3c-715eb493032f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.229 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] VM Started (Lifecycle Event)#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.231 2 DEBUG nova.compute.manager [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.235 2 DEBUG nova.virt.libvirt.driver [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.239 2 INFO nova.virt.libvirt.driver [-] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Instance spawned successfully.#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.240 2 DEBUG nova.virt.libvirt.driver [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.265 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.271 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.276 2 DEBUG nova.virt.libvirt.driver [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.276 2 DEBUG nova.virt.libvirt.driver [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.277 2 DEBUG nova.virt.libvirt.driver [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.277 2 DEBUG nova.virt.libvirt.driver [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.278 2 DEBUG nova.virt.libvirt.driver [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.278 2 DEBUG nova.virt.libvirt.driver [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.307 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.308 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759338560.2284088, f58b995c-9c33-443c-9c3c-715eb493032f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.308 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] VM Paused (Lifecycle Event)#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.344 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.348 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759338560.2343712, f58b995c-9c33-443c-9c3c-715eb493032f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.348 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] VM Resumed (Lifecycle Event)#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.353 2 INFO nova.compute.manager [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Took 8.89 seconds to spawn the instance on the hypervisor.#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.353 2 DEBUG nova.compute.manager [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.363 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.366 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.392 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.419 2 INFO nova.compute.manager [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Took 11.26 seconds to build instance.#033[00m
Oct  1 13:09:20 np0005464891 nova_compute[259907]: 2025-10-01 17:09:20.437 2 DEBUG oslo_concurrency.lockutils [None req-84e01b4b-5888-438e-a124-b94623f1e28e c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "f58b995c-9c33-443c-9c3c-715eb493032f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:09:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2148: 321 pgs: 321 active+clean; 385 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 12 KiB/s wr, 11 op/s
Oct  1 13:09:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 13:09:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:09:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 13:09:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:09:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Oct  1 13:09:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:09:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0047219280092140395 of space, bias 1.0, pg target 1.4165784027642119 quantized to 32 (current 32)
Oct  1 13:09:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:09:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.9013621638340822e-05 quantized to 32 (current 32)
Oct  1 13:09:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:09:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.19918670028325844 quantized to 32 (current 32)
Oct  1 13:09:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:09:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct  1 13:09:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:09:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:09:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:09:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct  1 13:09:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:09:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct  1 13:09:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:09:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:09:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:09:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct  1 13:09:22 np0005464891 nova_compute[259907]: 2025-10-01 17:09:22.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2149: 321 pgs: 321 active+clean; 385 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 12 KiB/s wr, 48 op/s
Oct  1 13:09:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:09:24 np0005464891 nova_compute[259907]: 2025-10-01 17:09:24.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:24 np0005464891 NetworkManager[44940]: <info>  [1759338564.9281] manager: (patch-provnet-ee30e212-e482-494e-8716-bb3c2ef00bd5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Oct  1 13:09:24 np0005464891 NetworkManager[44940]: <info>  [1759338564.9286] manager: (patch-br-int-to-provnet-ee30e212-e482-494e-8716-bb3c2ef00bd5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Oct  1 13:09:24 np0005464891 nova_compute[259907]: 2025-10-01 17:09:24.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:25 np0005464891 nova_compute[259907]: 2025-10-01 17:09:25.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:25 np0005464891 ovn_controller[152409]: 2025-10-01T17:09:25Z|00265|binding|INFO|Releasing lport 3454e5b0-0c54-4314-89c0-47c1b5603195 from this chassis (sb_readonly=0)
Oct  1 13:09:25 np0005464891 nova_compute[259907]: 2025-10-01 17:09:25.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:25 np0005464891 nova_compute[259907]: 2025-10-01 17:09:25.294 2 DEBUG nova.compute.manager [req-60fbb33e-c324-4b2b-b87c-cb66b6d1e202 req-7852964b-71b3-465e-897f-2ec623795b4b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Received event network-changed-2f3b7601-86e2-45bc-9d3d-f75a39660a96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:09:25 np0005464891 nova_compute[259907]: 2025-10-01 17:09:25.294 2 DEBUG nova.compute.manager [req-60fbb33e-c324-4b2b-b87c-cb66b6d1e202 req-7852964b-71b3-465e-897f-2ec623795b4b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Refreshing instance network info cache due to event network-changed-2f3b7601-86e2-45bc-9d3d-f75a39660a96. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 13:09:25 np0005464891 nova_compute[259907]: 2025-10-01 17:09:25.294 2 DEBUG oslo_concurrency.lockutils [req-60fbb33e-c324-4b2b-b87c-cb66b6d1e202 req-7852964b-71b3-465e-897f-2ec623795b4b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-f58b995c-9c33-443c-9c3c-715eb493032f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:09:25 np0005464891 nova_compute[259907]: 2025-10-01 17:09:25.294 2 DEBUG oslo_concurrency.lockutils [req-60fbb33e-c324-4b2b-b87c-cb66b6d1e202 req-7852964b-71b3-465e-897f-2ec623795b4b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-f58b995c-9c33-443c-9c3c-715eb493032f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:09:25 np0005464891 nova_compute[259907]: 2025-10-01 17:09:25.294 2 DEBUG nova.network.neutron [req-60fbb33e-c324-4b2b-b87c-cb66b6d1e202 req-7852964b-71b3-465e-897f-2ec623795b4b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Refreshing network info cache for port 2f3b7601-86e2-45bc-9d3d-f75a39660a96 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 13:09:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2150: 321 pgs: 321 active+clean; 385 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 12 KiB/s wr, 48 op/s
Oct  1 13:09:26 np0005464891 nova_compute[259907]: 2025-10-01 17:09:26.330 2 DEBUG nova.network.neutron [req-60fbb33e-c324-4b2b-b87c-cb66b6d1e202 req-7852964b-71b3-465e-897f-2ec623795b4b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Updated VIF entry in instance network info cache for port 2f3b7601-86e2-45bc-9d3d-f75a39660a96. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 13:09:26 np0005464891 nova_compute[259907]: 2025-10-01 17:09:26.331 2 DEBUG nova.network.neutron [req-60fbb33e-c324-4b2b-b87c-cb66b6d1e202 req-7852964b-71b3-465e-897f-2ec623795b4b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Updating instance_info_cache with network_info: [{"id": "2f3b7601-86e2-45bc-9d3d-f75a39660a96", "address": "fa:16:3e:20:21:b5", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f3b7601-86", "ovs_interfaceid": "2f3b7601-86e2-45bc-9d3d-f75a39660a96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:09:26 np0005464891 nova_compute[259907]: 2025-10-01 17:09:26.363 2 DEBUG oslo_concurrency.lockutils [req-60fbb33e-c324-4b2b-b87c-cb66b6d1e202 req-7852964b-71b3-465e-897f-2ec623795b4b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-f58b995c-9c33-443c-9c3c-715eb493032f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:09:27 np0005464891 nova_compute[259907]: 2025-10-01 17:09:27.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2151: 321 pgs: 321 active+clean; 385 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  1 13:09:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:09:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2152: 321 pgs: 321 active+clean; 385 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  1 13:09:29 np0005464891 nova_compute[259907]: 2025-10-01 17:09:29.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:29 np0005464891 podman[310858]: 2025-10-01 17:09:29.962306855 +0000 UTC m=+0.072961146 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  1 13:09:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2153: 321 pgs: 321 active+clean; 385 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 66 op/s
Oct  1 13:09:32 np0005464891 nova_compute[259907]: 2025-10-01 17:09:32.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:32 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct  1 13:09:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2154: 321 pgs: 321 active+clean; 385 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 81 op/s
Oct  1 13:09:33 np0005464891 ovn_controller[152409]: 2025-10-01T17:09:33Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:20:21:b5 10.100.0.5
Oct  1 13:09:33 np0005464891 ovn_controller[152409]: 2025-10-01T17:09:33Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:20:21:b5 10.100.0.5
Oct  1 13:09:34 np0005464891 podman[310876]: 2025-10-01 17:09:34.020496282 +0000 UTC m=+0.120379926 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller)
Oct  1 13:09:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:09:34 np0005464891 nova_compute[259907]: 2025-10-01 17:09:34.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:34 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct  1 13:09:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2155: 321 pgs: 321 active+clean; 385 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 993 KiB/s rd, 1.2 MiB/s wr, 44 op/s
Oct  1 13:09:36 np0005464891 podman[310900]: 2025-10-01 17:09:36.983470062 +0000 UTC m=+0.075224658 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 13:09:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:09:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/665725288' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:09:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:09:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/665725288' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:09:37 np0005464891 nova_compute[259907]: 2025-10-01 17:09:37.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2156: 321 pgs: 321 active+clean; 411 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.4 MiB/s wr, 74 op/s
Oct  1 13:09:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:09:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2157: 321 pgs: 321 active+clean; 453 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 542 KiB/s rd, 5.8 MiB/s wr, 76 op/s
Oct  1 13:09:39 np0005464891 nova_compute[259907]: 2025-10-01 17:09:39.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:09:39 np0005464891 nova_compute[259907]: 2025-10-01 17:09:39.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:39 np0005464891 podman[310922]: 2025-10-01 17:09:39.975440243 +0000 UTC m=+0.085085620 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.176 2 DEBUG oslo_concurrency.lockutils [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "f58b995c-9c33-443c-9c3c-715eb493032f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.176 2 DEBUG oslo_concurrency.lockutils [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "f58b995c-9c33-443c-9c3c-715eb493032f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.177 2 DEBUG oslo_concurrency.lockutils [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "f58b995c-9c33-443c-9c3c-715eb493032f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.177 2 DEBUG oslo_concurrency.lockutils [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "f58b995c-9c33-443c-9c3c-715eb493032f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.177 2 DEBUG oslo_concurrency.lockutils [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "f58b995c-9c33-443c-9c3c-715eb493032f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.178 2 INFO nova.compute.manager [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Terminating instance#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.180 2 DEBUG nova.compute.manager [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 13:09:41 np0005464891 kernel: tap2f3b7601-86 (unregistering): left promiscuous mode
Oct  1 13:09:41 np0005464891 NetworkManager[44940]: <info>  [1759338581.2583] device (tap2f3b7601-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:41 np0005464891 ovn_controller[152409]: 2025-10-01T17:09:41Z|00266|binding|INFO|Releasing lport 2f3b7601-86e2-45bc-9d3d-f75a39660a96 from this chassis (sb_readonly=0)
Oct  1 13:09:41 np0005464891 ovn_controller[152409]: 2025-10-01T17:09:41Z|00267|binding|INFO|Setting lport 2f3b7601-86e2-45bc-9d3d-f75a39660a96 down in Southbound
Oct  1 13:09:41 np0005464891 ovn_controller[152409]: 2025-10-01T17:09:41Z|00268|binding|INFO|Removing iface tap2f3b7601-86 ovn-installed in OVS
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:41.277 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:21:b5 10.100.0.5'], port_security=['fa:16:3e:20:21:b5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f58b995c-9c33-443c-9c3c-715eb493032f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d747029d-7cd7-4e92-a356-867cacbb54c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7101f2ff48f540a08f6ec15b324152c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd4fc8115-d40c-458e-b5f5-c46a5e06662d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ce41f0a-b119-4c05-a337-15f2fa115c91, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=2f3b7601-86e2-45bc-9d3d-f75a39660a96) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:09:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:41.278 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 2f3b7601-86e2-45bc-9d3d-f75a39660a96 in datapath d747029d-7cd7-4e92-a356-867cacbb54c4 unbound from our chassis#033[00m
Oct  1 13:09:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:41.281 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d747029d-7cd7-4e92-a356-867cacbb54c4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 13:09:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:41.282 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[73b82c98-c027-44b7-b1a8-e3aa90d54baf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:41.282 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4 namespace which is not needed anymore#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:41 np0005464891 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Oct  1 13:09:41 np0005464891 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001b.scope: Consumed 17.863s CPU time.
Oct  1 13:09:41 np0005464891 systemd-machined[214891]: Machine qemu-27-instance-0000001b terminated.
Oct  1 13:09:41 np0005464891 kernel: tap2f3b7601-86: entered promiscuous mode
Oct  1 13:09:41 np0005464891 NetworkManager[44940]: <info>  [1759338581.4096] manager: (tap2f3b7601-86): new Tun device (/org/freedesktop/NetworkManager/Devices/142)
Oct  1 13:09:41 np0005464891 ovn_controller[152409]: 2025-10-01T17:09:41Z|00269|binding|INFO|Claiming lport 2f3b7601-86e2-45bc-9d3d-f75a39660a96 for this chassis.
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:41 np0005464891 ovn_controller[152409]: 2025-10-01T17:09:41Z|00270|binding|INFO|2f3b7601-86e2-45bc-9d3d-f75a39660a96: Claiming fa:16:3e:20:21:b5 10.100.0.5
Oct  1 13:09:41 np0005464891 kernel: tap2f3b7601-86 (unregistering): left promiscuous mode
Oct  1 13:09:41 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[310834]: [NOTICE]   (310838) : haproxy version is 2.8.14-c23fe91
Oct  1 13:09:41 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[310834]: [NOTICE]   (310838) : path to executable is /usr/sbin/haproxy
Oct  1 13:09:41 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[310834]: [WARNING]  (310838) : Exiting Master process...
Oct  1 13:09:41 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[310834]: [WARNING]  (310838) : Exiting Master process...
Oct  1 13:09:41 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[310834]: [ALERT]    (310838) : Current worker (310840) exited with code 143 (Terminated)
Oct  1 13:09:41 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[310834]: [WARNING]  (310838) : All workers exited. Exiting... (0)
Oct  1 13:09:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:41.421 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:21:b5 10.100.0.5'], port_security=['fa:16:3e:20:21:b5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f58b995c-9c33-443c-9c3c-715eb493032f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d747029d-7cd7-4e92-a356-867cacbb54c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7101f2ff48f540a08f6ec15b324152c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd4fc8115-d40c-458e-b5f5-c46a5e06662d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ce41f0a-b119-4c05-a337-15f2fa115c91, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=2f3b7601-86e2-45bc-9d3d-f75a39660a96) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:09:41 np0005464891 systemd[1]: libpod-7901cc2a861526ca0dd5a49b1161c1f232ffe4e4f8463a62584087dae7d3801b.scope: Deactivated successfully.
Oct  1 13:09:41 np0005464891 podman[310966]: 2025-10-01 17:09:41.428665519 +0000 UTC m=+0.050489785 container died 7901cc2a861526ca0dd5a49b1161c1f232ffe4e4f8463a62584087dae7d3801b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  1 13:09:41 np0005464891 ovn_controller[152409]: 2025-10-01T17:09:41Z|00271|binding|INFO|Setting lport 2f3b7601-86e2-45bc-9d3d-f75a39660a96 ovn-installed in OVS
Oct  1 13:09:41 np0005464891 ovn_controller[152409]: 2025-10-01T17:09:41Z|00272|binding|INFO|Setting lport 2f3b7601-86e2-45bc-9d3d-f75a39660a96 up in Southbound
Oct  1 13:09:41 np0005464891 ovn_controller[152409]: 2025-10-01T17:09:41Z|00273|binding|INFO|Releasing lport 2f3b7601-86e2-45bc-9d3d-f75a39660a96 from this chassis (sb_readonly=1)
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:41 np0005464891 ovn_controller[152409]: 2025-10-01T17:09:41Z|00274|if_status|INFO|Dropped 2 log messages in last 277 seconds (most recently, 277 seconds ago) due to excessive rate
Oct  1 13:09:41 np0005464891 ovn_controller[152409]: 2025-10-01T17:09:41Z|00275|if_status|INFO|Not setting lport 2f3b7601-86e2-45bc-9d3d-f75a39660a96 down as sb is readonly
Oct  1 13:09:41 np0005464891 ovn_controller[152409]: 2025-10-01T17:09:41Z|00276|binding|INFO|Removing iface tap2f3b7601-86 ovn-installed in OVS
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.443 2 INFO nova.virt.libvirt.driver [-] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Instance destroyed successfully.#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.443 2 DEBUG nova.objects.instance [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lazy-loading 'resources' on Instance uuid f58b995c-9c33-443c-9c3c-715eb493032f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:41 np0005464891 ovn_controller[152409]: 2025-10-01T17:09:41Z|00277|binding|INFO|Releasing lport 2f3b7601-86e2-45bc-9d3d-f75a39660a96 from this chassis (sb_readonly=0)
Oct  1 13:09:41 np0005464891 ovn_controller[152409]: 2025-10-01T17:09:41Z|00278|binding|INFO|Setting lport 2f3b7601-86e2-45bc-9d3d-f75a39660a96 down in Southbound
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:41.468 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:21:b5 10.100.0.5'], port_security=['fa:16:3e:20:21:b5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f58b995c-9c33-443c-9c3c-715eb493032f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d747029d-7cd7-4e92-a356-867cacbb54c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7101f2ff48f540a08f6ec15b324152c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd4fc8115-d40c-458e-b5f5-c46a5e06662d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ce41f0a-b119-4c05-a337-15f2fa115c91, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=2f3b7601-86e2-45bc-9d3d-f75a39660a96) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:09:41 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7901cc2a861526ca0dd5a49b1161c1f232ffe4e4f8463a62584087dae7d3801b-userdata-shm.mount: Deactivated successfully.
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.479 2 DEBUG nova.virt.libvirt.vif [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T17:09:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1723789885',display_name='tempest-TransferEncryptedVolumeTest-server-1723789885',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1723789885',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDg5DQ7zjtFPcwFlQ5c4RmSqdiymCwHuuIH20+rbjv/O1v35DyytGl6//xUvotUS7Kzw36qLhLq5I09Wysu4SY0CkP602jXi/K2rnz98jI+qtsF54Xtpb6f0pP7J8Fn4NQ==',key_name='tempest-TransferEncryptedVolumeTest-797389342',keypairs=<?>,launch_index=0,launched_at=2025-10-01T17:09:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7101f2ff48f540a08f6ec15b324152c6',ramdisk_id='',reservation_id='r-gmml47cn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1550217158',owner_user_name='tempest-TransferEncryptedVolumeTest-1550217158-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T17:09:20Z,user_data=None,user_id='c440275c1a1e4cf09fcf789374345bb2',uuid=f58b995c-9c33-443c-9c3c-715eb493032f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f3b7601-86e2-45bc-9d3d-f75a39660a96", "address": "fa:16:3e:20:21:b5", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f3b7601-86", "ovs_interfaceid": "2f3b7601-86e2-45bc-9d3d-f75a39660a96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.480 2 DEBUG nova.network.os_vif_util [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Converting VIF {"id": "2f3b7601-86e2-45bc-9d3d-f75a39660a96", "address": "fa:16:3e:20:21:b5", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f3b7601-86", "ovs_interfaceid": "2f3b7601-86e2-45bc-9d3d-f75a39660a96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 13:09:41 np0005464891 systemd[1]: var-lib-containers-storage-overlay-5222ce1e87915601b2940a4a581c645b64581844b53d5c307a0926d1d3528111-merged.mount: Deactivated successfully.
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.482 2 DEBUG nova.network.os_vif_util [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:20:21:b5,bridge_name='br-int',has_traffic_filtering=True,id=2f3b7601-86e2-45bc-9d3d-f75a39660a96,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f3b7601-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.482 2 DEBUG os_vif [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:21:b5,bridge_name='br-int',has_traffic_filtering=True,id=2f3b7601-86e2-45bc-9d3d-f75a39660a96,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f3b7601-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.485 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f3b7601-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.491 2 INFO os_vif [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:21:b5,bridge_name='br-int',has_traffic_filtering=True,id=2f3b7601-86e2-45bc-9d3d-f75a39660a96,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f3b7601-86')#033[00m
Oct  1 13:09:41 np0005464891 podman[310966]: 2025-10-01 17:09:41.492801211 +0000 UTC m=+0.114625467 container cleanup 7901cc2a861526ca0dd5a49b1161c1f232ffe4e4f8463a62584087dae7d3801b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 13:09:41 np0005464891 systemd[1]: libpod-conmon-7901cc2a861526ca0dd5a49b1161c1f232ffe4e4f8463a62584087dae7d3801b.scope: Deactivated successfully.
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.572 2 DEBUG nova.compute.manager [req-9f045146-65ec-46fe-a507-e50636214624 req-ab771006-390f-4207-8085-3a2983123f88 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Received event network-vif-unplugged-2f3b7601-86e2-45bc-9d3d-f75a39660a96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.573 2 DEBUG oslo_concurrency.lockutils [req-9f045146-65ec-46fe-a507-e50636214624 req-ab771006-390f-4207-8085-3a2983123f88 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "f58b995c-9c33-443c-9c3c-715eb493032f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.573 2 DEBUG oslo_concurrency.lockutils [req-9f045146-65ec-46fe-a507-e50636214624 req-ab771006-390f-4207-8085-3a2983123f88 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "f58b995c-9c33-443c-9c3c-715eb493032f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.574 2 DEBUG oslo_concurrency.lockutils [req-9f045146-65ec-46fe-a507-e50636214624 req-ab771006-390f-4207-8085-3a2983123f88 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "f58b995c-9c33-443c-9c3c-715eb493032f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.574 2 DEBUG nova.compute.manager [req-9f045146-65ec-46fe-a507-e50636214624 req-ab771006-390f-4207-8085-3a2983123f88 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] No waiting events found dispatching network-vif-unplugged-2f3b7601-86e2-45bc-9d3d-f75a39660a96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.574 2 DEBUG nova.compute.manager [req-9f045146-65ec-46fe-a507-e50636214624 req-ab771006-390f-4207-8085-3a2983123f88 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Received event network-vif-unplugged-2f3b7601-86e2-45bc-9d3d-f75a39660a96 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 13:09:41 np0005464891 podman[311007]: 2025-10-01 17:09:41.583407713 +0000 UTC m=+0.060038319 container remove 7901cc2a861526ca0dd5a49b1161c1f232ffe4e4f8463a62584087dae7d3801b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  1 13:09:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2158: 321 pgs: 321 active+clean; 453 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 545 KiB/s rd, 5.8 MiB/s wr, 77 op/s
Oct  1 13:09:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:41.591 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[f1157882-8e46-4c48-a712-13e7d196a6e5]: (4, ('Wed Oct  1 05:09:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4 (7901cc2a861526ca0dd5a49b1161c1f232ffe4e4f8463a62584087dae7d3801b)\n7901cc2a861526ca0dd5a49b1161c1f232ffe4e4f8463a62584087dae7d3801b\nWed Oct  1 05:09:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4 (7901cc2a861526ca0dd5a49b1161c1f232ffe4e4f8463a62584087dae7d3801b)\n7901cc2a861526ca0dd5a49b1161c1f232ffe4e4f8463a62584087dae7d3801b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:41.593 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[9a8769e2-c47e-4cba-81c2-bdead258af8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:41.594 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd747029d-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:41 np0005464891 kernel: tapd747029d-70: left promiscuous mode
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:41.666 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4a4624-c4d9-4853-972b-4d5f05b82bf2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:41.688 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[91b697a3-66be-455b-a5b2-9605fe578e3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:41.690 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[654d9a76-44da-445a-89d0-a99047a101d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:41.708 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[122d93dc-8c01-49d6-9b6e-e474f0086fa3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550491, 'reachable_time': 33473, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311040, 'error': None, 'target': 'ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:41 np0005464891 systemd[1]: run-netns-ovnmeta\x2dd747029d\x2d7cd7\x2d4e92\x2da356\x2d867cacbb54c4.mount: Deactivated successfully.
Oct  1 13:09:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:41.712 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 13:09:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:41.712 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0d7e94-3178-408a-8a0d-8dd842e29af8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:41.713 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 2f3b7601-86e2-45bc-9d3d-f75a39660a96 in datapath d747029d-7cd7-4e92-a356-867cacbb54c4 unbound from our chassis#033[00m
Oct  1 13:09:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:41.714 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d747029d-7cd7-4e92-a356-867cacbb54c4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 13:09:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:41.714 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[2f247af3-1419-47a3-9b32-c9f53b2928ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:41.715 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 2f3b7601-86e2-45bc-9d3d-f75a39660a96 in datapath d747029d-7cd7-4e92-a356-867cacbb54c4 unbound from our chassis#033[00m
Oct  1 13:09:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:41.716 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d747029d-7cd7-4e92-a356-867cacbb54c4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 13:09:41 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:09:41.716 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[73a2c80d-f238-4f1a-b3b4-0a5dca7a88f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.730 2 INFO nova.virt.libvirt.driver [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Deleting instance files /var/lib/nova/instances/f58b995c-9c33-443c-9c3c-715eb493032f_del#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.731 2 INFO nova.virt.libvirt.driver [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Deletion of /var/lib/nova/instances/f58b995c-9c33-443c-9c3c-715eb493032f_del complete#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.797 2 INFO nova.compute.manager [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Took 0.62 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.798 2 DEBUG oslo.service.loopingcall [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.798 2 DEBUG nova.compute.manager [-] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 13:09:41 np0005464891 nova_compute[259907]: 2025-10-01 17:09:41.798 2 DEBUG nova.network.neutron [-] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 13:09:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:09:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:09:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:09:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:09:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:09:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:09:42 np0005464891 nova_compute[259907]: 2025-10-01 17:09:42.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2159: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 554 KiB/s rd, 5.8 MiB/s wr, 90 op/s
Oct  1 13:09:43 np0005464891 nova_compute[259907]: 2025-10-01 17:09:43.682 2 DEBUG nova.compute.manager [req-8d45913b-d2f6-4b1d-bd8b-2c3cac944344 req-19ca9eb4-9c5b-448a-89ff-2c3d2ed44f52 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Received event network-vif-plugged-2f3b7601-86e2-45bc-9d3d-f75a39660a96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:09:43 np0005464891 nova_compute[259907]: 2025-10-01 17:09:43.683 2 DEBUG oslo_concurrency.lockutils [req-8d45913b-d2f6-4b1d-bd8b-2c3cac944344 req-19ca9eb4-9c5b-448a-89ff-2c3d2ed44f52 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "f58b995c-9c33-443c-9c3c-715eb493032f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:09:43 np0005464891 nova_compute[259907]: 2025-10-01 17:09:43.683 2 DEBUG oslo_concurrency.lockutils [req-8d45913b-d2f6-4b1d-bd8b-2c3cac944344 req-19ca9eb4-9c5b-448a-89ff-2c3d2ed44f52 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "f58b995c-9c33-443c-9c3c-715eb493032f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:09:43 np0005464891 nova_compute[259907]: 2025-10-01 17:09:43.684 2 DEBUG oslo_concurrency.lockutils [req-8d45913b-d2f6-4b1d-bd8b-2c3cac944344 req-19ca9eb4-9c5b-448a-89ff-2c3d2ed44f52 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "f58b995c-9c33-443c-9c3c-715eb493032f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:09:43 np0005464891 nova_compute[259907]: 2025-10-01 17:09:43.684 2 DEBUG nova.compute.manager [req-8d45913b-d2f6-4b1d-bd8b-2c3cac944344 req-19ca9eb4-9c5b-448a-89ff-2c3d2ed44f52 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] No waiting events found dispatching network-vif-plugged-2f3b7601-86e2-45bc-9d3d-f75a39660a96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 13:09:43 np0005464891 nova_compute[259907]: 2025-10-01 17:09:43.685 2 WARNING nova.compute.manager [req-8d45913b-d2f6-4b1d-bd8b-2c3cac944344 req-19ca9eb4-9c5b-448a-89ff-2c3d2ed44f52 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Received unexpected event network-vif-plugged-2f3b7601-86e2-45bc-9d3d-f75a39660a96 for instance with vm_state active and task_state deleting.#033[00m
Oct  1 13:09:44 np0005464891 nova_compute[259907]: 2025-10-01 17:09:44.134 2 DEBUG nova.network.neutron [-] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:09:44 np0005464891 nova_compute[259907]: 2025-10-01 17:09:44.156 2 INFO nova.compute.manager [-] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Took 2.36 seconds to deallocate network for instance.#033[00m
Oct  1 13:09:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:09:44 np0005464891 nova_compute[259907]: 2025-10-01 17:09:44.319 2 INFO nova.compute.manager [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Took 0.16 seconds to detach 1 volumes for instance.#033[00m
Oct  1 13:09:44 np0005464891 nova_compute[259907]: 2025-10-01 17:09:44.379 2 DEBUG oslo_concurrency.lockutils [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:09:44 np0005464891 nova_compute[259907]: 2025-10-01 17:09:44.379 2 DEBUG oslo_concurrency.lockutils [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:09:44 np0005464891 nova_compute[259907]: 2025-10-01 17:09:44.438 2 DEBUG oslo_concurrency.processutils [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:09:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:09:44 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1898937383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:09:44 np0005464891 nova_compute[259907]: 2025-10-01 17:09:44.859 2 DEBUG oslo_concurrency.processutils [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:09:44 np0005464891 nova_compute[259907]: 2025-10-01 17:09:44.864 2 DEBUG nova.compute.provider_tree [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:09:44 np0005464891 nova_compute[259907]: 2025-10-01 17:09:44.883 2 DEBUG nova.scheduler.client.report [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:09:44 np0005464891 nova_compute[259907]: 2025-10-01 17:09:44.913 2 DEBUG oslo_concurrency.lockutils [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:09:44 np0005464891 nova_compute[259907]: 2025-10-01 17:09:44.944 2 INFO nova.scheduler.client.report [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Deleted allocations for instance f58b995c-9c33-443c-9c3c-715eb493032f#033[00m
Oct  1 13:09:45 np0005464891 nova_compute[259907]: 2025-10-01 17:09:45.046 2 DEBUG oslo_concurrency.lockutils [None req-25bc68c6-a0e5-4f47-8153-40bfd28245ce c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "f58b995c-9c33-443c-9c3c-715eb493032f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:09:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2160: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 4.6 MiB/s wr, 71 op/s
Oct  1 13:09:45 np0005464891 nova_compute[259907]: 2025-10-01 17:09:45.781 2 DEBUG nova.compute.manager [req-b4e6fd33-2542-4960-b06d-1b663985ee22 req-a616fa30-6657-4483-b5af-c8e50165733b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Received event network-vif-plugged-2f3b7601-86e2-45bc-9d3d-f75a39660a96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:09:45 np0005464891 nova_compute[259907]: 2025-10-01 17:09:45.782 2 DEBUG oslo_concurrency.lockutils [req-b4e6fd33-2542-4960-b06d-1b663985ee22 req-a616fa30-6657-4483-b5af-c8e50165733b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "f58b995c-9c33-443c-9c3c-715eb493032f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:09:45 np0005464891 nova_compute[259907]: 2025-10-01 17:09:45.782 2 DEBUG oslo_concurrency.lockutils [req-b4e6fd33-2542-4960-b06d-1b663985ee22 req-a616fa30-6657-4483-b5af-c8e50165733b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "f58b995c-9c33-443c-9c3c-715eb493032f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:09:45 np0005464891 nova_compute[259907]: 2025-10-01 17:09:45.782 2 DEBUG oslo_concurrency.lockutils [req-b4e6fd33-2542-4960-b06d-1b663985ee22 req-a616fa30-6657-4483-b5af-c8e50165733b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "f58b995c-9c33-443c-9c3c-715eb493032f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:09:45 np0005464891 nova_compute[259907]: 2025-10-01 17:09:45.782 2 DEBUG nova.compute.manager [req-b4e6fd33-2542-4960-b06d-1b663985ee22 req-a616fa30-6657-4483-b5af-c8e50165733b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] No waiting events found dispatching network-vif-plugged-2f3b7601-86e2-45bc-9d3d-f75a39660a96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 13:09:45 np0005464891 nova_compute[259907]: 2025-10-01 17:09:45.782 2 WARNING nova.compute.manager [req-b4e6fd33-2542-4960-b06d-1b663985ee22 req-a616fa30-6657-4483-b5af-c8e50165733b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Received unexpected event network-vif-plugged-2f3b7601-86e2-45bc-9d3d-f75a39660a96 for instance with vm_state deleted and task_state None.#033[00m
Oct  1 13:09:45 np0005464891 nova_compute[259907]: 2025-10-01 17:09:45.783 2 DEBUG nova.compute.manager [req-b4e6fd33-2542-4960-b06d-1b663985ee22 req-a616fa30-6657-4483-b5af-c8e50165733b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Received event network-vif-deleted-2f3b7601-86e2-45bc-9d3d-f75a39660a96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:09:45 np0005464891 nova_compute[259907]: 2025-10-01 17:09:45.803 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:09:45 np0005464891 nova_compute[259907]: 2025-10-01 17:09:45.804 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 13:09:45 np0005464891 nova_compute[259907]: 2025-10-01 17:09:45.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:09:45 np0005464891 nova_compute[259907]: 2025-10-01 17:09:45.837 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:09:45 np0005464891 nova_compute[259907]: 2025-10-01 17:09:45.837 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:09:45 np0005464891 nova_compute[259907]: 2025-10-01 17:09:45.838 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:09:45 np0005464891 nova_compute[259907]: 2025-10-01 17:09:45.838 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 13:09:45 np0005464891 nova_compute[259907]: 2025-10-01 17:09:45.839 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:09:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:09:46 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4065681131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:09:46 np0005464891 nova_compute[259907]: 2025-10-01 17:09:46.333 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:09:46 np0005464891 nova_compute[259907]: 2025-10-01 17:09:46.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:46 np0005464891 nova_compute[259907]: 2025-10-01 17:09:46.529 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 13:09:46 np0005464891 nova_compute[259907]: 2025-10-01 17:09:46.531 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4342MB free_disk=59.98814010620117GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 13:09:46 np0005464891 nova_compute[259907]: 2025-10-01 17:09:46.532 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:09:46 np0005464891 nova_compute[259907]: 2025-10-01 17:09:46.532 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:09:46 np0005464891 nova_compute[259907]: 2025-10-01 17:09:46.615 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 13:09:46 np0005464891 nova_compute[259907]: 2025-10-01 17:09:46.616 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 13:09:46 np0005464891 nova_compute[259907]: 2025-10-01 17:09:46.634 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:09:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:09:47 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1839575962' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:09:47 np0005464891 nova_compute[259907]: 2025-10-01 17:09:47.179 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:09:47 np0005464891 nova_compute[259907]: 2025-10-01 17:09:47.186 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:09:47 np0005464891 nova_compute[259907]: 2025-10-01 17:09:47.216 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:09:47 np0005464891 nova_compute[259907]: 2025-10-01 17:09:47.240 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 13:09:47 np0005464891 nova_compute[259907]: 2025-10-01 17:09:47.241 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:09:47 np0005464891 nova_compute[259907]: 2025-10-01 17:09:47.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2161: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 4.6 MiB/s wr, 72 op/s
Oct  1 13:09:48 np0005464891 nova_compute[259907]: 2025-10-01 17:09:48.242 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:09:48 np0005464891 nova_compute[259907]: 2025-10-01 17:09:48.243 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 13:09:48 np0005464891 nova_compute[259907]: 2025-10-01 17:09:48.243 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 13:09:48 np0005464891 nova_compute[259907]: 2025-10-01 17:09:48.262 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 13:09:48 np0005464891 nova_compute[259907]: 2025-10-01 17:09:48.263 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:09:48 np0005464891 nova_compute[259907]: 2025-10-01 17:09:48.820 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:09:48 np0005464891 nova_compute[259907]: 2025-10-01 17:09:48.820 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:09:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:09:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2162: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 2.4 MiB/s wr, 42 op/s
Oct  1 13:09:49 np0005464891 nova_compute[259907]: 2025-10-01 17:09:49.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:09:50 np0005464891 nova_compute[259907]: 2025-10-01 17:09:50.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:09:51 np0005464891 nova_compute[259907]: 2025-10-01 17:09:51.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2163: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 15 KiB/s wr, 15 op/s
Oct  1 13:09:52 np0005464891 nova_compute[259907]: 2025-10-01 17:09:52.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:52 np0005464891 nova_compute[259907]: 2025-10-01 17:09:52.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:09:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2164: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 14 KiB/s wr, 14 op/s
Oct  1 13:09:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:09:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2165: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 597 B/s rd, 0 B/s wr, 0 op/s
Oct  1 13:09:56 np0005464891 nova_compute[259907]: 2025-10-01 17:09:56.438 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759338581.4369938, f58b995c-9c33-443c-9c3c-715eb493032f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 13:09:56 np0005464891 nova_compute[259907]: 2025-10-01 17:09:56.439 2 INFO nova.compute.manager [-] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] VM Stopped (Lifecycle Event)#033[00m
Oct  1 13:09:56 np0005464891 nova_compute[259907]: 2025-10-01 17:09:56.458 2 DEBUG nova.compute.manager [None req-8a6565c6-7acf-4200-8546-d2ed9e8ff9f0 - - - - - -] [instance: f58b995c-9c33-443c-9c3c-715eb493032f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 13:09:56 np0005464891 nova_compute[259907]: 2025-10-01 17:09:56.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:56 np0005464891 nova_compute[259907]: 2025-10-01 17:09:56.801 2 DEBUG oslo_concurrency.lockutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "e655026a-cec2-4cc0-97ae-6bde056da6fb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:09:56 np0005464891 nova_compute[259907]: 2025-10-01 17:09:56.801 2 DEBUG oslo_concurrency.lockutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "e655026a-cec2-4cc0-97ae-6bde056da6fb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:09:56 np0005464891 nova_compute[259907]: 2025-10-01 17:09:56.819 2 DEBUG nova.compute.manager [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 13:09:56 np0005464891 nova_compute[259907]: 2025-10-01 17:09:56.890 2 DEBUG oslo_concurrency.lockutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:09:56 np0005464891 nova_compute[259907]: 2025-10-01 17:09:56.891 2 DEBUG oslo_concurrency.lockutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:09:56 np0005464891 nova_compute[259907]: 2025-10-01 17:09:56.896 2 DEBUG nova.virt.hardware [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 13:09:56 np0005464891 nova_compute[259907]: 2025-10-01 17:09:56.897 2 INFO nova.compute.claims [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.003 2 DEBUG oslo_concurrency.processutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:09:57 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:09:57 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3696811377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.440 2 DEBUG oslo_concurrency.processutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.447 2 DEBUG nova.compute.provider_tree [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.465 2 DEBUG nova.scheduler.client.report [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.492 2 DEBUG oslo_concurrency.lockutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.493 2 DEBUG nova.compute.manager [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.545 2 DEBUG nova.compute.manager [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.545 2 DEBUG nova.network.neutron [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.568 2 INFO nova.virt.libvirt.driver [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.588 2 DEBUG nova.compute.manager [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 13:09:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2166: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 597 B/s rd, 0 B/s wr, 0 op/s
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.633 2 INFO nova.virt.block_device [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Booting with volume 716796d4-34be-42fb-b848-e2b478eb2841 at /dev/vda#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.778 2 DEBUG os_brick.utils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.780 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.794 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.795 741 DEBUG oslo.privsep.daemon [-] privsep: reply[e90fc849-7b26-4075-b2f7-5c5dbb748794]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.796 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.805 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.806 741 DEBUG oslo.privsep.daemon [-] privsep: reply[30c967db-b773-422e-80b2-d0456d0e824b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.807 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.822 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.822 741 DEBUG oslo.privsep.daemon [-] privsep: reply[fa7f7721-7b6d-415d-82dc-2fee6fc2605e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.823 741 DEBUG oslo.privsep.daemon [-] privsep: reply[82a12d4c-1604-4034-ada0-f6f45ff78253]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.824 2 DEBUG oslo_concurrency.processutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.862 2 DEBUG oslo_concurrency.processutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CMD "nvme version" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.866 2 DEBUG os_brick.initiator.connectors.lightos [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.866 2 DEBUG os_brick.initiator.connectors.lightos [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.867 2 DEBUG os_brick.initiator.connectors.lightos [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.867 2 DEBUG os_brick.utils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] <== get_connector_properties: return (87ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 13:09:57 np0005464891 nova_compute[259907]: 2025-10-01 17:09:57.868 2 DEBUG nova.virt.block_device [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Updating existing volume attachment record: adfb97f8-1e6f-4da8-a402-0a86459cdb3a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 13:09:58 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:09:58 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/281941615' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:09:58 np0005464891 nova_compute[259907]: 2025-10-01 17:09:58.580 2 DEBUG nova.policy [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c440275c1a1e4cf09fcf789374345bb2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7101f2ff48f540a08f6ec15b324152c6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 13:09:59 np0005464891 nova_compute[259907]: 2025-10-01 17:09:59.052 2 DEBUG nova.compute.manager [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 13:09:59 np0005464891 nova_compute[259907]: 2025-10-01 17:09:59.054 2 DEBUG nova.virt.libvirt.driver [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 13:09:59 np0005464891 nova_compute[259907]: 2025-10-01 17:09:59.055 2 INFO nova.virt.libvirt.driver [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Creating image(s)#033[00m
Oct  1 13:09:59 np0005464891 nova_compute[259907]: 2025-10-01 17:09:59.056 2 DEBUG nova.virt.libvirt.driver [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  1 13:09:59 np0005464891 nova_compute[259907]: 2025-10-01 17:09:59.056 2 DEBUG nova.virt.libvirt.driver [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Ensure instance console log exists: /var/lib/nova/instances/e655026a-cec2-4cc0-97ae-6bde056da6fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 13:09:59 np0005464891 nova_compute[259907]: 2025-10-01 17:09:59.057 2 DEBUG oslo_concurrency.lockutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:09:59 np0005464891 nova_compute[259907]: 2025-10-01 17:09:59.058 2 DEBUG oslo_concurrency.lockutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:09:59 np0005464891 nova_compute[259907]: 2025-10-01 17:09:59.059 2 DEBUG oslo_concurrency.lockutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:09:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:09:59 np0005464891 nova_compute[259907]: 2025-10-01 17:09:59.206 2 DEBUG nova.network.neutron [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Successfully created port: 7a26e0a1-31a3-4972-ae7d-6df86b28214c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 13:09:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2167: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:10:00 np0005464891 nova_compute[259907]: 2025-10-01 17:10:00.622 2 DEBUG nova.network.neutron [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Successfully updated port: 7a26e0a1-31a3-4972-ae7d-6df86b28214c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 13:10:00 np0005464891 nova_compute[259907]: 2025-10-01 17:10:00.660 2 DEBUG oslo_concurrency.lockutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "refresh_cache-e655026a-cec2-4cc0-97ae-6bde056da6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:10:00 np0005464891 nova_compute[259907]: 2025-10-01 17:10:00.660 2 DEBUG oslo_concurrency.lockutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquired lock "refresh_cache-e655026a-cec2-4cc0-97ae-6bde056da6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:10:00 np0005464891 nova_compute[259907]: 2025-10-01 17:10:00.660 2 DEBUG nova.network.neutron [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 13:10:00 np0005464891 nova_compute[259907]: 2025-10-01 17:10:00.742 2 DEBUG nova.compute.manager [req-af2ddc66-a6df-46ac-a726-cb28735e8115 req-2d2496b7-e829-41ae-861c-60ed56a0e2b9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Received event network-changed-7a26e0a1-31a3-4972-ae7d-6df86b28214c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:10:00 np0005464891 nova_compute[259907]: 2025-10-01 17:10:00.743 2 DEBUG nova.compute.manager [req-af2ddc66-a6df-46ac-a726-cb28735e8115 req-2d2496b7-e829-41ae-861c-60ed56a0e2b9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Refreshing instance network info cache due to event network-changed-7a26e0a1-31a3-4972-ae7d-6df86b28214c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 13:10:00 np0005464891 nova_compute[259907]: 2025-10-01 17:10:00.743 2 DEBUG oslo_concurrency.lockutils [req-af2ddc66-a6df-46ac-a726-cb28735e8115 req-2d2496b7-e829-41ae-861c-60ed56a0e2b9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-e655026a-cec2-4cc0-97ae-6bde056da6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:10:00 np0005464891 nova_compute[259907]: 2025-10-01 17:10:00.805 2 DEBUG nova.network.neutron [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 13:10:00 np0005464891 podman[311138]: 2025-10-01 17:10:00.984631029 +0000 UTC m=+0.084449804 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.537 2 DEBUG nova.network.neutron [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Updating instance_info_cache with network_info: [{"id": "7a26e0a1-31a3-4972-ae7d-6df86b28214c", "address": "fa:16:3e:7e:22:df", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a26e0a1-31", "ovs_interfaceid": "7a26e0a1-31a3-4972-ae7d-6df86b28214c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.554 2 DEBUG oslo_concurrency.lockutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Releasing lock "refresh_cache-e655026a-cec2-4cc0-97ae-6bde056da6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.554 2 DEBUG nova.compute.manager [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Instance network_info: |[{"id": "7a26e0a1-31a3-4972-ae7d-6df86b28214c", "address": "fa:16:3e:7e:22:df", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a26e0a1-31", "ovs_interfaceid": "7a26e0a1-31a3-4972-ae7d-6df86b28214c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.555 2 DEBUG oslo_concurrency.lockutils [req-af2ddc66-a6df-46ac-a726-cb28735e8115 req-2d2496b7-e829-41ae-861c-60ed56a0e2b9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-e655026a-cec2-4cc0-97ae-6bde056da6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.555 2 DEBUG nova.network.neutron [req-af2ddc66-a6df-46ac-a726-cb28735e8115 req-2d2496b7-e829-41ae-861c-60ed56a0e2b9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Refreshing network info cache for port 7a26e0a1-31a3-4972-ae7d-6df86b28214c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.558 2 DEBUG nova.virt.libvirt.driver [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Start _get_guest_xml network_info=[{"id": "7a26e0a1-31a3-4972-ae7d-6df86b28214c", "address": "fa:16:3e:7e:22:df", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a26e0a1-31", "ovs_interfaceid": "7a26e0a1-31a3-4972-ae7d-6df86b28214c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'mount_device': '/dev/vda', 'attachment_id': 'adfb97f8-1e6f-4da8-a402-0a86459cdb3a', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-716796d4-34be-42fb-b848-e2b478eb2841', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '716796d4-34be-42fb-b848-e2b478eb2841', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'e655026a-cec2-4cc0-97ae-6bde056da6fb', 'attached_at': '', 'detached_at': '', 'volume_id': '716796d4-34be-42fb-b848-e2b478eb2841', 'serial': '716796d4-34be-42fb-b848-e2b478eb2841'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.565 2 WARNING nova.virt.libvirt.driver [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.578 2 DEBUG nova.virt.libvirt.host [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.579 2 DEBUG nova.virt.libvirt.host [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.583 2 DEBUG nova.virt.libvirt.host [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.584 2 DEBUG nova.virt.libvirt.host [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.585 2 DEBUG nova.virt.libvirt.driver [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.585 2 DEBUG nova.virt.hardware [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.585 2 DEBUG nova.virt.hardware [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.586 2 DEBUG nova.virt.hardware [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.586 2 DEBUG nova.virt.hardware [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.586 2 DEBUG nova.virt.hardware [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.587 2 DEBUG nova.virt.hardware [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.587 2 DEBUG nova.virt.hardware [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.587 2 DEBUG nova.virt.hardware [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.587 2 DEBUG nova.virt.hardware [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.588 2 DEBUG nova.virt.hardware [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.588 2 DEBUG nova.virt.hardware [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 13:10:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2168: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.613 2 DEBUG nova.storage.rbd_utils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] rbd image e655026a-cec2-4cc0-97ae-6bde056da6fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:10:01 np0005464891 nova_compute[259907]: 2025-10-01 17:10:01.617 2 DEBUG oslo_concurrency.processutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:10:02 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:10:02 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/652279667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.046 2 DEBUG oslo_concurrency.processutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.169 2 DEBUG os_brick.encryptors [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Using volume encryption metadata '{'encryption_key_id': '0db56896-25ac-4dcd-be52-abdab32cda6e', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-716796d4-34be-42fb-b848-e2b478eb2841', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '716796d4-34be-42fb-b848-e2b478eb2841', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'e655026a-cec2-4cc0-97ae-6bde056da6fb', 'attached_at': '', 'detached_at': '', 'volume_id': '716796d4-34be-42fb-b848-e2b478eb2841', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.173 2 DEBUG barbicanclient.client [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.192 2 DEBUG barbicanclient.v1.secrets [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/0db56896-25ac-4dcd-be52-abdab32cda6e get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.193 2 INFO barbicanclient.base [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/0db56896-25ac-4dcd-be52-abdab32cda6e#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.219 2 DEBUG barbicanclient.client [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.220 2 INFO barbicanclient.base [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/0db56896-25ac-4dcd-be52-abdab32cda6e#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.242 2 DEBUG barbicanclient.client [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.242 2 INFO barbicanclient.base [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/0db56896-25ac-4dcd-be52-abdab32cda6e#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.262 2 DEBUG barbicanclient.client [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.262 2 INFO barbicanclient.base [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/0db56896-25ac-4dcd-be52-abdab32cda6e#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.286 2 DEBUG barbicanclient.client [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.286 2 INFO barbicanclient.base [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/0db56896-25ac-4dcd-be52-abdab32cda6e#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.310 2 DEBUG barbicanclient.client [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.311 2 INFO barbicanclient.base [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/0db56896-25ac-4dcd-be52-abdab32cda6e#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.334 2 DEBUG barbicanclient.client [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.335 2 INFO barbicanclient.base [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/0db56896-25ac-4dcd-be52-abdab32cda6e#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.355 2 DEBUG barbicanclient.client [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.356 2 INFO barbicanclient.base [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/0db56896-25ac-4dcd-be52-abdab32cda6e#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.457 2 DEBUG barbicanclient.client [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.458 2 INFO barbicanclient.base [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/0db56896-25ac-4dcd-be52-abdab32cda6e#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.491 2 DEBUG barbicanclient.client [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.492 2 INFO barbicanclient.base [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/0db56896-25ac-4dcd-be52-abdab32cda6e#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.515 2 DEBUG barbicanclient.client [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.516 2 INFO barbicanclient.base [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/0db56896-25ac-4dcd-be52-abdab32cda6e#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.539 2 DEBUG barbicanclient.client [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.539 2 INFO barbicanclient.base [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/0db56896-25ac-4dcd-be52-abdab32cda6e#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.563 2 DEBUG barbicanclient.client [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.564 2 INFO barbicanclient.base [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/0db56896-25ac-4dcd-be52-abdab32cda6e#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.688 2 DEBUG barbicanclient.client [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.689 2 INFO barbicanclient.base [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/0db56896-25ac-4dcd-be52-abdab32cda6e#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.718 2 DEBUG barbicanclient.client [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.719 2 INFO barbicanclient.base [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/0db56896-25ac-4dcd-be52-abdab32cda6e#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.751 2 DEBUG barbicanclient.client [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.752 2 DEBUG nova.virt.libvirt.host [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct  1 13:10:02 np0005464891 nova_compute[259907]:  <usage type="volume">
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <volume>716796d4-34be-42fb-b848-e2b478eb2841</volume>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:  </usage>
Oct  1 13:10:02 np0005464891 nova_compute[259907]: </secret>
Oct  1 13:10:02 np0005464891 nova_compute[259907]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.793 2 DEBUG nova.virt.libvirt.vif [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T17:09:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-497211438',display_name='tempest-TransferEncryptedVolumeTest-server-497211438',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-497211438',id=28,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDg5DQ7zjtFPcwFlQ5c4RmSqdiymCwHuuIH20+rbjv/O1v35DyytGl6//xUvotUS7Kzw36qLhLq5I09Wysu4SY0CkP602jXi/K2rnz98jI+qtsF54Xtpb6f0pP7J8Fn4NQ==',key_name='tempest-TransferEncryptedVolumeTest-797389342',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7101f2ff48f540a08f6ec15b324152c6',ramdisk_id='',reservation_id='r-9qrziz1k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1550217158',owner_user_name='tempest-TransferEncryptedVolumeTest-1550217158-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T17:09:57Z,user_data=None,user_id='c440275c1a1e4cf09fcf789374345bb2',uuid=e655026a-cec2-4cc0-97ae-6bde056da6fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7a26e0a1-31a3-4972-ae7d-6df86b28214c", "address": "fa:16:3e:7e:22:df", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a26e0a1-31", "ovs_interfaceid": "7a26e0a1-31a3-4972-ae7d-6df86b28214c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.794 2 DEBUG nova.network.os_vif_util [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Converting VIF {"id": "7a26e0a1-31a3-4972-ae7d-6df86b28214c", "address": "fa:16:3e:7e:22:df", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a26e0a1-31", "ovs_interfaceid": "7a26e0a1-31a3-4972-ae7d-6df86b28214c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.795 2 DEBUG nova.network.os_vif_util [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:22:df,bridge_name='br-int',has_traffic_filtering=True,id=7a26e0a1-31a3-4972-ae7d-6df86b28214c,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a26e0a1-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.798 2 DEBUG nova.objects.instance [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lazy-loading 'pci_devices' on Instance uuid e655026a-cec2-4cc0-97ae-6bde056da6fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.821 2 DEBUG nova.virt.libvirt.driver [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] End _get_guest_xml xml=<domain type="kvm">
Oct  1 13:10:02 np0005464891 nova_compute[259907]:  <uuid>e655026a-cec2-4cc0-97ae-6bde056da6fb</uuid>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:  <name>instance-0000001c</name>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-497211438</nova:name>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 17:10:01</nova:creationTime>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 13:10:02 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:        <nova:user uuid="c440275c1a1e4cf09fcf789374345bb2">tempest-TransferEncryptedVolumeTest-1550217158-project-member</nova:user>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:        <nova:project uuid="7101f2ff48f540a08f6ec15b324152c6">tempest-TransferEncryptedVolumeTest-1550217158</nova:project>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:        <nova:port uuid="7a26e0a1-31a3-4972-ae7d-6df86b28214c">
Oct  1 13:10:02 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <system>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <entry name="serial">e655026a-cec2-4cc0-97ae-6bde056da6fb</entry>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <entry name="uuid">e655026a-cec2-4cc0-97ae-6bde056da6fb</entry>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    </system>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:  <os>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:  </os>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:  <features>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:  </features>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:  </clock>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:  <devices>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/e655026a-cec2-4cc0-97ae-6bde056da6fb_disk.config">
Oct  1 13:10:02 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      </source>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 13:10:02 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      </auth>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    </disk>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="volumes/volume-716796d4-34be-42fb-b848-e2b478eb2841">
Oct  1 13:10:02 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      </source>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 13:10:02 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      </auth>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <serial>716796d4-34be-42fb-b848-e2b478eb2841</serial>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <encryption format="luks">
Oct  1 13:10:02 np0005464891 nova_compute[259907]:        <secret type="passphrase" uuid="6be8c25c-3951-453b-92a8-3472674c3eff"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      </encryption>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    </disk>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:7e:22:df"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <target dev="tap7a26e0a1-31"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    </interface>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/e655026a-cec2-4cc0-97ae-6bde056da6fb/console.log" append="off"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    </serial>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <video>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    </video>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    </rng>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 13:10:02 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 13:10:02 np0005464891 nova_compute[259907]:  </devices>
Oct  1 13:10:02 np0005464891 nova_compute[259907]: </domain>
Oct  1 13:10:02 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.824 2 DEBUG nova.compute.manager [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Preparing to wait for external event network-vif-plugged-7a26e0a1-31a3-4972-ae7d-6df86b28214c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.824 2 DEBUG oslo_concurrency.lockutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "e655026a-cec2-4cc0-97ae-6bde056da6fb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.825 2 DEBUG oslo_concurrency.lockutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "e655026a-cec2-4cc0-97ae-6bde056da6fb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.825 2 DEBUG oslo_concurrency.lockutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "e655026a-cec2-4cc0-97ae-6bde056da6fb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.826 2 DEBUG nova.virt.libvirt.vif [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T17:09:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-497211438',display_name='tempest-TransferEncryptedVolumeTest-server-497211438',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-497211438',id=28,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDg5DQ7zjtFPcwFlQ5c4RmSqdiymCwHuuIH20+rbjv/O1v35DyytGl6//xUvotUS7Kzw36qLhLq5I09Wysu4SY0CkP602jXi/K2rnz98jI+qtsF54Xtpb6f0pP7J8Fn4NQ==',key_name='tempest-TransferEncryptedVolumeTest-797389342',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7101f2ff48f540a08f6ec15b324152c6',ramdisk_id='',reservation_id='r-9qrziz1k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1550217158',owner_user_name='tempest-TransferEncryptedVolumeTest-1550217158-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T17:09:57Z,user_data=None,user_id='c440275c1a1e4cf09fcf789374345bb2',uuid=e655026a-cec2-4cc0-97ae-6bde056da6fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7a26e0a1-31a3-4972-ae7d-6df86b28214c", "address": "fa:16:3e:7e:22:df", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a26e0a1-31", "ovs_interfaceid": "7a26e0a1-31a3-4972-ae7d-6df86b28214c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.827 2 DEBUG nova.network.os_vif_util [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Converting VIF {"id": "7a26e0a1-31a3-4972-ae7d-6df86b28214c", "address": "fa:16:3e:7e:22:df", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a26e0a1-31", "ovs_interfaceid": "7a26e0a1-31a3-4972-ae7d-6df86b28214c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.828 2 DEBUG nova.network.os_vif_util [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:22:df,bridge_name='br-int',has_traffic_filtering=True,id=7a26e0a1-31a3-4972-ae7d-6df86b28214c,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a26e0a1-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.829 2 DEBUG os_vif [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:22:df,bridge_name='br-int',has_traffic_filtering=True,id=7a26e0a1-31a3-4972-ae7d-6df86b28214c,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a26e0a1-31') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.831 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.832 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.836 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a26e0a1-31, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.837 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7a26e0a1-31, col_values=(('external_ids', {'iface-id': '7a26e0a1-31a3-4972-ae7d-6df86b28214c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7e:22:df', 'vm-uuid': 'e655026a-cec2-4cc0-97ae-6bde056da6fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:02 np0005464891 NetworkManager[44940]: <info>  [1759338602.8408] manager: (tap7a26e0a1-31): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/143)
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.847 2 INFO os_vif [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:22:df,bridge_name='br-int',has_traffic_filtering=True,id=7a26e0a1-31a3-4972-ae7d-6df86b28214c,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a26e0a1-31')#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.911 2 DEBUG nova.virt.libvirt.driver [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.912 2 DEBUG nova.virt.libvirt.driver [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.912 2 DEBUG nova.virt.libvirt.driver [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] No VIF found with MAC fa:16:3e:7e:22:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.913 2 INFO nova.virt.libvirt.driver [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Using config drive#033[00m
Oct  1 13:10:02 np0005464891 nova_compute[259907]: 2025-10-01 17:10:02.951 2 DEBUG nova.storage.rbd_utils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] rbd image e655026a-cec2-4cc0-97ae-6bde056da6fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:10:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2169: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:10:03 np0005464891 nova_compute[259907]: 2025-10-01 17:10:03.785 2 DEBUG nova.network.neutron [req-af2ddc66-a6df-46ac-a726-cb28735e8115 req-2d2496b7-e829-41ae-861c-60ed56a0e2b9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Updated VIF entry in instance network info cache for port 7a26e0a1-31a3-4972-ae7d-6df86b28214c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 13:10:03 np0005464891 nova_compute[259907]: 2025-10-01 17:10:03.786 2 DEBUG nova.network.neutron [req-af2ddc66-a6df-46ac-a726-cb28735e8115 req-2d2496b7-e829-41ae-861c-60ed56a0e2b9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Updating instance_info_cache with network_info: [{"id": "7a26e0a1-31a3-4972-ae7d-6df86b28214c", "address": "fa:16:3e:7e:22:df", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a26e0a1-31", "ovs_interfaceid": "7a26e0a1-31a3-4972-ae7d-6df86b28214c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:10:03 np0005464891 nova_compute[259907]: 2025-10-01 17:10:03.800 2 DEBUG oslo_concurrency.lockutils [req-af2ddc66-a6df-46ac-a726-cb28735e8115 req-2d2496b7-e829-41ae-861c-60ed56a0e2b9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-e655026a-cec2-4cc0-97ae-6bde056da6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:10:03 np0005464891 nova_compute[259907]: 2025-10-01 17:10:03.807 2 INFO nova.virt.libvirt.driver [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Creating config drive at /var/lib/nova/instances/e655026a-cec2-4cc0-97ae-6bde056da6fb/disk.config#033[00m
Oct  1 13:10:03 np0005464891 nova_compute[259907]: 2025-10-01 17:10:03.815 2 DEBUG oslo_concurrency.processutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e655026a-cec2-4cc0-97ae-6bde056da6fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6gmnaz2x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:10:03 np0005464891 nova_compute[259907]: 2025-10-01 17:10:03.959 2 DEBUG oslo_concurrency.processutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e655026a-cec2-4cc0-97ae-6bde056da6fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6gmnaz2x" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:10:03 np0005464891 nova_compute[259907]: 2025-10-01 17:10:03.983 2 DEBUG nova.storage.rbd_utils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] rbd image e655026a-cec2-4cc0-97ae-6bde056da6fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:10:03 np0005464891 nova_compute[259907]: 2025-10-01 17:10:03.987 2 DEBUG oslo_concurrency.processutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e655026a-cec2-4cc0-97ae-6bde056da6fb/disk.config e655026a-cec2-4cc0-97ae-6bde056da6fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:10:04 np0005464891 nova_compute[259907]: 2025-10-01 17:10:04.149 2 DEBUG oslo_concurrency.processutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e655026a-cec2-4cc0-97ae-6bde056da6fb/disk.config e655026a-cec2-4cc0-97ae-6bde056da6fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:10:04 np0005464891 nova_compute[259907]: 2025-10-01 17:10:04.150 2 INFO nova.virt.libvirt.driver [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Deleting local config drive /var/lib/nova/instances/e655026a-cec2-4cc0-97ae-6bde056da6fb/disk.config because it was imported into RBD.#033[00m
Oct  1 13:10:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:10:04 np0005464891 kernel: tap7a26e0a1-31: entered promiscuous mode
Oct  1 13:10:04 np0005464891 NetworkManager[44940]: <info>  [1759338604.2061] manager: (tap7a26e0a1-31): new Tun device (/org/freedesktop/NetworkManager/Devices/144)
Oct  1 13:10:04 np0005464891 nova_compute[259907]: 2025-10-01 17:10:04.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:04 np0005464891 ovn_controller[152409]: 2025-10-01T17:10:04Z|00279|binding|INFO|Claiming lport 7a26e0a1-31a3-4972-ae7d-6df86b28214c for this chassis.
Oct  1 13:10:04 np0005464891 ovn_controller[152409]: 2025-10-01T17:10:04Z|00280|binding|INFO|7a26e0a1-31a3-4972-ae7d-6df86b28214c: Claiming fa:16:3e:7e:22:df 10.100.0.9
Oct  1 13:10:04 np0005464891 nova_compute[259907]: 2025-10-01 17:10:04.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.216 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:22:df 10.100.0.9'], port_security=['fa:16:3e:7e:22:df 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e655026a-cec2-4cc0-97ae-6bde056da6fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d747029d-7cd7-4e92-a356-867cacbb54c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7101f2ff48f540a08f6ec15b324152c6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd4fc8115-d40c-458e-b5f5-c46a5e06662d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ce41f0a-b119-4c05-a337-15f2fa115c91, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=7a26e0a1-31a3-4972-ae7d-6df86b28214c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.219 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 7a26e0a1-31a3-4972-ae7d-6df86b28214c in datapath d747029d-7cd7-4e92-a356-867cacbb54c4 bound to our chassis#033[00m
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.223 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d747029d-7cd7-4e92-a356-867cacbb54c4#033[00m
Oct  1 13:10:04 np0005464891 nova_compute[259907]: 2025-10-01 17:10:04.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:04 np0005464891 ovn_controller[152409]: 2025-10-01T17:10:04Z|00281|binding|INFO|Setting lport 7a26e0a1-31a3-4972-ae7d-6df86b28214c ovn-installed in OVS
Oct  1 13:10:04 np0005464891 ovn_controller[152409]: 2025-10-01T17:10:04Z|00282|binding|INFO|Setting lport 7a26e0a1-31a3-4972-ae7d-6df86b28214c up in Southbound
Oct  1 13:10:04 np0005464891 nova_compute[259907]: 2025-10-01 17:10:04.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.239 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[5bbfc993-9f03-4326-b934-a6cabf7a14a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.240 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd747029d-71 in ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.242 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd747029d-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.242 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[ce9247af-3f0d-4671-a873-29dc3b0acaa7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.243 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[9210c4ae-b75a-45a5-8080-c5cdbc9e7a05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:10:04 np0005464891 systemd-machined[214891]: New machine qemu-28-instance-0000001c.
Oct  1 13:10:04 np0005464891 systemd[1]: Started Virtual Machine qemu-28-instance-0000001c.
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.263 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[bf5cf2ca-1af9-49d8-928e-b3253f52b4d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:10:04 np0005464891 systemd-udevd[311286]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.282 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[19b06fef-cd6e-42e1-a6f2-fc4f364048b4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:10:04 np0005464891 NetworkManager[44940]: <info>  [1759338604.2947] device (tap7a26e0a1-31): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 13:10:04 np0005464891 NetworkManager[44940]: <info>  [1759338604.2955] device (tap7a26e0a1-31): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.321 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[984016f5-ccb0-4e43-8d79-90c934d65f59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:10:04 np0005464891 NetworkManager[44940]: <info>  [1759338604.3334] manager: (tapd747029d-70): new Veth device (/org/freedesktop/NetworkManager/Devices/145)
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.332 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[2e8eb41b-a038-4e48-9a70-45e57298b71f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:10:04 np0005464891 systemd-udevd[311296]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.373 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[2ef93115-be9c-431b-8c13-f1c2458ba1f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.376 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[5507c21e-a9df-44e0-af1d-66c36d222689]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:10:04 np0005464891 podman[311267]: 2025-10-01 17:10:04.379205618 +0000 UTC m=+0.147848023 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 13:10:04 np0005464891 NetworkManager[44940]: <info>  [1759338604.3973] device (tapd747029d-70): carrier: link connected
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.402 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[2d0e12c2-7bc0-48d3-9f42-356452d4c6be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.417 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[65b1d581-6a3d-46e7-9c75-e3120c53624d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd747029d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:a1:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 90], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555297, 'reachable_time': 25503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311330, 'error': None, 'target': 'ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.430 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[afea3bce-b4b9-43de-98af-8d5abe6c0daf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea1:a1a7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 555297, 'tstamp': 555297}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311331, 'error': None, 'target': 'ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.445 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[a0dffda6-6b36-4773-b241-7c4c7ffe904d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd747029d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:a1:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 90], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555297, 'reachable_time': 25503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311332, 'error': None, 'target': 'ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.476 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[edbba53a-3934-4899-af6c-8e97b7b27544]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.538 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[771f6ca5-bfaf-4399-a429-81ae8159a069]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.540 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd747029d-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.540 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.541 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd747029d-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:10:04 np0005464891 NetworkManager[44940]: <info>  [1759338604.5440] manager: (tapd747029d-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Oct  1 13:10:04 np0005464891 nova_compute[259907]: 2025-10-01 17:10:04.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:04 np0005464891 kernel: tapd747029d-70: entered promiscuous mode
Oct  1 13:10:04 np0005464891 nova_compute[259907]: 2025-10-01 17:10:04.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.547 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd747029d-70, col_values=(('external_ids', {'iface-id': '3454e5b0-0c54-4314-89c0-47c1b5603195'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:10:04 np0005464891 nova_compute[259907]: 2025-10-01 17:10:04.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:04 np0005464891 ovn_controller[152409]: 2025-10-01T17:10:04Z|00283|binding|INFO|Releasing lport 3454e5b0-0c54-4314-89c0-47c1b5603195 from this chassis (sb_readonly=0)
Oct  1 13:10:04 np0005464891 nova_compute[259907]: 2025-10-01 17:10:04.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.568 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d747029d-7cd7-4e92-a356-867cacbb54c4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d747029d-7cd7-4e92-a356-867cacbb54c4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.569 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[933dfa88-2943-4c04-83c6-5bf6045a1574]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.570 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-d747029d-7cd7-4e92-a356-867cacbb54c4
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/d747029d-7cd7-4e92-a356-867cacbb54c4.pid.haproxy
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID d747029d-7cd7-4e92-a356-867cacbb54c4
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 13:10:04 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:04.571 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4', 'env', 'PROCESS_TAG=haproxy-d747029d-7cd7-4e92-a356-867cacbb54c4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d747029d-7cd7-4e92-a356-867cacbb54c4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 13:10:04 np0005464891 nova_compute[259907]: 2025-10-01 17:10:04.794 2 DEBUG nova.compute.manager [req-94127cb3-c4bc-4456-b20f-0a2cc1a3a172 req-103a6977-ffd6-4c12-b67d-77e23410ec8b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Received event network-vif-plugged-7a26e0a1-31a3-4972-ae7d-6df86b28214c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:10:04 np0005464891 nova_compute[259907]: 2025-10-01 17:10:04.794 2 DEBUG oslo_concurrency.lockutils [req-94127cb3-c4bc-4456-b20f-0a2cc1a3a172 req-103a6977-ffd6-4c12-b67d-77e23410ec8b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "e655026a-cec2-4cc0-97ae-6bde056da6fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:10:04 np0005464891 nova_compute[259907]: 2025-10-01 17:10:04.794 2 DEBUG oslo_concurrency.lockutils [req-94127cb3-c4bc-4456-b20f-0a2cc1a3a172 req-103a6977-ffd6-4c12-b67d-77e23410ec8b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "e655026a-cec2-4cc0-97ae-6bde056da6fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:10:04 np0005464891 nova_compute[259907]: 2025-10-01 17:10:04.795 2 DEBUG oslo_concurrency.lockutils [req-94127cb3-c4bc-4456-b20f-0a2cc1a3a172 req-103a6977-ffd6-4c12-b67d-77e23410ec8b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "e655026a-cec2-4cc0-97ae-6bde056da6fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:10:04 np0005464891 nova_compute[259907]: 2025-10-01 17:10:04.795 2 DEBUG nova.compute.manager [req-94127cb3-c4bc-4456-b20f-0a2cc1a3a172 req-103a6977-ffd6-4c12-b67d-77e23410ec8b af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Processing event network-vif-plugged-7a26e0a1-31a3-4972-ae7d-6df86b28214c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 13:10:05 np0005464891 podman[311400]: 2025-10-01 17:10:05.018939717 +0000 UTC m=+0.053886580 container create 83a9e9b8398fc662e86fc54cdd8ad525003e0acd0303105a1b523e0f44ba2498 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  1 13:10:05 np0005464891 systemd[1]: Started libpod-conmon-83a9e9b8398fc662e86fc54cdd8ad525003e0acd0303105a1b523e0f44ba2498.scope.
Oct  1 13:10:05 np0005464891 podman[311400]: 2025-10-01 17:10:04.993187175 +0000 UTC m=+0.028134038 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 13:10:05 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:10:05 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c614e0c306fd157051f4d5b5f22037990a009baaf2e837dc7c42d9af115dff7e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 13:10:05 np0005464891 podman[311400]: 2025-10-01 17:10:05.117655033 +0000 UTC m=+0.152601926 container init 83a9e9b8398fc662e86fc54cdd8ad525003e0acd0303105a1b523e0f44ba2498 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  1 13:10:05 np0005464891 podman[311400]: 2025-10-01 17:10:05.124255175 +0000 UTC m=+0.159202038 container start 83a9e9b8398fc662e86fc54cdd8ad525003e0acd0303105a1b523e0f44ba2498 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 13:10:05 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[311415]: [NOTICE]   (311419) : New worker (311421) forked
Oct  1 13:10:05 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[311415]: [NOTICE]   (311419) : Loading success.
Oct  1 13:10:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2170: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:10:06 np0005464891 nova_compute[259907]: 2025-10-01 17:10:06.879 2 DEBUG nova.compute.manager [req-13dcfa7b-d9f3-49c1-b605-142b915c71da req-199cbfe8-cacd-4333-a307-fa311fbfb7be af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Received event network-vif-plugged-7a26e0a1-31a3-4972-ae7d-6df86b28214c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:10:06 np0005464891 nova_compute[259907]: 2025-10-01 17:10:06.880 2 DEBUG oslo_concurrency.lockutils [req-13dcfa7b-d9f3-49c1-b605-142b915c71da req-199cbfe8-cacd-4333-a307-fa311fbfb7be af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "e655026a-cec2-4cc0-97ae-6bde056da6fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:10:06 np0005464891 nova_compute[259907]: 2025-10-01 17:10:06.880 2 DEBUG oslo_concurrency.lockutils [req-13dcfa7b-d9f3-49c1-b605-142b915c71da req-199cbfe8-cacd-4333-a307-fa311fbfb7be af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "e655026a-cec2-4cc0-97ae-6bde056da6fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:10:06 np0005464891 nova_compute[259907]: 2025-10-01 17:10:06.881 2 DEBUG oslo_concurrency.lockutils [req-13dcfa7b-d9f3-49c1-b605-142b915c71da req-199cbfe8-cacd-4333-a307-fa311fbfb7be af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "e655026a-cec2-4cc0-97ae-6bde056da6fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:10:06 np0005464891 nova_compute[259907]: 2025-10-01 17:10:06.881 2 DEBUG nova.compute.manager [req-13dcfa7b-d9f3-49c1-b605-142b915c71da req-199cbfe8-cacd-4333-a307-fa311fbfb7be af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] No waiting events found dispatching network-vif-plugged-7a26e0a1-31a3-4972-ae7d-6df86b28214c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 13:10:06 np0005464891 nova_compute[259907]: 2025-10-01 17:10:06.882 2 WARNING nova.compute.manager [req-13dcfa7b-d9f3-49c1-b605-142b915c71da req-199cbfe8-cacd-4333-a307-fa311fbfb7be af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Received unexpected event network-vif-plugged-7a26e0a1-31a3-4972-ae7d-6df86b28214c for instance with vm_state building and task_state spawning.#033[00m
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2171: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 170 B/s wr, 2 op/s
Oct  1 13:10:07 np0005464891 podman[311460]: 2025-10-01 17:10:07.65943423 +0000 UTC m=+0.063616817 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3)
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.667 2 DEBUG nova.compute.manager [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.669 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759338607.6682603, e655026a-cec2-4cc0-97ae-6bde056da6fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.669 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] VM Started (Lifecycle Event)#033[00m
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.672 2 DEBUG nova.virt.libvirt.driver [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.674 2 INFO nova.virt.libvirt.driver [-] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Instance spawned successfully.#033[00m
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.674 2 DEBUG nova.virt.libvirt.driver [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.688 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.692 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.696 2 DEBUG nova.virt.libvirt.driver [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.696 2 DEBUG nova.virt.libvirt.driver [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.696 2 DEBUG nova.virt.libvirt.driver [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.697 2 DEBUG nova.virt.libvirt.driver [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.697 2 DEBUG nova.virt.libvirt.driver [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.697 2 DEBUG nova.virt.libvirt.driver [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.707 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.707 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759338607.668358, e655026a-cec2-4cc0-97ae-6bde056da6fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.708 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] VM Paused (Lifecycle Event)
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.725 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.729 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759338607.6694217, e655026a-cec2-4cc0-97ae-6bde056da6fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.729 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] VM Resumed (Lifecycle Event)
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.749 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.752 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.761 2 INFO nova.compute.manager [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Took 8.71 seconds to spawn the instance on the hypervisor.
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.761 2 DEBUG nova.compute.manager [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.769 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.819 2 INFO nova.compute.manager [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Took 10.95 seconds to build instance.
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.835 2 DEBUG oslo_concurrency.lockutils [None req-27a0fcda-0099-48b4-a5fd-4e96155d384d c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "e655026a-cec2-4cc0-97ae-6bde056da6fb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 13:10:07 np0005464891 nova_compute[259907]: 2025-10-01 17:10:07.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:10:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:10:08 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:10:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 13:10:08 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:10:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 13:10:08 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:10:08 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev de9bd655-3148-4b1b-a1f8-8522b357135a does not exist
Oct  1 13:10:08 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 5aac1726-4e00-4eca-854f-0338802d0ae4 does not exist
Oct  1 13:10:08 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 3a91cdd4-c498-42c7-9978-8ffea29321f2 does not exist
Oct  1 13:10:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 13:10:08 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 13:10:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 13:10:08 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:10:08 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:10:08 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:10:08 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:10:08 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:10:08 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:10:08 np0005464891 podman[311727]: 2025-10-01 17:10:08.991587622 +0000 UTC m=+0.037433406 container create 74b9f0a39ed150368e01f6bca1c9dfc02aef26fc66466900c792b47b61eb00e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 13:10:09 np0005464891 systemd[1]: Started libpod-conmon-74b9f0a39ed150368e01f6bca1c9dfc02aef26fc66466900c792b47b61eb00e3.scope.
Oct  1 13:10:09 np0005464891 podman[311727]: 2025-10-01 17:10:08.974901011 +0000 UTC m=+0.020746815 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:10:09 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:10:09 np0005464891 podman[311727]: 2025-10-01 17:10:09.092728885 +0000 UTC m=+0.138574699 container init 74b9f0a39ed150368e01f6bca1c9dfc02aef26fc66466900c792b47b61eb00e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ganguly, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 13:10:09 np0005464891 podman[311727]: 2025-10-01 17:10:09.10339711 +0000 UTC m=+0.149242914 container start 74b9f0a39ed150368e01f6bca1c9dfc02aef26fc66466900c792b47b61eb00e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:10:09 np0005464891 podman[311727]: 2025-10-01 17:10:09.10741847 +0000 UTC m=+0.153264274 container attach 74b9f0a39ed150368e01f6bca1c9dfc02aef26fc66466900c792b47b61eb00e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  1 13:10:09 np0005464891 cranky_ganguly[311744]: 167 167
Oct  1 13:10:09 np0005464891 systemd[1]: libpod-74b9f0a39ed150368e01f6bca1c9dfc02aef26fc66466900c792b47b61eb00e3.scope: Deactivated successfully.
Oct  1 13:10:09 np0005464891 conmon[311744]: conmon 74b9f0a39ed150368e01 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-74b9f0a39ed150368e01f6bca1c9dfc02aef26fc66466900c792b47b61eb00e3.scope/container/memory.events
Oct  1 13:10:09 np0005464891 podman[311727]: 2025-10-01 17:10:09.110343171 +0000 UTC m=+0.156188945 container died 74b9f0a39ed150368e01f6bca1c9dfc02aef26fc66466900c792b47b61eb00e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ganguly, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:10:09 np0005464891 systemd[1]: var-lib-containers-storage-overlay-f2aede618632fe4444e5b21e050de95b4ba93e8cfac5f8f47871157518372aa6-merged.mount: Deactivated successfully.
Oct  1 13:10:09 np0005464891 podman[311727]: 2025-10-01 17:10:09.157337369 +0000 UTC m=+0.203183153 container remove 74b9f0a39ed150368e01f6bca1c9dfc02aef26fc66466900c792b47b61eb00e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ganguly, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  1 13:10:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:10:09 np0005464891 systemd[1]: libpod-conmon-74b9f0a39ed150368e01f6bca1c9dfc02aef26fc66466900c792b47b61eb00e3.scope: Deactivated successfully.
Oct  1 13:10:09 np0005464891 podman[311767]: 2025-10-01 17:10:09.336742474 +0000 UTC m=+0.045415555 container create a27b183714a55e8b8a49d99215d9f223fb703b760a16512bcc9eb7dd67bb2645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_golick, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:10:09 np0005464891 systemd[1]: Started libpod-conmon-a27b183714a55e8b8a49d99215d9f223fb703b760a16512bcc9eb7dd67bb2645.scope.
Oct  1 13:10:09 np0005464891 podman[311767]: 2025-10-01 17:10:09.317894383 +0000 UTC m=+0.026567494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:10:09 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:10:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6291bfde417046e50d5d72c42612e72e47319ac9b7b7d0df717f361d7bf96fa9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:10:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6291bfde417046e50d5d72c42612e72e47319ac9b7b7d0df717f361d7bf96fa9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:10:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6291bfde417046e50d5d72c42612e72e47319ac9b7b7d0df717f361d7bf96fa9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:10:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6291bfde417046e50d5d72c42612e72e47319ac9b7b7d0df717f361d7bf96fa9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:10:09 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6291bfde417046e50d5d72c42612e72e47319ac9b7b7d0df717f361d7bf96fa9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 13:10:09 np0005464891 podman[311767]: 2025-10-01 17:10:09.449182019 +0000 UTC m=+0.157855120 container init a27b183714a55e8b8a49d99215d9f223fb703b760a16512bcc9eb7dd67bb2645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_golick, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:10:09 np0005464891 podman[311767]: 2025-10-01 17:10:09.45791665 +0000 UTC m=+0.166589741 container start a27b183714a55e8b8a49d99215d9f223fb703b760a16512bcc9eb7dd67bb2645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  1 13:10:09 np0005464891 podman[311767]: 2025-10-01 17:10:09.461480739 +0000 UTC m=+0.170153840 container attach a27b183714a55e8b8a49d99215d9f223fb703b760a16512bcc9eb7dd67bb2645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:10:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2172: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 143 KiB/s rd, 13 KiB/s wr, 13 op/s
Oct  1 13:10:10 np0005464891 pensive_golick[311784]: --> passed data devices: 0 physical, 3 LVM
Oct  1 13:10:10 np0005464891 pensive_golick[311784]: --> relative data size: 1.0
Oct  1 13:10:10 np0005464891 pensive_golick[311784]: --> All data devices are unavailable
Oct  1 13:10:10 np0005464891 systemd[1]: libpod-a27b183714a55e8b8a49d99215d9f223fb703b760a16512bcc9eb7dd67bb2645.scope: Deactivated successfully.
Oct  1 13:10:10 np0005464891 systemd[1]: libpod-a27b183714a55e8b8a49d99215d9f223fb703b760a16512bcc9eb7dd67bb2645.scope: Consumed 1.119s CPU time.
Oct  1 13:10:10 np0005464891 podman[311767]: 2025-10-01 17:10:10.640406517 +0000 UTC m=+1.349079638 container died a27b183714a55e8b8a49d99215d9f223fb703b760a16512bcc9eb7dd67bb2645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_golick, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:10:10 np0005464891 systemd[1]: var-lib-containers-storage-overlay-6291bfde417046e50d5d72c42612e72e47319ac9b7b7d0df717f361d7bf96fa9-merged.mount: Deactivated successfully.
Oct  1 13:10:10 np0005464891 podman[311767]: 2025-10-01 17:10:10.709846195 +0000 UTC m=+1.418519296 container remove a27b183714a55e8b8a49d99215d9f223fb703b760a16512bcc9eb7dd67bb2645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_golick, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:10:10 np0005464891 systemd[1]: libpod-conmon-a27b183714a55e8b8a49d99215d9f223fb703b760a16512bcc9eb7dd67bb2645.scope: Deactivated successfully.
Oct  1 13:10:10 np0005464891 podman[311814]: 2025-10-01 17:10:10.765309117 +0000 UTC m=+0.077287646 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  1 13:10:11 np0005464891 podman[311987]: 2025-10-01 17:10:11.353369008 +0000 UTC m=+0.045558789 container create ee6084556aaafe0f02deca45206841e2d1130dd61a243e6a0aaca6c8a1be28d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:10:11 np0005464891 systemd[1]: Started libpod-conmon-ee6084556aaafe0f02deca45206841e2d1130dd61a243e6a0aaca6c8a1be28d8.scope.
Oct  1 13:10:11 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:10:11 np0005464891 podman[311987]: 2025-10-01 17:10:11.42333 +0000 UTC m=+0.115519801 container init ee6084556aaafe0f02deca45206841e2d1130dd61a243e6a0aaca6c8a1be28d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chatterjee, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:10:11 np0005464891 podman[311987]: 2025-10-01 17:10:11.335797973 +0000 UTC m=+0.027987774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:10:11 np0005464891 podman[311987]: 2025-10-01 17:10:11.434566611 +0000 UTC m=+0.126756402 container start ee6084556aaafe0f02deca45206841e2d1130dd61a243e6a0aaca6c8a1be28d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chatterjee, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:10:11 np0005464891 podman[311987]: 2025-10-01 17:10:11.437406959 +0000 UTC m=+0.129596760 container attach ee6084556aaafe0f02deca45206841e2d1130dd61a243e6a0aaca6c8a1be28d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chatterjee, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:10:11 np0005464891 compassionate_chatterjee[312003]: 167 167
Oct  1 13:10:11 np0005464891 podman[311987]: 2025-10-01 17:10:11.439934849 +0000 UTC m=+0.132124630 container died ee6084556aaafe0f02deca45206841e2d1130dd61a243e6a0aaca6c8a1be28d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 13:10:11 np0005464891 systemd[1]: libpod-ee6084556aaafe0f02deca45206841e2d1130dd61a243e6a0aaca6c8a1be28d8.scope: Deactivated successfully.
Oct  1 13:10:11 np0005464891 systemd[1]: var-lib-containers-storage-overlay-de621af45779e161416756b29e94e5df7f800365934d948f80fa36cc737be29c-merged.mount: Deactivated successfully.
Oct  1 13:10:11 np0005464891 podman[311987]: 2025-10-01 17:10:11.501590262 +0000 UTC m=+0.193780053 container remove ee6084556aaafe0f02deca45206841e2d1130dd61a243e6a0aaca6c8a1be28d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 13:10:11 np0005464891 systemd[1]: libpod-conmon-ee6084556aaafe0f02deca45206841e2d1130dd61a243e6a0aaca6c8a1be28d8.scope: Deactivated successfully.
Oct  1 13:10:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2173: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 952 KiB/s rd, 13 KiB/s wr, 40 op/s
Oct  1 13:10:11 np0005464891 podman[312027]: 2025-10-01 17:10:11.71337465 +0000 UTC m=+0.069174290 container create 62833712eff5abbc0e63db348e3d62ce7d0183905062fbeb11de6dc113403c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 13:10:11 np0005464891 systemd[1]: Started libpod-conmon-62833712eff5abbc0e63db348e3d62ce7d0183905062fbeb11de6dc113403c29.scope.
Oct  1 13:10:11 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:10:11 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08d54172d0296f3b8bbc81c14d76d05618d28e029b2ffe112c02e1c456d3577/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:10:11 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08d54172d0296f3b8bbc81c14d76d05618d28e029b2ffe112c02e1c456d3577/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:10:11 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08d54172d0296f3b8bbc81c14d76d05618d28e029b2ffe112c02e1c456d3577/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:10:11 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08d54172d0296f3b8bbc81c14d76d05618d28e029b2ffe112c02e1c456d3577/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:10:11 np0005464891 podman[312027]: 2025-10-01 17:10:11.685395098 +0000 UTC m=+0.041194778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:10:11 np0005464891 podman[312027]: 2025-10-01 17:10:11.807314115 +0000 UTC m=+0.163113755 container init 62833712eff5abbc0e63db348e3d62ce7d0183905062fbeb11de6dc113403c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_taussig, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:10:11 np0005464891 podman[312027]: 2025-10-01 17:10:11.819122951 +0000 UTC m=+0.174922541 container start 62833712eff5abbc0e63db348e3d62ce7d0183905062fbeb11de6dc113403c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:10:11 np0005464891 podman[312027]: 2025-10-01 17:10:11.824690065 +0000 UTC m=+0.180489715 container attach 62833712eff5abbc0e63db348e3d62ce7d0183905062fbeb11de6dc113403c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  1 13:10:12 np0005464891 nova_compute[259907]: 2025-10-01 17:10:12.111 2 DEBUG nova.compute.manager [req-9a0db8a2-86c5-4cbb-aadb-273267644fe8 req-df483cde-eb84-424b-9e28-e53d4b3f7f8f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Received event network-changed-7a26e0a1-31a3-4972-ae7d-6df86b28214c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:10:12 np0005464891 nova_compute[259907]: 2025-10-01 17:10:12.111 2 DEBUG nova.compute.manager [req-9a0db8a2-86c5-4cbb-aadb-273267644fe8 req-df483cde-eb84-424b-9e28-e53d4b3f7f8f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Refreshing instance network info cache due to event network-changed-7a26e0a1-31a3-4972-ae7d-6df86b28214c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 13:10:12 np0005464891 nova_compute[259907]: 2025-10-01 17:10:12.112 2 DEBUG oslo_concurrency.lockutils [req-9a0db8a2-86c5-4cbb-aadb-273267644fe8 req-df483cde-eb84-424b-9e28-e53d4b3f7f8f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-e655026a-cec2-4cc0-97ae-6bde056da6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:10:12 np0005464891 nova_compute[259907]: 2025-10-01 17:10:12.112 2 DEBUG oslo_concurrency.lockutils [req-9a0db8a2-86c5-4cbb-aadb-273267644fe8 req-df483cde-eb84-424b-9e28-e53d4b3f7f8f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-e655026a-cec2-4cc0-97ae-6bde056da6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:10:12 np0005464891 nova_compute[259907]: 2025-10-01 17:10:12.112 2 DEBUG nova.network.neutron [req-9a0db8a2-86c5-4cbb-aadb-273267644fe8 req-df483cde-eb84-424b-9e28-e53d4b3f7f8f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Refreshing network info cache for port 7a26e0a1-31a3-4972-ae7d-6df86b28214c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 13:10:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:10:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:10:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:10:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:10:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:10:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:10:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_17:10:12
Oct  1 13:10:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 13:10:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 13:10:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'vms', 'volumes', 'default.rgw.log', 'backups', 'default.rgw.control']
Oct  1 13:10:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 13:10:12 np0005464891 nova_compute[259907]: 2025-10-01 17:10:12.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:12.470 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:10:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:12.471 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:10:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:12.472 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:10:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 13:10:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 13:10:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:10:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:10:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:10:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:10:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:10:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:10:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:10:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]: {
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:    "0": [
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:        {
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "devices": [
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "/dev/loop3"
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            ],
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "lv_name": "ceph_lv0",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "lv_size": "21470642176",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "name": "ceph_lv0",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "tags": {
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.cluster_name": "ceph",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.crush_device_class": "",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.encrypted": "0",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.osd_id": "0",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.type": "block",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.vdo": "0"
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            },
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "type": "block",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "vg_name": "ceph_vg0"
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:        }
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:    ],
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:    "1": [
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:        {
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "devices": [
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "/dev/loop4"
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            ],
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "lv_name": "ceph_lv1",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "lv_size": "21470642176",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "name": "ceph_lv1",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "tags": {
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.cluster_name": "ceph",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.crush_device_class": "",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.encrypted": "0",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.osd_id": "1",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.type": "block",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.vdo": "0"
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            },
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "type": "block",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "vg_name": "ceph_vg1"
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:        }
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:    ],
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:    "2": [
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:        {
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "devices": [
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "/dev/loop5"
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            ],
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "lv_name": "ceph_lv2",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "lv_size": "21470642176",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "name": "ceph_lv2",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "tags": {
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.cluster_name": "ceph",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.crush_device_class": "",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.encrypted": "0",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.osd_id": "2",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.type": "block",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:                "ceph.vdo": "0"
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            },
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "type": "block",
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:            "vg_name": "ceph_vg2"
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:        }
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]:    ]
Oct  1 13:10:12 np0005464891 youthful_taussig[312044]: }
Oct  1 13:10:12 np0005464891 systemd[1]: libpod-62833712eff5abbc0e63db348e3d62ce7d0183905062fbeb11de6dc113403c29.scope: Deactivated successfully.
Oct  1 13:10:12 np0005464891 podman[312027]: 2025-10-01 17:10:12.679607035 +0000 UTC m=+1.035406625 container died 62833712eff5abbc0e63db348e3d62ce7d0183905062fbeb11de6dc113403c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_taussig, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 13:10:12 np0005464891 systemd[1]: var-lib-containers-storage-overlay-d08d54172d0296f3b8bbc81c14d76d05618d28e029b2ffe112c02e1c456d3577-merged.mount: Deactivated successfully.
Oct  1 13:10:12 np0005464891 podman[312027]: 2025-10-01 17:10:12.735754476 +0000 UTC m=+1.091554066 container remove 62833712eff5abbc0e63db348e3d62ce7d0183905062fbeb11de6dc113403c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_taussig, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct  1 13:10:12 np0005464891 systemd[1]: libpod-conmon-62833712eff5abbc0e63db348e3d62ce7d0183905062fbeb11de6dc113403c29.scope: Deactivated successfully.
Oct  1 13:10:12 np0005464891 nova_compute[259907]: 2025-10-01 17:10:12.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:13 np0005464891 podman[312206]: 2025-10-01 17:10:13.523965995 +0000 UTC m=+0.039361908 container create 1400317adc815bb2e4cb6d02d15822738df1995cd55d9ea8279e8c7a6e7181a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hertz, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 13:10:13 np0005464891 systemd[1]: Started libpod-conmon-1400317adc815bb2e4cb6d02d15822738df1995cd55d9ea8279e8c7a6e7181a0.scope.
Oct  1 13:10:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2174: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct  1 13:10:13 np0005464891 podman[312206]: 2025-10-01 17:10:13.50784814 +0000 UTC m=+0.023244083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:10:13 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:10:13 np0005464891 podman[312206]: 2025-10-01 17:10:13.626943768 +0000 UTC m=+0.142339731 container init 1400317adc815bb2e4cb6d02d15822738df1995cd55d9ea8279e8c7a6e7181a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hertz, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 13:10:13 np0005464891 podman[312206]: 2025-10-01 17:10:13.635311979 +0000 UTC m=+0.150707892 container start 1400317adc815bb2e4cb6d02d15822738df1995cd55d9ea8279e8c7a6e7181a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 13:10:13 np0005464891 suspicious_hertz[312222]: 167 167
Oct  1 13:10:13 np0005464891 systemd[1]: libpod-1400317adc815bb2e4cb6d02d15822738df1995cd55d9ea8279e8c7a6e7181a0.scope: Deactivated successfully.
Oct  1 13:10:13 np0005464891 podman[312206]: 2025-10-01 17:10:13.639954118 +0000 UTC m=+0.155350041 container attach 1400317adc815bb2e4cb6d02d15822738df1995cd55d9ea8279e8c7a6e7181a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 13:10:13 np0005464891 podman[312206]: 2025-10-01 17:10:13.64039111 +0000 UTC m=+0.155787023 container died 1400317adc815bb2e4cb6d02d15822738df1995cd55d9ea8279e8c7a6e7181a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hertz, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Oct  1 13:10:13 np0005464891 systemd[1]: var-lib-containers-storage-overlay-70ea93aa3c151d397357f93ee58c81d5733d5cf17892e3f6d0504b86e505071b-merged.mount: Deactivated successfully.
Oct  1 13:10:13 np0005464891 podman[312206]: 2025-10-01 17:10:13.677268468 +0000 UTC m=+0.192664381 container remove 1400317adc815bb2e4cb6d02d15822738df1995cd55d9ea8279e8c7a6e7181a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 13:10:13 np0005464891 systemd[1]: libpod-conmon-1400317adc815bb2e4cb6d02d15822738df1995cd55d9ea8279e8c7a6e7181a0.scope: Deactivated successfully.
Oct  1 13:10:13 np0005464891 podman[312245]: 2025-10-01 17:10:13.885708825 +0000 UTC m=+0.053559460 container create fa398d01543dde2ddcca414db7e3923d13b6e9fab9fc1eb5d50776258288c634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shtern, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:10:13 np0005464891 systemd[1]: Started libpod-conmon-fa398d01543dde2ddcca414db7e3923d13b6e9fab9fc1eb5d50776258288c634.scope.
Oct  1 13:10:13 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:10:13 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ffe886df67b0d5615d7d9540ad7d1682581bff81db9bca4e1d8ae2c0cf3095d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:10:13 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ffe886df67b0d5615d7d9540ad7d1682581bff81db9bca4e1d8ae2c0cf3095d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:10:13 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ffe886df67b0d5615d7d9540ad7d1682581bff81db9bca4e1d8ae2c0cf3095d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:10:13 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ffe886df67b0d5615d7d9540ad7d1682581bff81db9bca4e1d8ae2c0cf3095d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:10:13 np0005464891 podman[312245]: 2025-10-01 17:10:13.863403699 +0000 UTC m=+0.031254334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:10:13 np0005464891 podman[312245]: 2025-10-01 17:10:13.96011071 +0000 UTC m=+0.127961365 container init fa398d01543dde2ddcca414db7e3923d13b6e9fab9fc1eb5d50776258288c634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  1 13:10:13 np0005464891 podman[312245]: 2025-10-01 17:10:13.966522777 +0000 UTC m=+0.134373382 container start fa398d01543dde2ddcca414db7e3923d13b6e9fab9fc1eb5d50776258288c634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:10:13 np0005464891 podman[312245]: 2025-10-01 17:10:13.972541284 +0000 UTC m=+0.140391909 container attach fa398d01543dde2ddcca414db7e3923d13b6e9fab9fc1eb5d50776258288c634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 13:10:14 np0005464891 nova_compute[259907]: 2025-10-01 17:10:14.012 2 DEBUG nova.network.neutron [req-9a0db8a2-86c5-4cbb-aadb-273267644fe8 req-df483cde-eb84-424b-9e28-e53d4b3f7f8f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Updated VIF entry in instance network info cache for port 7a26e0a1-31a3-4972-ae7d-6df86b28214c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 13:10:14 np0005464891 nova_compute[259907]: 2025-10-01 17:10:14.013 2 DEBUG nova.network.neutron [req-9a0db8a2-86c5-4cbb-aadb-273267644fe8 req-df483cde-eb84-424b-9e28-e53d4b3f7f8f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Updating instance_info_cache with network_info: [{"id": "7a26e0a1-31a3-4972-ae7d-6df86b28214c", "address": "fa:16:3e:7e:22:df", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a26e0a1-31", "ovs_interfaceid": "7a26e0a1-31a3-4972-ae7d-6df86b28214c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:10:14 np0005464891 nova_compute[259907]: 2025-10-01 17:10:14.029 2 DEBUG oslo_concurrency.lockutils [req-9a0db8a2-86c5-4cbb-aadb-273267644fe8 req-df483cde-eb84-424b-9e28-e53d4b3f7f8f af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-e655026a-cec2-4cc0-97ae-6bde056da6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:10:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:10:15 np0005464891 hopeful_shtern[312261]: {
Oct  1 13:10:15 np0005464891 hopeful_shtern[312261]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 13:10:15 np0005464891 hopeful_shtern[312261]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:10:15 np0005464891 hopeful_shtern[312261]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 13:10:15 np0005464891 hopeful_shtern[312261]:        "osd_id": 2,
Oct  1 13:10:15 np0005464891 hopeful_shtern[312261]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:10:15 np0005464891 hopeful_shtern[312261]:        "type": "bluestore"
Oct  1 13:10:15 np0005464891 hopeful_shtern[312261]:    },
Oct  1 13:10:15 np0005464891 hopeful_shtern[312261]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 13:10:15 np0005464891 hopeful_shtern[312261]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:10:15 np0005464891 hopeful_shtern[312261]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 13:10:15 np0005464891 hopeful_shtern[312261]:        "osd_id": 0,
Oct  1 13:10:15 np0005464891 hopeful_shtern[312261]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:10:15 np0005464891 hopeful_shtern[312261]:        "type": "bluestore"
Oct  1 13:10:15 np0005464891 hopeful_shtern[312261]:    },
Oct  1 13:10:15 np0005464891 hopeful_shtern[312261]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 13:10:15 np0005464891 hopeful_shtern[312261]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:10:15 np0005464891 hopeful_shtern[312261]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 13:10:15 np0005464891 hopeful_shtern[312261]:        "osd_id": 1,
Oct  1 13:10:15 np0005464891 hopeful_shtern[312261]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:10:15 np0005464891 hopeful_shtern[312261]:        "type": "bluestore"
Oct  1 13:10:15 np0005464891 hopeful_shtern[312261]:    }
Oct  1 13:10:15 np0005464891 hopeful_shtern[312261]: }
Oct  1 13:10:15 np0005464891 systemd[1]: libpod-fa398d01543dde2ddcca414db7e3923d13b6e9fab9fc1eb5d50776258288c634.scope: Deactivated successfully.
Oct  1 13:10:15 np0005464891 systemd[1]: libpod-fa398d01543dde2ddcca414db7e3923d13b6e9fab9fc1eb5d50776258288c634.scope: Consumed 1.069s CPU time.
Oct  1 13:10:15 np0005464891 podman[312245]: 2025-10-01 17:10:15.033938416 +0000 UTC m=+1.201789021 container died fa398d01543dde2ddcca414db7e3923d13b6e9fab9fc1eb5d50776258288c634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shtern, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 13:10:15 np0005464891 systemd[1]: var-lib-containers-storage-overlay-7ffe886df67b0d5615d7d9540ad7d1682581bff81db9bca4e1d8ae2c0cf3095d-merged.mount: Deactivated successfully.
Oct  1 13:10:15 np0005464891 podman[312245]: 2025-10-01 17:10:15.456862057 +0000 UTC m=+1.624712662 container remove fa398d01543dde2ddcca414db7e3923d13b6e9fab9fc1eb5d50776258288c634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 13:10:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 13:10:15 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:10:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 13:10:15 np0005464891 systemd[1]: libpod-conmon-fa398d01543dde2ddcca414db7e3923d13b6e9fab9fc1eb5d50776258288c634.scope: Deactivated successfully.
Oct  1 13:10:15 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:10:15 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev e1bfba38-bb40-4e92-85d3-55dd0481e2c5 does not exist
Oct  1 13:10:15 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev d7e14aa8-e175-4791-bd89-58934e9a627a does not exist
Oct  1 13:10:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2175: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct  1 13:10:16 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:10:16 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:10:17 np0005464891 nova_compute[259907]: 2025-10-01 17:10:17.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2176: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct  1 13:10:17 np0005464891 nova_compute[259907]: 2025-10-01 17:10:17.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:18 np0005464891 ovn_controller[152409]: 2025-10-01T17:10:18Z|00066|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.5 does not match offer 10.100.0.9
Oct  1 13:10:18 np0005464891 ovn_controller[152409]: 2025-10-01T17:10:18Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:7e:22:df 10.100.0.9
Oct  1 13:10:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:10:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2177: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 75 op/s
Oct  1 13:10:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2178: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 767 B/s wr, 81 op/s
Oct  1 13:10:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 13:10:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:10:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 13:10:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:10:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Oct  1 13:10:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:10:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0054373870629029104 of space, bias 1.0, pg target 1.6312161188708731 quantized to 32 (current 32)
Oct  1 13:10:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:10:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.9013621638340822e-05 quantized to 32 (current 32)
Oct  1 13:10:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:10:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.19918670028325844 quantized to 32 (current 32)
Oct  1 13:10:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:10:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct  1 13:10:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:10:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:10:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:10:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct  1 13:10:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:10:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct  1 13:10:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:10:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:10:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:10:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct  1 13:10:22 np0005464891 nova_compute[259907]: 2025-10-01 17:10:22.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:22 np0005464891 nova_compute[259907]: 2025-10-01 17:10:22.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:23 np0005464891 ovn_controller[152409]: 2025-10-01T17:10:23Z|00068|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.5 does not match offer 10.100.0.9
Oct  1 13:10:23 np0005464891 ovn_controller[152409]: 2025-10-01T17:10:23Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:7e:22:df 10.100.0.9
Oct  1 13:10:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2179: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 7.4 KiB/s wr, 77 op/s
Oct  1 13:10:23 np0005464891 ovn_controller[152409]: 2025-10-01T17:10:23Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7e:22:df 10.100.0.9
Oct  1 13:10:23 np0005464891 ovn_controller[152409]: 2025-10-01T17:10:23Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7e:22:df 10.100.0.9
Oct  1 13:10:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:10:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2180: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 7.4 KiB/s wr, 44 op/s
Oct  1 13:10:27 np0005464891 nova_compute[259907]: 2025-10-01 17:10:27.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2181: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 7.4 KiB/s wr, 44 op/s
Oct  1 13:10:27 np0005464891 nova_compute[259907]: 2025-10-01 17:10:27.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:10:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2182: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 21 KiB/s wr, 44 op/s
Oct  1 13:10:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2183: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 568 KiB/s rd, 21 KiB/s wr, 40 op/s
Oct  1 13:10:31 np0005464891 podman[312357]: 2025-10-01 17:10:31.979185703 +0000 UTC m=+0.075761123 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  1 13:10:32 np0005464891 nova_compute[259907]: 2025-10-01 17:10:32.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:32 np0005464891 nova_compute[259907]: 2025-10-01 17:10:32.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2184: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 300 KiB/s rd, 24 KiB/s wr, 23 op/s
Oct  1 13:10:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:10:34 np0005464891 podman[312377]: 2025-10-01 17:10:34.966828243 +0000 UTC m=+0.076393701 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct  1 13:10:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2185: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s wr, 0 op/s
Oct  1 13:10:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:10:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1335910578' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:10:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:10:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1335910578' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:10:37 np0005464891 nova_compute[259907]: 2025-10-01 17:10:37.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2186: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s wr, 1 op/s
Oct  1 13:10:37 np0005464891 nova_compute[259907]: 2025-10-01 17:10:37.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:37 np0005464891 podman[312405]: 2025-10-01 17:10:37.951836081 +0000 UTC m=+0.067103494 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  1 13:10:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:10:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2187: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s wr, 2 op/s
Oct  1 13:10:39 np0005464891 nova_compute[259907]: 2025-10-01 17:10:39.803 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:10:40 np0005464891 podman[312425]: 2025-10-01 17:10:40.971707523 +0000 UTC m=+0.080802012 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct  1 13:10:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2188: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Oct  1 13:10:41 np0005464891 ovn_controller[152409]: 2025-10-01T17:10:41Z|00284|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Oct  1 13:10:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:10:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:10:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:10:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:10:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:10:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:10:42 np0005464891 nova_compute[259907]: 2025-10-01 17:10:42.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:42 np0005464891 nova_compute[259907]: 2025-10-01 17:10:42.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2189: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Oct  1 13:10:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.245 2 DEBUG oslo_concurrency.lockutils [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "e655026a-cec2-4cc0-97ae-6bde056da6fb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.246 2 DEBUG oslo_concurrency.lockutils [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "e655026a-cec2-4cc0-97ae-6bde056da6fb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.246 2 DEBUG oslo_concurrency.lockutils [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "e655026a-cec2-4cc0-97ae-6bde056da6fb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.246 2 DEBUG oslo_concurrency.lockutils [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "e655026a-cec2-4cc0-97ae-6bde056da6fb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.247 2 DEBUG oslo_concurrency.lockutils [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "e655026a-cec2-4cc0-97ae-6bde056da6fb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.248 2 INFO nova.compute.manager [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Terminating instance#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.248 2 DEBUG nova.compute.manager [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 13:10:44 np0005464891 kernel: tap7a26e0a1-31 (unregistering): left promiscuous mode
Oct  1 13:10:44 np0005464891 NetworkManager[44940]: <info>  [1759338644.3053] device (tap7a26e0a1-31): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:44 np0005464891 ovn_controller[152409]: 2025-10-01T17:10:44Z|00285|binding|INFO|Releasing lport 7a26e0a1-31a3-4972-ae7d-6df86b28214c from this chassis (sb_readonly=0)
Oct  1 13:10:44 np0005464891 ovn_controller[152409]: 2025-10-01T17:10:44Z|00286|binding|INFO|Setting lport 7a26e0a1-31a3-4972-ae7d-6df86b28214c down in Southbound
Oct  1 13:10:44 np0005464891 ovn_controller[152409]: 2025-10-01T17:10:44Z|00287|binding|INFO|Removing iface tap7a26e0a1-31 ovn-installed in OVS
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:44.323 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:22:df 10.100.0.9'], port_security=['fa:16:3e:7e:22:df 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e655026a-cec2-4cc0-97ae-6bde056da6fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d747029d-7cd7-4e92-a356-867cacbb54c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7101f2ff48f540a08f6ec15b324152c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd4fc8115-d40c-458e-b5f5-c46a5e06662d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ce41f0a-b119-4c05-a337-15f2fa115c91, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=7a26e0a1-31a3-4972-ae7d-6df86b28214c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:10:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:44.324 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 7a26e0a1-31a3-4972-ae7d-6df86b28214c in datapath d747029d-7cd7-4e92-a356-867cacbb54c4 unbound from our chassis#033[00m
Oct  1 13:10:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:44.325 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d747029d-7cd7-4e92-a356-867cacbb54c4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 13:10:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:44.326 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[1a673229-1444-405d-870d-93c40f89346d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:10:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:44.327 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4 namespace which is not needed anymore#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:44 np0005464891 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Oct  1 13:10:44 np0005464891 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Consumed 16.036s CPU time.
Oct  1 13:10:44 np0005464891 systemd-machined[214891]: Machine qemu-28-instance-0000001c terminated.
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.489 2 INFO nova.virt.libvirt.driver [-] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Instance destroyed successfully.#033[00m
Oct  1 13:10:44 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[311415]: [NOTICE]   (311419) : haproxy version is 2.8.14-c23fe91
Oct  1 13:10:44 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[311415]: [NOTICE]   (311419) : path to executable is /usr/sbin/haproxy
Oct  1 13:10:44 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[311415]: [WARNING]  (311419) : Exiting Master process...
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.491 2 DEBUG nova.objects.instance [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lazy-loading 'resources' on Instance uuid e655026a-cec2-4cc0-97ae-6bde056da6fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 13:10:44 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[311415]: [ALERT]    (311419) : Current worker (311421) exited with code 143 (Terminated)
Oct  1 13:10:44 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[311415]: [WARNING]  (311419) : All workers exited. Exiting... (0)
Oct  1 13:10:44 np0005464891 systemd[1]: libpod-83a9e9b8398fc662e86fc54cdd8ad525003e0acd0303105a1b523e0f44ba2498.scope: Deactivated successfully.
Oct  1 13:10:44 np0005464891 podman[312470]: 2025-10-01 17:10:44.502009681 +0000 UTC m=+0.058097895 container died 83a9e9b8398fc662e86fc54cdd8ad525003e0acd0303105a1b523e0f44ba2498 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.505 2 DEBUG nova.virt.libvirt.vif [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T17:09:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-497211438',display_name='tempest-TransferEncryptedVolumeTest-server-497211438',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-497211438',id=28,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDg5DQ7zjtFPcwFlQ5c4RmSqdiymCwHuuIH20+rbjv/O1v35DyytGl6//xUvotUS7Kzw36qLhLq5I09Wysu4SY0CkP602jXi/K2rnz98jI+qtsF54Xtpb6f0pP7J8Fn4NQ==',key_name='tempest-TransferEncryptedVolumeTest-797389342',keypairs=<?>,launch_index=0,launched_at=2025-10-01T17:10:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7101f2ff48f540a08f6ec15b324152c6',ramdisk_id='',reservation_id='r-9qrziz1k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1550217158',owner_user_name='tempest-TransferEncryptedVolumeTest-1550217158-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T17:10:07Z,user_data=None,user_id='c440275c1a1e4cf09fcf789374345bb2',uuid=e655026a-cec2-4cc0-97ae-6bde056da6fb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7a26e0a1-31a3-4972-ae7d-6df86b28214c", "address": "fa:16:3e:7e:22:df", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a26e0a1-31", "ovs_interfaceid": "7a26e0a1-31a3-4972-ae7d-6df86b28214c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.506 2 DEBUG nova.network.os_vif_util [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Converting VIF {"id": "7a26e0a1-31a3-4972-ae7d-6df86b28214c", "address": "fa:16:3e:7e:22:df", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a26e0a1-31", "ovs_interfaceid": "7a26e0a1-31a3-4972-ae7d-6df86b28214c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.506 2 DEBUG nova.network.os_vif_util [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7e:22:df,bridge_name='br-int',has_traffic_filtering=True,id=7a26e0a1-31a3-4972-ae7d-6df86b28214c,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a26e0a1-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.507 2 DEBUG os_vif [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:22:df,bridge_name='br-int',has_traffic_filtering=True,id=7a26e0a1-31a3-4972-ae7d-6df86b28214c,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a26e0a1-31') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.510 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a26e0a1-31, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.515 2 INFO os_vif [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:22:df,bridge_name='br-int',has_traffic_filtering=True,id=7a26e0a1-31a3-4972-ae7d-6df86b28214c,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a26e0a1-31')#033[00m
Oct  1 13:10:44 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-83a9e9b8398fc662e86fc54cdd8ad525003e0acd0303105a1b523e0f44ba2498-userdata-shm.mount: Deactivated successfully.
Oct  1 13:10:44 np0005464891 systemd[1]: var-lib-containers-storage-overlay-c614e0c306fd157051f4d5b5f22037990a009baaf2e837dc7c42d9af115dff7e-merged.mount: Deactivated successfully.
Oct  1 13:10:44 np0005464891 podman[312470]: 2025-10-01 17:10:44.549502113 +0000 UTC m=+0.105590327 container cleanup 83a9e9b8398fc662e86fc54cdd8ad525003e0acd0303105a1b523e0f44ba2498 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:10:44 np0005464891 systemd[1]: libpod-conmon-83a9e9b8398fc662e86fc54cdd8ad525003e0acd0303105a1b523e0f44ba2498.scope: Deactivated successfully.
Oct  1 13:10:44 np0005464891 podman[312523]: 2025-10-01 17:10:44.619128706 +0000 UTC m=+0.042476745 container remove 83a9e9b8398fc662e86fc54cdd8ad525003e0acd0303105a1b523e0f44ba2498 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 13:10:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:44.624 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[c4cf839b-6e89-41a8-8b50-5cf025493746]: (4, ('Wed Oct  1 05:10:44 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4 (83a9e9b8398fc662e86fc54cdd8ad525003e0acd0303105a1b523e0f44ba2498)\n83a9e9b8398fc662e86fc54cdd8ad525003e0acd0303105a1b523e0f44ba2498\nWed Oct  1 05:10:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4 (83a9e9b8398fc662e86fc54cdd8ad525003e0acd0303105a1b523e0f44ba2498)\n83a9e9b8398fc662e86fc54cdd8ad525003e0acd0303105a1b523e0f44ba2498\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:10:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:44.625 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[359f6180-f7fb-4218-a591-e8c6cdca077d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:10:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:44.626 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd747029d-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:10:44 np0005464891 kernel: tapd747029d-70: left promiscuous mode
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:44.645 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[cf98fd68-4830-4a84-989d-6d614bed3561]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:10:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:44.671 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[b3da2de8-ea46-4b20-9e2d-c24c681b8476]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:10:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:44.673 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[197b79c5-9bf3-401f-921a-174e79048f86]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:10:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:44.690 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[718b527a-4b75-4a3c-946b-f060ef352030]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555288, 'reachable_time': 34499, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312541, 'error': None, 'target': 'ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:10:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:44.692 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 13:10:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:44.692 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[dc101a76-3ebb-41d3-9bd7-fa0bccdfd6b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:10:44 np0005464891 systemd[1]: run-netns-ovnmeta\x2dd747029d\x2d7cd7\x2d4e92\x2da356\x2d867cacbb54c4.mount: Deactivated successfully.
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.736 2 INFO nova.virt.libvirt.driver [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Deleting instance files /var/lib/nova/instances/e655026a-cec2-4cc0-97ae-6bde056da6fb_del#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.737 2 INFO nova.virt.libvirt.driver [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Deletion of /var/lib/nova/instances/e655026a-cec2-4cc0-97ae-6bde056da6fb_del complete#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.815 2 INFO nova.compute.manager [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Took 0.57 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.816 2 DEBUG oslo.service.loopingcall [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.817 2 DEBUG nova.compute.manager [-] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.817 2 DEBUG nova.network.neutron [-] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.881 2 DEBUG nova.compute.manager [req-f33f7209-8c71-45de-8d2d-82ef33659bbe req-edebed4f-62fa-4b41-ac11-329aac6e9873 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Received event network-vif-unplugged-7a26e0a1-31a3-4972-ae7d-6df86b28214c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.881 2 DEBUG oslo_concurrency.lockutils [req-f33f7209-8c71-45de-8d2d-82ef33659bbe req-edebed4f-62fa-4b41-ac11-329aac6e9873 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "e655026a-cec2-4cc0-97ae-6bde056da6fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.882 2 DEBUG oslo_concurrency.lockutils [req-f33f7209-8c71-45de-8d2d-82ef33659bbe req-edebed4f-62fa-4b41-ac11-329aac6e9873 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "e655026a-cec2-4cc0-97ae-6bde056da6fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.882 2 DEBUG oslo_concurrency.lockutils [req-f33f7209-8c71-45de-8d2d-82ef33659bbe req-edebed4f-62fa-4b41-ac11-329aac6e9873 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "e655026a-cec2-4cc0-97ae-6bde056da6fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.882 2 DEBUG nova.compute.manager [req-f33f7209-8c71-45de-8d2d-82ef33659bbe req-edebed4f-62fa-4b41-ac11-329aac6e9873 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] No waiting events found dispatching network-vif-unplugged-7a26e0a1-31a3-4972-ae7d-6df86b28214c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.883 2 DEBUG nova.compute.manager [req-f33f7209-8c71-45de-8d2d-82ef33659bbe req-edebed4f-62fa-4b41-ac11-329aac6e9873 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Received event network-vif-unplugged-7a26e0a1-31a3-4972-ae7d-6df86b28214c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 13:10:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:44.945 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:10:44 np0005464891 nova_compute[259907]: 2025-10-01 17:10:44.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:44.946 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 13:10:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:10:44.947 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:10:45 np0005464891 nova_compute[259907]: 2025-10-01 17:10:45.575 2 DEBUG nova.network.neutron [-] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:10:45 np0005464891 nova_compute[259907]: 2025-10-01 17:10:45.616 2 INFO nova.compute.manager [-] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Took 0.80 seconds to deallocate network for instance.#033[00m
Oct  1 13:10:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2190: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s wr, 1 op/s
Oct  1 13:10:45 np0005464891 nova_compute[259907]: 2025-10-01 17:10:45.658 2 DEBUG nova.compute.manager [req-3fe8ea46-0d99-44be-93b9-3ef592cd371d req-05ae80ec-5054-4531-8994-ee1161553653 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Received event network-vif-deleted-7a26e0a1-31a3-4972-ae7d-6df86b28214c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:10:45 np0005464891 nova_compute[259907]: 2025-10-01 17:10:45.777 2 INFO nova.compute.manager [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Took 0.16 seconds to detach 1 volumes for instance.#033[00m
Oct  1 13:10:45 np0005464891 nova_compute[259907]: 2025-10-01 17:10:45.820 2 DEBUG oslo_concurrency.lockutils [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:10:45 np0005464891 nova_compute[259907]: 2025-10-01 17:10:45.820 2 DEBUG oslo_concurrency.lockutils [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:10:45 np0005464891 nova_compute[259907]: 2025-10-01 17:10:45.936 2 DEBUG oslo_concurrency.processutils [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:10:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:10:46 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1444591267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:10:46 np0005464891 nova_compute[259907]: 2025-10-01 17:10:46.472 2 DEBUG oslo_concurrency.processutils [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:10:46 np0005464891 nova_compute[259907]: 2025-10-01 17:10:46.478 2 DEBUG nova.compute.provider_tree [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:10:46 np0005464891 nova_compute[259907]: 2025-10-01 17:10:46.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:10:47 np0005464891 nova_compute[259907]: 2025-10-01 17:10:47.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2191: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 11 KiB/s wr, 3 op/s
Oct  1 13:10:48 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Oct  1 13:10:48 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:10:48.725172) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 13:10:48 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Oct  1 13:10:48 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338648725419, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 2099, "num_deletes": 257, "total_data_size": 3407771, "memory_usage": 3478032, "flush_reason": "Manual Compaction"}
Oct  1 13:10:48 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Oct  1 13:10:48 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338648886616, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 3340356, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41728, "largest_seqno": 43825, "table_properties": {"data_size": 3330831, "index_size": 6019, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19806, "raw_average_key_size": 20, "raw_value_size": 3311615, "raw_average_value_size": 3428, "num_data_blocks": 266, "num_entries": 966, "num_filter_entries": 966, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759338420, "oldest_key_time": 1759338420, "file_creation_time": 1759338648, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Oct  1 13:10:48 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 161522 microseconds, and 6747 cpu microseconds.
Oct  1 13:10:48 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 13:10:48 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:10:48.886684) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 3340356 bytes OK
Oct  1 13:10:48 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:10:48.886713) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Oct  1 13:10:48 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:10:48.982899) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Oct  1 13:10:48 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:10:48.982969) EVENT_LOG_v1 {"time_micros": 1759338648982954, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 13:10:48 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:10:48.983002) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 13:10:48 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 3398917, prev total WAL file size 3398917, number of live WAL files 2.
Oct  1 13:10:48 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:10:48 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:10:48.984712) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Oct  1 13:10:48 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 13:10:48 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(3262KB)], [89(10161KB)]
Oct  1 13:10:48 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338648984811, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 13746212, "oldest_snapshot_seqno": -1}
Oct  1 13:10:49 np0005464891 nova_compute[259907]: 2025-10-01 17:10:49.148 2 DEBUG nova.scheduler.client.report [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:10:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:10:49 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 7490 keys, 11997602 bytes, temperature: kUnknown
Oct  1 13:10:49 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338649329811, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 11997602, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11941481, "index_size": 36297, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18757, "raw_key_size": 189775, "raw_average_key_size": 25, "raw_value_size": 11801207, "raw_average_value_size": 1575, "num_data_blocks": 1438, "num_entries": 7490, "num_filter_entries": 7490, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759338648, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Oct  1 13:10:49 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 13:10:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:10:49.330335) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 11997602 bytes
Oct  1 13:10:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:10:49.376568) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 39.8 rd, 34.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 9.9 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 8023, records dropped: 533 output_compression: NoCompression
Oct  1 13:10:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:10:49.376636) EVENT_LOG_v1 {"time_micros": 1759338649376611, "job": 52, "event": "compaction_finished", "compaction_time_micros": 345188, "compaction_time_cpu_micros": 55667, "output_level": 6, "num_output_files": 1, "total_output_size": 11997602, "num_input_records": 8023, "num_output_records": 7490, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 13:10:49 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:10:49 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338649378587, "job": 52, "event": "table_file_deletion", "file_number": 91}
Oct  1 13:10:49 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:10:49 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338649382994, "job": 52, "event": "table_file_deletion", "file_number": 89}
Oct  1 13:10:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:10:48.984520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:10:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:10:49.383164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:10:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:10:49.383173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:10:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:10:49.383175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:10:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:10:49.383178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:10:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:10:49.383180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:10:49 np0005464891 nova_compute[259907]: 2025-10-01 17:10:49.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:49 np0005464891 nova_compute[259907]: 2025-10-01 17:10:49.620 2 DEBUG nova.compute.manager [req-8b5ce6ee-45bd-413f-89f5-6bdfda1eafcd req-7c234d9a-0a67-4ac9-b414-100c926a9674 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Received event network-vif-plugged-7a26e0a1-31a3-4972-ae7d-6df86b28214c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:10:49 np0005464891 nova_compute[259907]: 2025-10-01 17:10:49.621 2 DEBUG oslo_concurrency.lockutils [req-8b5ce6ee-45bd-413f-89f5-6bdfda1eafcd req-7c234d9a-0a67-4ac9-b414-100c926a9674 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "e655026a-cec2-4cc0-97ae-6bde056da6fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:10:49 np0005464891 nova_compute[259907]: 2025-10-01 17:10:49.621 2 DEBUG oslo_concurrency.lockutils [req-8b5ce6ee-45bd-413f-89f5-6bdfda1eafcd req-7c234d9a-0a67-4ac9-b414-100c926a9674 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "e655026a-cec2-4cc0-97ae-6bde056da6fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:10:49 np0005464891 nova_compute[259907]: 2025-10-01 17:10:49.621 2 DEBUG oslo_concurrency.lockutils [req-8b5ce6ee-45bd-413f-89f5-6bdfda1eafcd req-7c234d9a-0a67-4ac9-b414-100c926a9674 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "e655026a-cec2-4cc0-97ae-6bde056da6fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:10:49 np0005464891 nova_compute[259907]: 2025-10-01 17:10:49.621 2 DEBUG nova.compute.manager [req-8b5ce6ee-45bd-413f-89f5-6bdfda1eafcd req-7c234d9a-0a67-4ac9-b414-100c926a9674 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] No waiting events found dispatching network-vif-plugged-7a26e0a1-31a3-4972-ae7d-6df86b28214c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 13:10:49 np0005464891 nova_compute[259907]: 2025-10-01 17:10:49.622 2 WARNING nova.compute.manager [req-8b5ce6ee-45bd-413f-89f5-6bdfda1eafcd req-7c234d9a-0a67-4ac9-b414-100c926a9674 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Received unexpected event network-vif-plugged-7a26e0a1-31a3-4972-ae7d-6df86b28214c for instance with vm_state deleted and task_state None.#033[00m
Oct  1 13:10:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2192: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 6.2 KiB/s wr, 19 op/s
Oct  1 13:10:49 np0005464891 nova_compute[259907]: 2025-10-01 17:10:49.628 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:10:50 np0005464891 nova_compute[259907]: 2025-10-01 17:10:50.294 2 DEBUG oslo_concurrency.lockutils [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 4.473s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:10:50 np0005464891 nova_compute[259907]: 2025-10-01 17:10:50.297 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:10:50 np0005464891 nova_compute[259907]: 2025-10-01 17:10:50.297 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:10:50 np0005464891 nova_compute[259907]: 2025-10-01 17:10:50.297 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 13:10:50 np0005464891 nova_compute[259907]: 2025-10-01 17:10:50.297 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:10:50 np0005464891 nova_compute[259907]: 2025-10-01 17:10:50.438 2 INFO nova.scheduler.client.report [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Deleted allocations for instance e655026a-cec2-4cc0-97ae-6bde056da6fb#033[00m
Oct  1 13:10:50 np0005464891 nova_compute[259907]: 2025-10-01 17:10:50.669 2 DEBUG oslo_concurrency.lockutils [None req-19fd5ff6-5b1f-4fd5-bf6e-c5b971e19df1 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "e655026a-cec2-4cc0-97ae-6bde056da6fb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:10:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:10:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3344683205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:10:50 np0005464891 nova_compute[259907]: 2025-10-01 17:10:50.799 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:10:50 np0005464891 nova_compute[259907]: 2025-10-01 17:10:50.955 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 13:10:50 np0005464891 nova_compute[259907]: 2025-10-01 17:10:50.956 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4331MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 13:10:50 np0005464891 nova_compute[259907]: 2025-10-01 17:10:50.956 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:10:50 np0005464891 nova_compute[259907]: 2025-10-01 17:10:50.956 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:10:51 np0005464891 nova_compute[259907]: 2025-10-01 17:10:51.009 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 13:10:51 np0005464891 nova_compute[259907]: 2025-10-01 17:10:51.010 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 13:10:51 np0005464891 nova_compute[259907]: 2025-10-01 17:10:51.040 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:10:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:10:51 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3846659212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:10:51 np0005464891 nova_compute[259907]: 2025-10-01 17:10:51.485 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:10:51 np0005464891 nova_compute[259907]: 2025-10-01 17:10:51.491 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:10:51 np0005464891 nova_compute[259907]: 2025-10-01 17:10:51.525 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:10:51 np0005464891 nova_compute[259907]: 2025-10-01 17:10:51.568 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 13:10:51 np0005464891 nova_compute[259907]: 2025-10-01 17:10:51.568 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:10:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2193: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 597 B/s wr, 18 op/s
Oct  1 13:10:52 np0005464891 nova_compute[259907]: 2025-10-01 17:10:52.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:52 np0005464891 nova_compute[259907]: 2025-10-01 17:10:52.567 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:10:52 np0005464891 nova_compute[259907]: 2025-10-01 17:10:52.568 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:10:52 np0005464891 nova_compute[259907]: 2025-10-01 17:10:52.568 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 13:10:52 np0005464891 nova_compute[259907]: 2025-10-01 17:10:52.569 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 13:10:52 np0005464891 nova_compute[259907]: 2025-10-01 17:10:52.582 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 13:10:52 np0005464891 nova_compute[259907]: 2025-10-01 17:10:52.583 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:10:52 np0005464891 nova_compute[259907]: 2025-10-01 17:10:52.583 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:10:52 np0005464891 nova_compute[259907]: 2025-10-01 17:10:52.584 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:10:52 np0005464891 nova_compute[259907]: 2025-10-01 17:10:52.584 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 13:10:52 np0005464891 nova_compute[259907]: 2025-10-01 17:10:52.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:10:52 np0005464891 nova_compute[259907]: 2025-10-01 17:10:52.806 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:10:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2194: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 597 B/s wr, 18 op/s
Oct  1 13:10:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:10:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:10:54 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4189919949' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:10:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:10:54 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4189919949' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:10:54 np0005464891 nova_compute[259907]: 2025-10-01 17:10:54.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2195: 321 pgs: 321 active+clean; 453 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 596 B/s wr, 18 op/s
Oct  1 13:10:55 np0005464891 nova_compute[259907]: 2025-10-01 17:10:55.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:10:55 np0005464891 nova_compute[259907]: 2025-10-01 17:10:55.805 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  1 13:10:55 np0005464891 nova_compute[259907]: 2025-10-01 17:10:55.820 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  1 13:10:57 np0005464891 nova_compute[259907]: 2025-10-01 17:10:57.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2196: 321 pgs: 321 active+clean; 385 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 852 B/s wr, 30 op/s
Oct  1 13:10:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:10:59 np0005464891 nova_compute[259907]: 2025-10-01 17:10:59.487 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759338644.4863026, e655026a-cec2-4cc0-97ae-6bde056da6fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 13:10:59 np0005464891 nova_compute[259907]: 2025-10-01 17:10:59.487 2 INFO nova.compute.manager [-] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] VM Stopped (Lifecycle Event)#033[00m
Oct  1 13:10:59 np0005464891 nova_compute[259907]: 2025-10-01 17:10:59.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:10:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2197: 321 pgs: 321 active+clean; 271 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 183 KiB/s rd, 1.2 KiB/s wr, 34 op/s
Oct  1 13:10:59 np0005464891 nova_compute[259907]: 2025-10-01 17:10:59.715 2 DEBUG nova.compute.manager [None req-3eaba939-13c3-4105-8570-e91cbf2731aa - - - - - -] [instance: e655026a-cec2-4cc0-97ae-6bde056da6fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 13:10:59 np0005464891 nova_compute[259907]: 2025-10-01 17:10:59.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:11:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2198: 321 pgs: 321 active+clean; 271 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 596 B/s wr, 18 op/s
Oct  1 13:11:02 np0005464891 nova_compute[259907]: 2025-10-01 17:11:02.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:02 np0005464891 podman[312612]: 2025-10-01 17:11:02.946209773 +0000 UTC m=+0.057870810 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  1 13:11:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2199: 321 pgs: 321 active+clean; 271 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 597 B/s wr, 18 op/s
Oct  1 13:11:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:11:04 np0005464891 nova_compute[259907]: 2025-10-01 17:11:04.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2200: 321 pgs: 321 active+clean; 271 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 596 B/s wr, 18 op/s
Oct  1 13:11:06 np0005464891 podman[312632]: 2025-10-01 17:11:06.003333901 +0000 UTC m=+0.098101060 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 13:11:06 np0005464891 nova_compute[259907]: 2025-10-01 17:11:06.040 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:11:06 np0005464891 nova_compute[259907]: 2025-10-01 17:11:06.040 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  1 13:11:07 np0005464891 nova_compute[259907]: 2025-10-01 17:11:07.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2201: 321 pgs: 321 active+clean; 271 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 597 B/s wr, 18 op/s
Oct  1 13:11:08 np0005464891 podman[312659]: 2025-10-01 17:11:08.946396171 +0000 UTC m=+0.060922553 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  1 13:11:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:11:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2202: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 426 B/s wr, 13 op/s
Oct  1 13:11:09 np0005464891 nova_compute[259907]: 2025-10-01 17:11:09.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2203: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct  1 13:11:11 np0005464891 podman[312679]: 2025-10-01 17:11:11.971670842 +0000 UTC m=+0.080030502 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  1 13:11:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:11:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:11:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:11:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:11:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:11:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:11:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_17:11:12
Oct  1 13:11:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 13:11:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 13:11:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['.mgr', 'volumes', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'images', 'default.rgw.log', 'default.rgw.meta']
Oct  1 13:11:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 13:11:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:12.471 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:11:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:12.471 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:11:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:12.471 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:11:12 np0005464891 nova_compute[259907]: 2025-10-01 17:11:12.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 13:11:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 13:11:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:11:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:11:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:11:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:11:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:11:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:11:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:11:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:11:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2204: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 511 B/s wr, 10 op/s
Oct  1 13:11:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:11:14 np0005464891 nova_compute[259907]: 2025-10-01 17:11:14.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:15 np0005464891 ovn_controller[152409]: 2025-10-01T17:11:15Z|00288|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Oct  1 13:11:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2205: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 511 B/s wr, 10 op/s
Oct  1 13:11:16 np0005464891 podman[312869]: 2025-10-01 17:11:16.550446456 +0000 UTC m=+0.157607244 container exec 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 13:11:16 np0005464891 podman[312869]: 2025-10-01 17:11:16.642735745 +0000 UTC m=+0.249896533 container exec_died 154be41beae48f9129350e215e3193089abfafbc8b9b6eeb74c512e8fbe52a8e (image=quay.io/ceph/ceph:v18, name=ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:11:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 13:11:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:11:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 13:11:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:11:17 np0005464891 nova_compute[259907]: 2025-10-01 17:11:17.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2206: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 596 B/s wr, 11 op/s
Oct  1 13:11:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:11:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:11:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 13:11:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:11:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 13:11:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:11:17 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 2e3e1f3b-83a0-4760-8cab-24929278eaee does not exist
Oct  1 13:11:17 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev b413ba92-042c-4b73-8beb-e16fff6199d1 does not exist
Oct  1 13:11:17 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev c74eb584-af6a-49d0-a89a-4979b6ea9e5a does not exist
Oct  1 13:11:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 13:11:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 13:11:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 13:11:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:11:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:11:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:11:18 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:11:18 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:11:18 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:11:18 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:11:18 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:11:18 np0005464891 podman[313290]: 2025-10-01 17:11:18.592874972 +0000 UTC m=+0.107051987 container create f942bc36f8d81bc68e0cbe81e0a8da7887561bb8e66edabf18d81c0d4365dfa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bose, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:11:18 np0005464891 podman[313290]: 2025-10-01 17:11:18.518731075 +0000 UTC m=+0.032908120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:11:18 np0005464891 systemd[1]: Started libpod-conmon-f942bc36f8d81bc68e0cbe81e0a8da7887561bb8e66edabf18d81c0d4365dfa3.scope.
Oct  1 13:11:18 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:11:18 np0005464891 podman[313290]: 2025-10-01 17:11:18.929386176 +0000 UTC m=+0.443563191 container init f942bc36f8d81bc68e0cbe81e0a8da7887561bb8e66edabf18d81c0d4365dfa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bose, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 13:11:18 np0005464891 podman[313290]: 2025-10-01 17:11:18.937336876 +0000 UTC m=+0.451513891 container start f942bc36f8d81bc68e0cbe81e0a8da7887561bb8e66edabf18d81c0d4365dfa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:11:18 np0005464891 nostalgic_bose[313306]: 167 167
Oct  1 13:11:18 np0005464891 systemd[1]: libpod-f942bc36f8d81bc68e0cbe81e0a8da7887561bb8e66edabf18d81c0d4365dfa3.scope: Deactivated successfully.
Oct  1 13:11:19 np0005464891 podman[313290]: 2025-10-01 17:11:19.058651876 +0000 UTC m=+0.572828911 container attach f942bc36f8d81bc68e0cbe81e0a8da7887561bb8e66edabf18d81c0d4365dfa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bose, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  1 13:11:19 np0005464891 podman[313290]: 2025-10-01 17:11:19.059658514 +0000 UTC m=+0.573835539 container died f942bc36f8d81bc68e0cbe81e0a8da7887561bb8e66edabf18d81c0d4365dfa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bose, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:11:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:11:19 np0005464891 systemd[1]: var-lib-containers-storage-overlay-e842c77863f251e736d4f347c40370b54f699c8da51d9a8b2d2564ea93963d0e-merged.mount: Deactivated successfully.
Oct  1 13:11:19 np0005464891 podman[313290]: 2025-10-01 17:11:19.5570341 +0000 UTC m=+1.071211135 container remove f942bc36f8d81bc68e0cbe81e0a8da7887561bb8e66edabf18d81c0d4365dfa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bose, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:11:19 np0005464891 systemd[1]: libpod-conmon-f942bc36f8d81bc68e0cbe81e0a8da7887561bb8e66edabf18d81c0d4365dfa3.scope: Deactivated successfully.
Oct  1 13:11:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2207: 321 pgs: 321 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Oct  1 13:11:19 np0005464891 nova_compute[259907]: 2025-10-01 17:11:19.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:19 np0005464891 podman[313330]: 2025-10-01 17:11:19.701394427 +0000 UTC m=+0.022600535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:11:19 np0005464891 podman[313330]: 2025-10-01 17:11:19.862556608 +0000 UTC m=+0.183762726 container create 99697c21703023d303f024f8858cf1a83f976005aa3c57797c0349258058c073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  1 13:11:20 np0005464891 systemd[1]: Started libpod-conmon-99697c21703023d303f024f8858cf1a83f976005aa3c57797c0349258058c073.scope.
Oct  1 13:11:20 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:11:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9fea7f3ed1cd0caf63cc87514d3e8a6615b0b0f6188d78cc33b150c9c3a1437/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:11:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9fea7f3ed1cd0caf63cc87514d3e8a6615b0b0f6188d78cc33b150c9c3a1437/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:11:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9fea7f3ed1cd0caf63cc87514d3e8a6615b0b0f6188d78cc33b150c9c3a1437/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:11:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9fea7f3ed1cd0caf63cc87514d3e8a6615b0b0f6188d78cc33b150c9c3a1437/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:11:20 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9fea7f3ed1cd0caf63cc87514d3e8a6615b0b0f6188d78cc33b150c9c3a1437/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 13:11:20 np0005464891 podman[313330]: 2025-10-01 17:11:20.141642225 +0000 UTC m=+0.462848363 container init 99697c21703023d303f024f8858cf1a83f976005aa3c57797c0349258058c073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:11:20 np0005464891 podman[313330]: 2025-10-01 17:11:20.157125323 +0000 UTC m=+0.478331481 container start 99697c21703023d303f024f8858cf1a83f976005aa3c57797c0349258058c073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mcnulty, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:11:20 np0005464891 podman[313330]: 2025-10-01 17:11:20.16353068 +0000 UTC m=+0.484736808 container attach 99697c21703023d303f024f8858cf1a83f976005aa3c57797c0349258058c073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:11:21 np0005464891 hopeful_mcnulty[313346]: --> passed data devices: 0 physical, 3 LVM
Oct  1 13:11:21 np0005464891 hopeful_mcnulty[313346]: --> relative data size: 1.0
Oct  1 13:11:21 np0005464891 hopeful_mcnulty[313346]: --> All data devices are unavailable
Oct  1 13:11:21 np0005464891 systemd[1]: libpod-99697c21703023d303f024f8858cf1a83f976005aa3c57797c0349258058c073.scope: Deactivated successfully.
Oct  1 13:11:21 np0005464891 systemd[1]: libpod-99697c21703023d303f024f8858cf1a83f976005aa3c57797c0349258058c073.scope: Consumed 1.026s CPU time.
Oct  1 13:11:21 np0005464891 podman[313330]: 2025-10-01 17:11:21.246723145 +0000 UTC m=+1.567929223 container died 99697c21703023d303f024f8858cf1a83f976005aa3c57797c0349258058c073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:11:21 np0005464891 systemd[1]: var-lib-containers-storage-overlay-b9fea7f3ed1cd0caf63cc87514d3e8a6615b0b0f6188d78cc33b150c9c3a1437-merged.mount: Deactivated successfully.
Oct  1 13:11:21 np0005464891 podman[313330]: 2025-10-01 17:11:21.31063301 +0000 UTC m=+1.631839098 container remove 99697c21703023d303f024f8858cf1a83f976005aa3c57797c0349258058c073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 13:11:21 np0005464891 systemd[1]: libpod-conmon-99697c21703023d303f024f8858cf1a83f976005aa3c57797c0349258058c073.scope: Deactivated successfully.
Oct  1 13:11:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2208: 321 pgs: 321 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 381 KiB/s rd, 22 KiB/s wr, 5 op/s
Oct  1 13:11:21 np0005464891 podman[313526]: 2025-10-01 17:11:21.917617104 +0000 UTC m=+0.048931083 container create 2c8173c09fb2d4c31348d8163867af8d9e132bb41dac4c581d28c2db2871e42d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hugle, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 13:11:21 np0005464891 systemd[1]: Started libpod-conmon-2c8173c09fb2d4c31348d8163867af8d9e132bb41dac4c581d28c2db2871e42d.scope.
Oct  1 13:11:21 np0005464891 podman[313526]: 2025-10-01 17:11:21.893735564 +0000 UTC m=+0.025049523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:11:21 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:11:22 np0005464891 podman[313526]: 2025-10-01 17:11:22.012213446 +0000 UTC m=+0.143527385 container init 2c8173c09fb2d4c31348d8163867af8d9e132bb41dac4c581d28c2db2871e42d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hugle, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 13:11:22 np0005464891 podman[313526]: 2025-10-01 17:11:22.020083573 +0000 UTC m=+0.151397512 container start 2c8173c09fb2d4c31348d8163867af8d9e132bb41dac4c581d28c2db2871e42d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hugle, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  1 13:11:22 np0005464891 podman[313526]: 2025-10-01 17:11:22.023142338 +0000 UTC m=+0.154456367 container attach 2c8173c09fb2d4c31348d8163867af8d9e132bb41dac4c581d28c2db2871e42d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hugle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:11:22 np0005464891 objective_hugle[313542]: 167 167
Oct  1 13:11:22 np0005464891 systemd[1]: libpod-2c8173c09fb2d4c31348d8163867af8d9e132bb41dac4c581d28c2db2871e42d.scope: Deactivated successfully.
Oct  1 13:11:22 np0005464891 podman[313526]: 2025-10-01 17:11:22.027333564 +0000 UTC m=+0.158647503 container died 2c8173c09fb2d4c31348d8163867af8d9e132bb41dac4c581d28c2db2871e42d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hugle, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 13:11:22 np0005464891 systemd[1]: var-lib-containers-storage-overlay-8165661b2d8b8263c56b3f650b153420b97b110283cfdde8fc1e45aaef309655-merged.mount: Deactivated successfully.
Oct  1 13:11:22 np0005464891 podman[313526]: 2025-10-01 17:11:22.069433666 +0000 UTC m=+0.200747605 container remove 2c8173c09fb2d4c31348d8163867af8d9e132bb41dac4c581d28c2db2871e42d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hugle, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  1 13:11:22 np0005464891 systemd[1]: libpod-conmon-2c8173c09fb2d4c31348d8163867af8d9e132bb41dac4c581d28c2db2871e42d.scope: Deactivated successfully.
Oct  1 13:11:22 np0005464891 podman[313564]: 2025-10-01 17:11:22.247503054 +0000 UTC m=+0.063999968 container create 9d3b2ba17c3d2fdbdb68ff2d799ae2d11f9de07e9577b1d63a392ca8a04d451f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hawking, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 13:11:22 np0005464891 podman[313564]: 2025-10-01 17:11:22.20427228 +0000 UTC m=+0.020769214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:11:22 np0005464891 systemd[1]: Started libpod-conmon-9d3b2ba17c3d2fdbdb68ff2d799ae2d11f9de07e9577b1d63a392ca8a04d451f.scope.
Oct  1 13:11:22 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:11:22 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70d40f7222af56a963df784e5d8d5c0acd7cf908f43c2abef583930d38835c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:11:22 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70d40f7222af56a963df784e5d8d5c0acd7cf908f43c2abef583930d38835c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:11:22 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70d40f7222af56a963df784e5d8d5c0acd7cf908f43c2abef583930d38835c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:11:22 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70d40f7222af56a963df784e5d8d5c0acd7cf908f43c2abef583930d38835c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:11:22 np0005464891 podman[313564]: 2025-10-01 17:11:22.37118244 +0000 UTC m=+0.187679384 container init 9d3b2ba17c3d2fdbdb68ff2d799ae2d11f9de07e9577b1d63a392ca8a04d451f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hawking, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:11:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 13:11:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:11:22 np0005464891 podman[313564]: 2025-10-01 17:11:22.381194686 +0000 UTC m=+0.197691600 container start 9d3b2ba17c3d2fdbdb68ff2d799ae2d11f9de07e9577b1d63a392ca8a04d451f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hawking, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 13:11:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 13:11:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:11:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 13:11:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:11:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0028986552345835774 of space, bias 1.0, pg target 0.8695965703750732 quantized to 32 (current 32)
Oct  1 13:11:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:11:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 13:11:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:11:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct  1 13:11:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:11:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 13:11:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:11:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:11:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:11:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 13:11:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:11:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 13:11:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:11:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:11:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:11:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 13:11:22 np0005464891 podman[313564]: 2025-10-01 17:11:22.385269729 +0000 UTC m=+0.201766663 container attach 9d3b2ba17c3d2fdbdb68ff2d799ae2d11f9de07e9577b1d63a392ca8a04d451f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hawking, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 13:11:22 np0005464891 nova_compute[259907]: 2025-10-01 17:11:22.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]: {
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:    "0": [
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:        {
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "devices": [
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "/dev/loop3"
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            ],
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "lv_name": "ceph_lv0",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "lv_size": "21470642176",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "name": "ceph_lv0",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "tags": {
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.cluster_name": "ceph",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.crush_device_class": "",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.encrypted": "0",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.osd_id": "0",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.type": "block",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.vdo": "0"
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            },
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "type": "block",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "vg_name": "ceph_vg0"
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:        }
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:    ],
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:    "1": [
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:        {
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "devices": [
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "/dev/loop4"
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            ],
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "lv_name": "ceph_lv1",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "lv_size": "21470642176",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "name": "ceph_lv1",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "tags": {
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.cluster_name": "ceph",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.crush_device_class": "",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.encrypted": "0",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.osd_id": "1",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.type": "block",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.vdo": "0"
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            },
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "type": "block",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "vg_name": "ceph_vg1"
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:        }
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:    ],
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:    "2": [
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:        {
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "devices": [
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "/dev/loop5"
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            ],
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "lv_name": "ceph_lv2",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "lv_size": "21470642176",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "name": "ceph_lv2",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "tags": {
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.cluster_name": "ceph",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.crush_device_class": "",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.encrypted": "0",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.osd_id": "2",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.type": "block",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:                "ceph.vdo": "0"
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            },
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "type": "block",
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:            "vg_name": "ceph_vg2"
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:        }
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]:    ]
Oct  1 13:11:23 np0005464891 naughty_hawking[313580]: }
Oct  1 13:11:23 np0005464891 systemd[1]: libpod-9d3b2ba17c3d2fdbdb68ff2d799ae2d11f9de07e9577b1d63a392ca8a04d451f.scope: Deactivated successfully.
Oct  1 13:11:23 np0005464891 podman[313564]: 2025-10-01 17:11:23.191871836 +0000 UTC m=+1.008368740 container died 9d3b2ba17c3d2fdbdb68ff2d799ae2d11f9de07e9577b1d63a392ca8a04d451f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 13:11:23 np0005464891 systemd[1]: var-lib-containers-storage-overlay-e70d40f7222af56a963df784e5d8d5c0acd7cf908f43c2abef583930d38835c3-merged.mount: Deactivated successfully.
Oct  1 13:11:23 np0005464891 podman[313564]: 2025-10-01 17:11:23.249105696 +0000 UTC m=+1.065602610 container remove 9d3b2ba17c3d2fdbdb68ff2d799ae2d11f9de07e9577b1d63a392ca8a04d451f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hawking, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:11:23 np0005464891 systemd[1]: libpod-conmon-9d3b2ba17c3d2fdbdb68ff2d799ae2d11f9de07e9577b1d63a392ca8a04d451f.scope: Deactivated successfully.
Oct  1 13:11:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2209: 321 pgs: 321 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct  1 13:11:23 np0005464891 podman[313743]: 2025-10-01 17:11:23.888007301 +0000 UTC m=+0.041271340 container create f0855c7cc25dcb2aba67adfb0813eda3c9fb7cfeea382737a6df4b9873bbd047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hopper, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 13:11:23 np0005464891 systemd[1]: Started libpod-conmon-f0855c7cc25dcb2aba67adfb0813eda3c9fb7cfeea382737a6df4b9873bbd047.scope.
Oct  1 13:11:23 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:11:23 np0005464891 podman[313743]: 2025-10-01 17:11:23.868683957 +0000 UTC m=+0.021948026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:11:23 np0005464891 podman[313743]: 2025-10-01 17:11:23.971198618 +0000 UTC m=+0.124462667 container init f0855c7cc25dcb2aba67adfb0813eda3c9fb7cfeea382737a6df4b9873bbd047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hopper, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  1 13:11:23 np0005464891 podman[313743]: 2025-10-01 17:11:23.981881664 +0000 UTC m=+0.135145733 container start f0855c7cc25dcb2aba67adfb0813eda3c9fb7cfeea382737a6df4b9873bbd047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 13:11:23 np0005464891 eager_hopper[313759]: 167 167
Oct  1 13:11:23 np0005464891 podman[313743]: 2025-10-01 17:11:23.98572899 +0000 UTC m=+0.138993059 container attach f0855c7cc25dcb2aba67adfb0813eda3c9fb7cfeea382737a6df4b9873bbd047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hopper, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:11:23 np0005464891 systemd[1]: libpod-f0855c7cc25dcb2aba67adfb0813eda3c9fb7cfeea382737a6df4b9873bbd047.scope: Deactivated successfully.
Oct  1 13:11:23 np0005464891 podman[313743]: 2025-10-01 17:11:23.986567292 +0000 UTC m=+0.139831361 container died f0855c7cc25dcb2aba67adfb0813eda3c9fb7cfeea382737a6df4b9873bbd047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hopper, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:11:24 np0005464891 systemd[1]: var-lib-containers-storage-overlay-16fe58fc554857ad9df7187887843200fd2fe8292282957c62d882b6dcfc133b-merged.mount: Deactivated successfully.
Oct  1 13:11:24 np0005464891 podman[313743]: 2025-10-01 17:11:24.021788816 +0000 UTC m=+0.175052865 container remove f0855c7cc25dcb2aba67adfb0813eda3c9fb7cfeea382737a6df4b9873bbd047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 13:11:24 np0005464891 systemd[1]: libpod-conmon-f0855c7cc25dcb2aba67adfb0813eda3c9fb7cfeea382737a6df4b9873bbd047.scope: Deactivated successfully.
Oct  1 13:11:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:11:24 np0005464891 podman[313783]: 2025-10-01 17:11:24.195563545 +0000 UTC m=+0.046337551 container create 28efcc0058dbc9ba3d3469a2f35dff73c3a7939bd9e43a01dadc6809c7a82f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  1 13:11:24 np0005464891 systemd[1]: Started libpod-conmon-28efcc0058dbc9ba3d3469a2f35dff73c3a7939bd9e43a01dadc6809c7a82f16.scope.
Oct  1 13:11:24 np0005464891 podman[313783]: 2025-10-01 17:11:24.171611013 +0000 UTC m=+0.022385029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:11:24 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:11:24 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28ec1e0106f1e15d6a0f1603c9f499c6cbc3017ba4b6a97312c48ad1412072b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:11:24 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28ec1e0106f1e15d6a0f1603c9f499c6cbc3017ba4b6a97312c48ad1412072b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:11:24 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28ec1e0106f1e15d6a0f1603c9f499c6cbc3017ba4b6a97312c48ad1412072b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:11:24 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28ec1e0106f1e15d6a0f1603c9f499c6cbc3017ba4b6a97312c48ad1412072b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:11:24 np0005464891 podman[313783]: 2025-10-01 17:11:24.295916196 +0000 UTC m=+0.146690252 container init 28efcc0058dbc9ba3d3469a2f35dff73c3a7939bd9e43a01dadc6809c7a82f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bohr, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:11:24 np0005464891 podman[313783]: 2025-10-01 17:11:24.310630813 +0000 UTC m=+0.161404779 container start 28efcc0058dbc9ba3d3469a2f35dff73c3a7939bd9e43a01dadc6809c7a82f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:11:24 np0005464891 podman[313783]: 2025-10-01 17:11:24.313778749 +0000 UTC m=+0.164552815 container attach 28efcc0058dbc9ba3d3469a2f35dff73c3a7939bd9e43a01dadc6809c7a82f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bohr, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:11:24 np0005464891 nova_compute[259907]: 2025-10-01 17:11:24.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:25 np0005464891 jolly_bohr[313799]: {
Oct  1 13:11:25 np0005464891 jolly_bohr[313799]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 13:11:25 np0005464891 jolly_bohr[313799]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:11:25 np0005464891 jolly_bohr[313799]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 13:11:25 np0005464891 jolly_bohr[313799]:        "osd_id": 2,
Oct  1 13:11:25 np0005464891 jolly_bohr[313799]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:11:25 np0005464891 jolly_bohr[313799]:        "type": "bluestore"
Oct  1 13:11:25 np0005464891 jolly_bohr[313799]:    },
Oct  1 13:11:25 np0005464891 jolly_bohr[313799]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 13:11:25 np0005464891 jolly_bohr[313799]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:11:25 np0005464891 jolly_bohr[313799]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 13:11:25 np0005464891 jolly_bohr[313799]:        "osd_id": 0,
Oct  1 13:11:25 np0005464891 jolly_bohr[313799]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:11:25 np0005464891 jolly_bohr[313799]:        "type": "bluestore"
Oct  1 13:11:25 np0005464891 jolly_bohr[313799]:    },
Oct  1 13:11:25 np0005464891 jolly_bohr[313799]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 13:11:25 np0005464891 jolly_bohr[313799]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:11:25 np0005464891 jolly_bohr[313799]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 13:11:25 np0005464891 jolly_bohr[313799]:        "osd_id": 1,
Oct  1 13:11:25 np0005464891 jolly_bohr[313799]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:11:25 np0005464891 jolly_bohr[313799]:        "type": "bluestore"
Oct  1 13:11:25 np0005464891 jolly_bohr[313799]:    }
Oct  1 13:11:25 np0005464891 jolly_bohr[313799]: }
Oct  1 13:11:25 np0005464891 systemd[1]: libpod-28efcc0058dbc9ba3d3469a2f35dff73c3a7939bd9e43a01dadc6809c7a82f16.scope: Deactivated successfully.
Oct  1 13:11:25 np0005464891 podman[313832]: 2025-10-01 17:11:25.301809687 +0000 UTC m=+0.021951758 container died 28efcc0058dbc9ba3d3469a2f35dff73c3a7939bd9e43a01dadc6809c7a82f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:11:25 np0005464891 systemd[1]: var-lib-containers-storage-overlay-28ec1e0106f1e15d6a0f1603c9f499c6cbc3017ba4b6a97312c48ad1412072b3-merged.mount: Deactivated successfully.
Oct  1 13:11:25 np0005464891 podman[313832]: 2025-10-01 17:11:25.355104298 +0000 UTC m=+0.075246339 container remove 28efcc0058dbc9ba3d3469a2f35dff73c3a7939bd9e43a01dadc6809c7a82f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bohr, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 13:11:25 np0005464891 systemd[1]: libpod-conmon-28efcc0058dbc9ba3d3469a2f35dff73c3a7939bd9e43a01dadc6809c7a82f16.scope: Deactivated successfully.
Oct  1 13:11:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 13:11:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:11:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 13:11:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:11:25 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev f3227e12-39aa-4e06-b82f-37962e1f5897 does not exist
Oct  1 13:11:25 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 03bc3851-8c14-423c-a7b9-6e138be57e0c does not exist
Oct  1 13:11:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2210: 321 pgs: 321 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 21 KiB/s wr, 0 op/s
Oct  1 13:11:26 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:11:26 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:11:27 np0005464891 nova_compute[259907]: 2025-10-01 17:11:27.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2211: 321 pgs: 321 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 21 KiB/s wr, 0 op/s
Oct  1 13:11:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:11:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2212: 321 pgs: 321 active+clean; 345 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 6.0 MiB/s wr, 42 op/s
Oct  1 13:11:29 np0005464891 nova_compute[259907]: 2025-10-01 17:11:29.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:31 np0005464891 nova_compute[259907]: 2025-10-01 17:11:31.222 2 DEBUG oslo_concurrency.lockutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "8575c03c-88ab-44f0-8b99-c5e3874c9610" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:11:31 np0005464891 nova_compute[259907]: 2025-10-01 17:11:31.222 2 DEBUG oslo_concurrency.lockutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "8575c03c-88ab-44f0-8b99-c5e3874c9610" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:11:31 np0005464891 nova_compute[259907]: 2025-10-01 17:11:31.235 2 DEBUG nova.compute.manager [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  1 13:11:31 np0005464891 nova_compute[259907]: 2025-10-01 17:11:31.303 2 DEBUG oslo_concurrency.lockutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:11:31 np0005464891 nova_compute[259907]: 2025-10-01 17:11:31.304 2 DEBUG oslo_concurrency.lockutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:11:31 np0005464891 nova_compute[259907]: 2025-10-01 17:11:31.312 2 DEBUG nova.virt.hardware [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  1 13:11:31 np0005464891 nova_compute[259907]: 2025-10-01 17:11:31.313 2 INFO nova.compute.claims [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  1 13:11:31 np0005464891 nova_compute[259907]: 2025-10-01 17:11:31.419 2 DEBUG oslo_concurrency.processutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:11:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2213: 321 pgs: 321 active+clean; 385 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct  1 13:11:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:11:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1341877552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:11:31 np0005464891 nova_compute[259907]: 2025-10-01 17:11:31.882 2 DEBUG oslo_concurrency.processutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:11:31 np0005464891 nova_compute[259907]: 2025-10-01 17:11:31.893 2 DEBUG nova.compute.provider_tree [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:11:32 np0005464891 nova_compute[259907]: 2025-10-01 17:11:31.971 2 DEBUG nova.scheduler.client.report [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:11:32 np0005464891 nova_compute[259907]: 2025-10-01 17:11:32.228 2 DEBUG oslo_concurrency.lockutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:11:32 np0005464891 nova_compute[259907]: 2025-10-01 17:11:32.229 2 DEBUG nova.compute.manager [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  1 13:11:32 np0005464891 nova_compute[259907]: 2025-10-01 17:11:32.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:32 np0005464891 nova_compute[259907]: 2025-10-01 17:11:32.638 2 DEBUG nova.compute.manager [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  1 13:11:32 np0005464891 nova_compute[259907]: 2025-10-01 17:11:32.639 2 DEBUG nova.network.neutron [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  1 13:11:32 np0005464891 nova_compute[259907]: 2025-10-01 17:11:32.797 2 INFO nova.virt.libvirt.driver [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  1 13:11:33 np0005464891 nova_compute[259907]: 2025-10-01 17:11:33.186 2 DEBUG nova.compute.manager [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  1 13:11:33 np0005464891 nova_compute[259907]: 2025-10-01 17:11:33.372 2 INFO nova.virt.block_device [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Booting with volume 30f6581c-af66-4115-b288-8e22fa5808f0 at /dev/vda#033[00m
Oct  1 13:11:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2214: 321 pgs: 321 active+clean; 385 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct  1 13:11:33 np0005464891 nova_compute[259907]: 2025-10-01 17:11:33.746 2 DEBUG os_brick.utils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  1 13:11:33 np0005464891 nova_compute[259907]: 2025-10-01 17:11:33.747 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:11:33 np0005464891 nova_compute[259907]: 2025-10-01 17:11:33.760 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:11:33 np0005464891 nova_compute[259907]: 2025-10-01 17:11:33.761 741 DEBUG oslo.privsep.daemon [-] privsep: reply[49272ef2-7788-424f-8b4a-14c06d618eb2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:11:33 np0005464891 nova_compute[259907]: 2025-10-01 17:11:33.762 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:11:33 np0005464891 nova_compute[259907]: 2025-10-01 17:11:33.771 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:11:33 np0005464891 nova_compute[259907]: 2025-10-01 17:11:33.771 741 DEBUG oslo.privsep.daemon [-] privsep: reply[ca44706e-b194-448b-bf1f-f225cab09f1f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:11:33 np0005464891 nova_compute[259907]: 2025-10-01 17:11:33.772 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:11:33 np0005464891 nova_compute[259907]: 2025-10-01 17:11:33.788 2 DEBUG nova.policy [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c440275c1a1e4cf09fcf789374345bb2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7101f2ff48f540a08f6ec15b324152c6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  1 13:11:33 np0005464891 nova_compute[259907]: 2025-10-01 17:11:33.798 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:11:33 np0005464891 nova_compute[259907]: 2025-10-01 17:11:33.798 741 DEBUG oslo.privsep.daemon [-] privsep: reply[17d19442-965c-4398-a06e-d7aff58d74d7]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:11:33 np0005464891 nova_compute[259907]: 2025-10-01 17:11:33.800 741 DEBUG oslo.privsep.daemon [-] privsep: reply[47140182-9960-4de1-a3bd-598ae6f8a63a]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:11:33 np0005464891 nova_compute[259907]: 2025-10-01 17:11:33.800 2 DEBUG oslo_concurrency.processutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:11:33 np0005464891 nova_compute[259907]: 2025-10-01 17:11:33.824 2 DEBUG oslo_concurrency.processutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:11:33 np0005464891 nova_compute[259907]: 2025-10-01 17:11:33.827 2 DEBUG os_brick.initiator.connectors.lightos [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  1 13:11:33 np0005464891 nova_compute[259907]: 2025-10-01 17:11:33.827 2 DEBUG os_brick.initiator.connectors.lightos [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  1 13:11:33 np0005464891 nova_compute[259907]: 2025-10-01 17:11:33.827 2 DEBUG os_brick.initiator.connectors.lightos [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  1 13:11:33 np0005464891 nova_compute[259907]: 2025-10-01 17:11:33.827 2 DEBUG os_brick.utils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] <== get_connector_properties: return (80ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  1 13:11:33 np0005464891 nova_compute[259907]: 2025-10-01 17:11:33.828 2 DEBUG nova.virt.block_device [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Updating existing volume attachment record: 6689329f-9826-41a4-aa59-479e84233747 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  1 13:11:33 np0005464891 podman[313931]: 2025-10-01 17:11:33.968997125 +0000 UTC m=+0.074897680 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  1 13:11:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:11:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:11:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2449432927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:11:34 np0005464891 nova_compute[259907]: 2025-10-01 17:11:34.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:35 np0005464891 nova_compute[259907]: 2025-10-01 17:11:35.456 2 DEBUG nova.compute.manager [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 13:11:35 np0005464891 nova_compute[259907]: 2025-10-01 17:11:35.457 2 DEBUG nova.virt.libvirt.driver [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 13:11:35 np0005464891 nova_compute[259907]: 2025-10-01 17:11:35.457 2 INFO nova.virt.libvirt.driver [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Creating image(s)#033[00m
Oct  1 13:11:35 np0005464891 nova_compute[259907]: 2025-10-01 17:11:35.458 2 DEBUG nova.virt.libvirt.driver [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  1 13:11:35 np0005464891 nova_compute[259907]: 2025-10-01 17:11:35.458 2 DEBUG nova.virt.libvirt.driver [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Ensure instance console log exists: /var/lib/nova/instances/8575c03c-88ab-44f0-8b99-c5e3874c9610/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 13:11:35 np0005464891 nova_compute[259907]: 2025-10-01 17:11:35.458 2 DEBUG oslo_concurrency.lockutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:11:35 np0005464891 nova_compute[259907]: 2025-10-01 17:11:35.459 2 DEBUG oslo_concurrency.lockutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:11:35 np0005464891 nova_compute[259907]: 2025-10-01 17:11:35.459 2 DEBUG oslo_concurrency.lockutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:11:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2215: 321 pgs: 321 active+clean; 385 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct  1 13:11:36 np0005464891 nova_compute[259907]: 2025-10-01 17:11:36.689 2 DEBUG nova.network.neutron [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Successfully created port: 29f8a6e5-778f-42d5-a859-12396732abe6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 13:11:36 np0005464891 podman[313950]: 2025-10-01 17:11:36.98141472 +0000 UTC m=+0.097023740 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  1 13:11:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:11:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/851114947' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:11:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:11:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/851114947' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:11:37 np0005464891 nova_compute[259907]: 2025-10-01 17:11:37.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2216: 321 pgs: 321 active+clean; 385 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct  1 13:11:37 np0005464891 nova_compute[259907]: 2025-10-01 17:11:37.757 2 DEBUG nova.network.neutron [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Successfully updated port: 29f8a6e5-778f-42d5-a859-12396732abe6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 13:11:37 np0005464891 nova_compute[259907]: 2025-10-01 17:11:37.812 2 DEBUG oslo_concurrency.lockutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "refresh_cache-8575c03c-88ab-44f0-8b99-c5e3874c9610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:11:37 np0005464891 nova_compute[259907]: 2025-10-01 17:11:37.812 2 DEBUG oslo_concurrency.lockutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquired lock "refresh_cache-8575c03c-88ab-44f0-8b99-c5e3874c9610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:11:37 np0005464891 nova_compute[259907]: 2025-10-01 17:11:37.812 2 DEBUG nova.network.neutron [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 13:11:37 np0005464891 nova_compute[259907]: 2025-10-01 17:11:37.909 2 DEBUG nova.compute.manager [req-dca09b6c-36e7-45b0-9a1c-28dc19d4f64f req-1ceb0c59-ff2a-4fae-9939-c6fb47d5d0fd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Received event network-changed-29f8a6e5-778f-42d5-a859-12396732abe6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:11:37 np0005464891 nova_compute[259907]: 2025-10-01 17:11:37.910 2 DEBUG nova.compute.manager [req-dca09b6c-36e7-45b0-9a1c-28dc19d4f64f req-1ceb0c59-ff2a-4fae-9939-c6fb47d5d0fd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Refreshing instance network info cache due to event network-changed-29f8a6e5-778f-42d5-a859-12396732abe6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 13:11:37 np0005464891 nova_compute[259907]: 2025-10-01 17:11:37.910 2 DEBUG oslo_concurrency.lockutils [req-dca09b6c-36e7-45b0-9a1c-28dc19d4f64f req-1ceb0c59-ff2a-4fae-9939-c6fb47d5d0fd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-8575c03c-88ab-44f0-8b99-c5e3874c9610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:11:38 np0005464891 nova_compute[259907]: 2025-10-01 17:11:38.001 2 DEBUG nova.network.neutron [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 13:11:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:11:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2217: 321 pgs: 321 active+clean; 385 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.668 2 DEBUG nova.network.neutron [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Updating instance_info_cache with network_info: [{"id": "29f8a6e5-778f-42d5-a859-12396732abe6", "address": "fa:16:3e:9e:3a:d4", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29f8a6e5-77", "ovs_interfaceid": "29f8a6e5-778f-42d5-a859-12396732abe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.844 2 DEBUG oslo_concurrency.lockutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Releasing lock "refresh_cache-8575c03c-88ab-44f0-8b99-c5e3874c9610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.844 2 DEBUG nova.compute.manager [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Instance network_info: |[{"id": "29f8a6e5-778f-42d5-a859-12396732abe6", "address": "fa:16:3e:9e:3a:d4", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29f8a6e5-77", "ovs_interfaceid": "29f8a6e5-778f-42d5-a859-12396732abe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.845 2 DEBUG oslo_concurrency.lockutils [req-dca09b6c-36e7-45b0-9a1c-28dc19d4f64f req-1ceb0c59-ff2a-4fae-9939-c6fb47d5d0fd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-8575c03c-88ab-44f0-8b99-c5e3874c9610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.845 2 DEBUG nova.network.neutron [req-dca09b6c-36e7-45b0-9a1c-28dc19d4f64f req-1ceb0c59-ff2a-4fae-9939-c6fb47d5d0fd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Refreshing network info cache for port 29f8a6e5-778f-42d5-a859-12396732abe6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.849 2 DEBUG nova.virt.libvirt.driver [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Start _get_guest_xml network_info=[{"id": "29f8a6e5-778f-42d5-a859-12396732abe6", "address": "fa:16:3e:9e:3a:d4", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29f8a6e5-77", "ovs_interfaceid": "29f8a6e5-778f-42d5-a859-12396732abe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'mount_device': '/dev/vda', 'attachment_id': '6689329f-9826-41a4-aa59-479e84233747', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-30f6581c-af66-4115-b288-8e22fa5808f0', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '30f6581c-af66-4115-b288-8e22fa5808f0', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '8575c03c-88ab-44f0-8b99-c5e3874c9610', 'attached_at': '', 'detached_at': '', 'volume_id': '30f6581c-af66-4115-b288-8e22fa5808f0', 'serial': '30f6581c-af66-4115-b288-8e22fa5808f0'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.854 2 WARNING nova.virt.libvirt.driver [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.861 2 DEBUG nova.virt.libvirt.host [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.862 2 DEBUG nova.virt.libvirt.host [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.866 2 DEBUG nova.virt.libvirt.host [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.867 2 DEBUG nova.virt.libvirt.host [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.868 2 DEBUG nova.virt.libvirt.driver [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.868 2 DEBUG nova.virt.hardware [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.869 2 DEBUG nova.virt.hardware [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.869 2 DEBUG nova.virt.hardware [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.870 2 DEBUG nova.virt.hardware [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.870 2 DEBUG nova.virt.hardware [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.871 2 DEBUG nova.virt.hardware [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.871 2 DEBUG nova.virt.hardware [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.872 2 DEBUG nova.virt.hardware [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.872 2 DEBUG nova.virt.hardware [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.873 2 DEBUG nova.virt.hardware [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.873 2 DEBUG nova.virt.hardware [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.943 2 DEBUG nova.storage.rbd_utils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] rbd image 8575c03c-88ab-44f0-8b99-c5e3874c9610_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:11:39 np0005464891 podman[313976]: 2025-10-01 17:11:39.943878556 +0000 UTC m=+0.060394209 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.950 2 DEBUG oslo_concurrency.processutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:11:39 np0005464891 nova_compute[259907]: 2025-10-01 17:11:39.990 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:11:40 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:11:40 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2103761129' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.519 2 DEBUG oslo_concurrency.processutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.661 2 DEBUG os_brick.encryptors [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Using volume encryption metadata '{'encryption_key_id': '00f520dc-4d69-44d5-b062-4f97b5573e0b', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-30f6581c-af66-4115-b288-8e22fa5808f0', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '30f6581c-af66-4115-b288-8e22fa5808f0', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '8575c03c-88ab-44f0-8b99-c5e3874c9610', 'attached_at': '', 'detached_at': '', 'volume_id': '30f6581c-af66-4115-b288-8e22fa5808f0', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.663 2 DEBUG barbicanclient.client [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.677 2 DEBUG barbicanclient.v1.secrets [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/00f520dc-4d69-44d5-b062-4f97b5573e0b get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.678 2 INFO barbicanclient.base [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/00f520dc-4d69-44d5-b062-4f97b5573e0b#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.705 2 DEBUG barbicanclient.client [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.706 2 INFO barbicanclient.base [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/00f520dc-4d69-44d5-b062-4f97b5573e0b#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.739 2 DEBUG barbicanclient.client [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.740 2 INFO barbicanclient.base [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/00f520dc-4d69-44d5-b062-4f97b5573e0b#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.763 2 DEBUG barbicanclient.client [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.765 2 INFO barbicanclient.base [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/00f520dc-4d69-44d5-b062-4f97b5573e0b#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.793 2 DEBUG barbicanclient.client [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.793 2 INFO barbicanclient.base [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/00f520dc-4d69-44d5-b062-4f97b5573e0b#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.816 2 DEBUG barbicanclient.client [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.816 2 INFO barbicanclient.base [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/00f520dc-4d69-44d5-b062-4f97b5573e0b#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.842 2 DEBUG barbicanclient.client [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.843 2 INFO barbicanclient.base [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/00f520dc-4d69-44d5-b062-4f97b5573e0b#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.881 2 DEBUG barbicanclient.client [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.882 2 INFO barbicanclient.base [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/00f520dc-4d69-44d5-b062-4f97b5573e0b#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.899 2 DEBUG barbicanclient.client [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.900 2 INFO barbicanclient.base [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/00f520dc-4d69-44d5-b062-4f97b5573e0b#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.934 2 DEBUG barbicanclient.client [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.934 2 INFO barbicanclient.base [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/00f520dc-4d69-44d5-b062-4f97b5573e0b#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.961 2 DEBUG barbicanclient.client [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.961 2 INFO barbicanclient.base [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/00f520dc-4d69-44d5-b062-4f97b5573e0b#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.989 2 DEBUG barbicanclient.client [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:11:40 np0005464891 nova_compute[259907]: 2025-10-01 17:11:40.989 2 INFO barbicanclient.base [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/00f520dc-4d69-44d5-b062-4f97b5573e0b#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.009 2 DEBUG barbicanclient.client [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.010 2 INFO barbicanclient.base [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/00f520dc-4d69-44d5-b062-4f97b5573e0b#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.049 2 DEBUG barbicanclient.client [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.050 2 INFO barbicanclient.base [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/00f520dc-4d69-44d5-b062-4f97b5573e0b#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.077 2 DEBUG barbicanclient.client [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.078 2 INFO barbicanclient.base [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/00f520dc-4d69-44d5-b062-4f97b5573e0b#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.094 2 DEBUG barbicanclient.client [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.095 2 DEBUG nova.virt.libvirt.host [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct  1 13:11:41 np0005464891 nova_compute[259907]:  <usage type="volume">
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <volume>30f6581c-af66-4115-b288-8e22fa5808f0</volume>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:  </usage>
Oct  1 13:11:41 np0005464891 nova_compute[259907]: </secret>
Oct  1 13:11:41 np0005464891 nova_compute[259907]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.236 2 DEBUG nova.virt.libvirt.vif [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T17:11:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1398705824',display_name='tempest-TransferEncryptedVolumeTest-server-1398705824',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1398705824',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCggI6gfQD/mHYmOm6rDTYT2bX0sAibZLzDEC2B5xpj9ltJuTla2hy5xtYjkh93bJjJwE1iJj8Z6crMN4OBz57+5pkjfi89vm+UxnL1pqlzGufhUcmighPnzcAkPF9ezXQ==',key_name='tempest-TransferEncryptedVolumeTest-1396714412',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7101f2ff48f540a08f6ec15b324152c6',ramdisk_id='',reservation_id='r-y22io2wf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1550217158',owner_user_name='tempest-TransferEncryptedVolumeTest-1550217158-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T17:11:33Z,user_data=None,user_id='c440275c1a1e4cf09fcf789374345bb2',uuid=8575c03c-88ab-44f0-8b99-c5e3874c9610,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29f8a6e5-778f-42d5-a859-12396732abe6", "address": "fa:16:3e:9e:3a:d4", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29f8a6e5-77", "ovs_interfaceid": "29f8a6e5-778f-42d5-a859-12396732abe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.236 2 DEBUG nova.network.os_vif_util [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Converting VIF {"id": "29f8a6e5-778f-42d5-a859-12396732abe6", "address": "fa:16:3e:9e:3a:d4", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29f8a6e5-77", "ovs_interfaceid": "29f8a6e5-778f-42d5-a859-12396732abe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.237 2 DEBUG nova.network.os_vif_util [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:3a:d4,bridge_name='br-int',has_traffic_filtering=True,id=29f8a6e5-778f-42d5-a859-12396732abe6,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29f8a6e5-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.239 2 DEBUG nova.objects.instance [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8575c03c-88ab-44f0-8b99-c5e3874c9610 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.258 2 DEBUG nova.virt.libvirt.driver [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] End _get_guest_xml xml=<domain type="kvm">
Oct  1 13:11:41 np0005464891 nova_compute[259907]:  <uuid>8575c03c-88ab-44f0-8b99-c5e3874c9610</uuid>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:  <name>instance-0000001d</name>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-1398705824</nova:name>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 17:11:39</nova:creationTime>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 13:11:41 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:        <nova:user uuid="c440275c1a1e4cf09fcf789374345bb2">tempest-TransferEncryptedVolumeTest-1550217158-project-member</nova:user>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:        <nova:project uuid="7101f2ff48f540a08f6ec15b324152c6">tempest-TransferEncryptedVolumeTest-1550217158</nova:project>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:        <nova:port uuid="29f8a6e5-778f-42d5-a859-12396732abe6">
Oct  1 13:11:41 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <system>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <entry name="serial">8575c03c-88ab-44f0-8b99-c5e3874c9610</entry>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <entry name="uuid">8575c03c-88ab-44f0-8b99-c5e3874c9610</entry>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    </system>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:  <os>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:  </os>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:  <features>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:  </features>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:  </clock>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:  <devices>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/8575c03c-88ab-44f0-8b99-c5e3874c9610_disk.config">
Oct  1 13:11:41 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      </source>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 13:11:41 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      </auth>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    </disk>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="volumes/volume-30f6581c-af66-4115-b288-8e22fa5808f0">
Oct  1 13:11:41 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      </source>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 13:11:41 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      </auth>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <serial>30f6581c-af66-4115-b288-8e22fa5808f0</serial>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <encryption format="luks">
Oct  1 13:11:41 np0005464891 nova_compute[259907]:        <secret type="passphrase" uuid="a8240080-d133-4373-b0fd-e42de871c9f3"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      </encryption>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    </disk>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:9e:3a:d4"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <target dev="tap29f8a6e5-77"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    </interface>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/8575c03c-88ab-44f0-8b99-c5e3874c9610/console.log" append="off"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    </serial>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <video>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    </video>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    </rng>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 13:11:41 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 13:11:41 np0005464891 nova_compute[259907]:  </devices>
Oct  1 13:11:41 np0005464891 nova_compute[259907]: </domain>
Oct  1 13:11:41 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.259 2 DEBUG nova.compute.manager [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Preparing to wait for external event network-vif-plugged-29f8a6e5-778f-42d5-a859-12396732abe6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.259 2 DEBUG oslo_concurrency.lockutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "8575c03c-88ab-44f0-8b99-c5e3874c9610-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.259 2 DEBUG oslo_concurrency.lockutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "8575c03c-88ab-44f0-8b99-c5e3874c9610-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.260 2 DEBUG oslo_concurrency.lockutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "8575c03c-88ab-44f0-8b99-c5e3874c9610-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.260 2 DEBUG nova.virt.libvirt.vif [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T17:11:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1398705824',display_name='tempest-TransferEncryptedVolumeTest-server-1398705824',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1398705824',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCggI6gfQD/mHYmOm6rDTYT2bX0sAibZLzDEC2B5xpj9ltJuTla2hy5xtYjkh93bJjJwE1iJj8Z6crMN4OBz57+5pkjfi89vm+UxnL1pqlzGufhUcmighPnzcAkPF9ezXQ==',key_name='tempest-TransferEncryptedVolumeTest-1396714412',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7101f2ff48f540a08f6ec15b324152c6',ramdisk_id='',reservation_id='r-y22io2wf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1550217158',owner_user_name='tempest-TransferEncryptedVolumeTest-1550217158-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T17:11:33Z,user_data=None,user_id='c440275c1a1e4cf09fcf789374345bb2',uuid=8575c03c-88ab-44f0-8b99-c5e3874c9610,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29f8a6e5-778f-42d5-a859-12396732abe6", "address": "fa:16:3e:9e:3a:d4", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29f8a6e5-77", "ovs_interfaceid": "29f8a6e5-778f-42d5-a859-12396732abe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.261 2 DEBUG nova.network.os_vif_util [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Converting VIF {"id": "29f8a6e5-778f-42d5-a859-12396732abe6", "address": "fa:16:3e:9e:3a:d4", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29f8a6e5-77", "ovs_interfaceid": "29f8a6e5-778f-42d5-a859-12396732abe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.261 2 DEBUG nova.network.os_vif_util [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:3a:d4,bridge_name='br-int',has_traffic_filtering=True,id=29f8a6e5-778f-42d5-a859-12396732abe6,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29f8a6e5-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.262 2 DEBUG os_vif [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:3a:d4,bridge_name='br-int',has_traffic_filtering=True,id=29f8a6e5-778f-42d5-a859-12396732abe6,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29f8a6e5-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.263 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.264 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.267 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29f8a6e5-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.267 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap29f8a6e5-77, col_values=(('external_ids', {'iface-id': '29f8a6e5-778f-42d5-a859-12396732abe6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:3a:d4', 'vm-uuid': '8575c03c-88ab-44f0-8b99-c5e3874c9610'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:41 np0005464891 NetworkManager[44940]: <info>  [1759338701.2701] manager: (tap29f8a6e5-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.280 2 INFO os_vif [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:3a:d4,bridge_name='br-int',has_traffic_filtering=True,id=29f8a6e5-778f-42d5-a859-12396732abe6,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29f8a6e5-77')#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.349 2 DEBUG nova.virt.libvirt.driver [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.350 2 DEBUG nova.virt.libvirt.driver [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.350 2 DEBUG nova.virt.libvirt.driver [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] No VIF found with MAC fa:16:3e:9e:3a:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.350 2 INFO nova.virt.libvirt.driver [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Using config drive#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.368 2 DEBUG nova.storage.rbd_utils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] rbd image 8575c03c-88ab-44f0-8b99-c5e3874c9610_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:11:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2218: 321 pgs: 321 active+clean; 385 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 767 B/s rd, 3.3 MiB/s wr, 2 op/s
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.692 2 DEBUG nova.network.neutron [req-dca09b6c-36e7-45b0-9a1c-28dc19d4f64f req-1ceb0c59-ff2a-4fae-9939-c6fb47d5d0fd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Updated VIF entry in instance network info cache for port 29f8a6e5-778f-42d5-a859-12396732abe6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.693 2 DEBUG nova.network.neutron [req-dca09b6c-36e7-45b0-9a1c-28dc19d4f64f req-1ceb0c59-ff2a-4fae-9939-c6fb47d5d0fd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Updating instance_info_cache with network_info: [{"id": "29f8a6e5-778f-42d5-a859-12396732abe6", "address": "fa:16:3e:9e:3a:d4", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29f8a6e5-77", "ovs_interfaceid": "29f8a6e5-778f-42d5-a859-12396732abe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.756 2 DEBUG oslo_concurrency.lockutils [req-dca09b6c-36e7-45b0-9a1c-28dc19d4f64f req-1ceb0c59-ff2a-4fae-9939-c6fb47d5d0fd af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-8575c03c-88ab-44f0-8b99-c5e3874c9610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.917 2 INFO nova.virt.libvirt.driver [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Creating config drive at /var/lib/nova/instances/8575c03c-88ab-44f0-8b99-c5e3874c9610/disk.config#033[00m
Oct  1 13:11:41 np0005464891 nova_compute[259907]: 2025-10-01 17:11:41.927 2 DEBUG oslo_concurrency.processutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8575c03c-88ab-44f0-8b99-c5e3874c9610/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7xypkz7h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:11:42 np0005464891 nova_compute[259907]: 2025-10-01 17:11:42.065 2 DEBUG oslo_concurrency.processutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8575c03c-88ab-44f0-8b99-c5e3874c9610/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7xypkz7h" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:11:42 np0005464891 nova_compute[259907]: 2025-10-01 17:11:42.103 2 DEBUG nova.storage.rbd_utils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] rbd image 8575c03c-88ab-44f0-8b99-c5e3874c9610_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:11:42 np0005464891 nova_compute[259907]: 2025-10-01 17:11:42.106 2 DEBUG oslo_concurrency.processutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8575c03c-88ab-44f0-8b99-c5e3874c9610/disk.config 8575c03c-88ab-44f0-8b99-c5e3874c9610_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:11:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:11:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:11:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:11:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:11:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:11:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:11:42 np0005464891 nova_compute[259907]: 2025-10-01 17:11:42.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:42 np0005464891 podman[314093]: 2025-10-01 17:11:42.930132689 +0000 UTC m=+0.050867287 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid)
Oct  1 13:11:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2219: 321 pgs: 321 active+clean; 385 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct  1 13:11:44 np0005464891 nova_compute[259907]: 2025-10-01 17:11:44.066 2 DEBUG oslo_concurrency.processutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8575c03c-88ab-44f0-8b99-c5e3874c9610/disk.config 8575c03c-88ab-44f0-8b99-c5e3874c9610_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.960s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:11:44 np0005464891 nova_compute[259907]: 2025-10-01 17:11:44.066 2 INFO nova.virt.libvirt.driver [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Deleting local config drive /var/lib/nova/instances/8575c03c-88ab-44f0-8b99-c5e3874c9610/disk.config because it was imported into RBD.#033[00m
Oct  1 13:11:44 np0005464891 kernel: tap29f8a6e5-77: entered promiscuous mode
Oct  1 13:11:44 np0005464891 NetworkManager[44940]: <info>  [1759338704.1187] manager: (tap29f8a6e5-77): new Tun device (/org/freedesktop/NetworkManager/Devices/148)
Oct  1 13:11:44 np0005464891 ovn_controller[152409]: 2025-10-01T17:11:44Z|00289|binding|INFO|Claiming lport 29f8a6e5-778f-42d5-a859-12396732abe6 for this chassis.
Oct  1 13:11:44 np0005464891 ovn_controller[152409]: 2025-10-01T17:11:44Z|00290|binding|INFO|29f8a6e5-778f-42d5-a859-12396732abe6: Claiming fa:16:3e:9e:3a:d4 10.100.0.11
Oct  1 13:11:44 np0005464891 nova_compute[259907]: 2025-10-01 17:11:44.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:44 np0005464891 ovn_controller[152409]: 2025-10-01T17:11:44Z|00291|binding|INFO|Setting lport 29f8a6e5-778f-42d5-a859-12396732abe6 ovn-installed in OVS
Oct  1 13:11:44 np0005464891 nova_compute[259907]: 2025-10-01 17:11:44.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:44 np0005464891 nova_compute[259907]: 2025-10-01 17:11:44.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:44 np0005464891 systemd-machined[214891]: New machine qemu-29-instance-0000001d.
Oct  1 13:11:44 np0005464891 systemd-udevd[314130]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 13:11:44 np0005464891 NetworkManager[44940]: <info>  [1759338704.1663] device (tap29f8a6e5-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 13:11:44 np0005464891 NetworkManager[44940]: <info>  [1759338704.1678] device (tap29f8a6e5-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 13:11:44 np0005464891 systemd[1]: Started Virtual Machine qemu-29-instance-0000001d.
Oct  1 13:11:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.243 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:3a:d4 10.100.0.11'], port_security=['fa:16:3e:9e:3a:d4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8575c03c-88ab-44f0-8b99-c5e3874c9610', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d747029d-7cd7-4e92-a356-867cacbb54c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7101f2ff48f540a08f6ec15b324152c6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd9e37ca0-9284-404d-8dbf-7a2a022ea664', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ce41f0a-b119-4c05-a337-15f2fa115c91, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=29f8a6e5-778f-42d5-a859-12396732abe6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:11:44 np0005464891 ovn_controller[152409]: 2025-10-01T17:11:44Z|00292|binding|INFO|Setting lport 29f8a6e5-778f-42d5-a859-12396732abe6 up in Southbound
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.244 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 29f8a6e5-778f-42d5-a859-12396732abe6 in datapath d747029d-7cd7-4e92-a356-867cacbb54c4 bound to our chassis#033[00m
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.245 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d747029d-7cd7-4e92-a356-867cacbb54c4#033[00m
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.263 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[8f5bb1bb-e91f-4905-b5d4-e6dca6afa857]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.264 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd747029d-71 in ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.266 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd747029d-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.266 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f16f60-3544-42f3-b24a-8d9f9b80fa17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.268 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[d81461cd-c25d-456a-a0df-8035ab3c4d15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.280 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[fddc4dc9-533e-40af-8db7-521840818645]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.304 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[2b0ee853-0571-4c77-8692-6e484b6530ac]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.342 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[32969980-d693-4169-b020-ce5e56735563]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:11:44 np0005464891 systemd-udevd[314132]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 13:11:44 np0005464891 NetworkManager[44940]: <info>  [1759338704.3502] manager: (tapd747029d-70): new Veth device (/org/freedesktop/NetworkManager/Devices/149)
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.348 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[40b237ab-4976-424e-99fd-c062fd4b272d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.381 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[58c112f2-1441-46ff-9abe-90d4c88d7de5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.384 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[2c58882e-8b55-4181-b010-d4e4eb6c455b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:11:44 np0005464891 NetworkManager[44940]: <info>  [1759338704.4065] device (tapd747029d-70): carrier: link connected
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.412 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[8d45b03b-36e1-4fc0-8b86-3e11ea8deba6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.429 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3e2d1b-bfe8-4aff-9f88-ad6063a66b75]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd747029d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:a1:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565297, 'reachable_time': 35534, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314163, 'error': None, 'target': 'ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.447 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a88e0f-09e6-430f-bb1d-05ba03baf2a1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea1:a1a7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565297, 'tstamp': 565297}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314164, 'error': None, 'target': 'ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.467 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[6410b13e-b283-4e8e-a759-d0b8c2d6e853]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd747029d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:a1:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565297, 'reachable_time': 35534, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314165, 'error': None, 'target': 'ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.504 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[74b9e119-0534-4fc7-a542-311349a70590]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.568 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[446cc434-3bab-4ec8-a04e-11090beefc14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.569 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd747029d-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.569 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.570 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd747029d-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:11:44 np0005464891 nova_compute[259907]: 2025-10-01 17:11:44.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:44 np0005464891 NetworkManager[44940]: <info>  [1759338704.5732] manager: (tapd747029d-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/150)
Oct  1 13:11:44 np0005464891 kernel: tapd747029d-70: entered promiscuous mode
Oct  1 13:11:44 np0005464891 nova_compute[259907]: 2025-10-01 17:11:44.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.576 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd747029d-70, col_values=(('external_ids', {'iface-id': '3454e5b0-0c54-4314-89c0-47c1b5603195'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:11:44 np0005464891 nova_compute[259907]: 2025-10-01 17:11:44.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:44 np0005464891 ovn_controller[152409]: 2025-10-01T17:11:44Z|00293|binding|INFO|Releasing lport 3454e5b0-0c54-4314-89c0-47c1b5603195 from this chassis (sb_readonly=0)
Oct  1 13:11:44 np0005464891 nova_compute[259907]: 2025-10-01 17:11:44.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.581 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d747029d-7cd7-4e92-a356-867cacbb54c4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d747029d-7cd7-4e92-a356-867cacbb54c4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.582 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[82017fd9-112c-4e5e-8e1a-375e19db5347]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.583 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-d747029d-7cd7-4e92-a356-867cacbb54c4
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/d747029d-7cd7-4e92-a356-867cacbb54c4.pid.haproxy
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID d747029d-7cd7-4e92-a356-867cacbb54c4
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 13:11:44 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:44.585 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4', 'env', 'PROCESS_TAG=haproxy-d747029d-7cd7-4e92-a356-867cacbb54c4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d747029d-7cd7-4e92-a356-867cacbb54c4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 13:11:44 np0005464891 nova_compute[259907]: 2025-10-01 17:11:44.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:45 np0005464891 podman[314234]: 2025-10-01 17:11:44.923981934 +0000 UTC m=+0.025538606 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 13:11:45 np0005464891 nova_compute[259907]: 2025-10-01 17:11:45.180 2 DEBUG nova.compute.manager [req-e948eb30-445a-48f7-bb9b-c027ef38b8d8 req-81408000-24ea-4859-8b1b-b524470d9ed9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Received event network-vif-plugged-29f8a6e5-778f-42d5-a859-12396732abe6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:11:45 np0005464891 nova_compute[259907]: 2025-10-01 17:11:45.180 2 DEBUG oslo_concurrency.lockutils [req-e948eb30-445a-48f7-bb9b-c027ef38b8d8 req-81408000-24ea-4859-8b1b-b524470d9ed9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "8575c03c-88ab-44f0-8b99-c5e3874c9610-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:11:45 np0005464891 nova_compute[259907]: 2025-10-01 17:11:45.181 2 DEBUG oslo_concurrency.lockutils [req-e948eb30-445a-48f7-bb9b-c027ef38b8d8 req-81408000-24ea-4859-8b1b-b524470d9ed9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "8575c03c-88ab-44f0-8b99-c5e3874c9610-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:11:45 np0005464891 nova_compute[259907]: 2025-10-01 17:11:45.181 2 DEBUG oslo_concurrency.lockutils [req-e948eb30-445a-48f7-bb9b-c027ef38b8d8 req-81408000-24ea-4859-8b1b-b524470d9ed9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "8575c03c-88ab-44f0-8b99-c5e3874c9610-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:11:45 np0005464891 nova_compute[259907]: 2025-10-01 17:11:45.181 2 DEBUG nova.compute.manager [req-e948eb30-445a-48f7-bb9b-c027ef38b8d8 req-81408000-24ea-4859-8b1b-b524470d9ed9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Processing event network-vif-plugged-29f8a6e5-778f-42d5-a859-12396732abe6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 13:11:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2220: 321 pgs: 321 active+clean; 385 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct  1 13:11:45 np0005464891 podman[314234]: 2025-10-01 17:11:45.720502442 +0000 UTC m=+0.822059094 container create d58df6855c811c9a5744f9f16fea9a6a68d740dd6fdeff6d37870ce61fb568e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  1 13:11:45 np0005464891 systemd[1]: Started libpod-conmon-d58df6855c811c9a5744f9f16fea9a6a68d740dd6fdeff6d37870ce61fb568e4.scope.
Oct  1 13:11:46 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:11:46 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56376ead0d1dbfb2fe180de735a2b3276454e9a3470449f92f4b6c2824f4bb6b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 13:11:46 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:46.066 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:11:46 np0005464891 nova_compute[259907]: 2025-10-01 17:11:46.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:46 np0005464891 podman[314234]: 2025-10-01 17:11:46.218185307 +0000 UTC m=+1.319741949 container init d58df6855c811c9a5744f9f16fea9a6a68d740dd6fdeff6d37870ce61fb568e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 13:11:46 np0005464891 podman[314234]: 2025-10-01 17:11:46.225739406 +0000 UTC m=+1.327296048 container start d58df6855c811c9a5744f9f16fea9a6a68d740dd6fdeff6d37870ce61fb568e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 13:11:46 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[314249]: [NOTICE]   (314253) : New worker (314255) forked
Oct  1 13:11:46 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[314249]: [NOTICE]   (314253) : Loading success.
Oct  1 13:11:46 np0005464891 nova_compute[259907]: 2025-10-01 17:11:46.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:46 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:46.496 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 13:11:46 np0005464891 nova_compute[259907]: 2025-10-01 17:11:46.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:11:46 np0005464891 nova_compute[259907]: 2025-10-01 17:11:46.864 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:11:46 np0005464891 nova_compute[259907]: 2025-10-01 17:11:46.865 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:11:46 np0005464891 nova_compute[259907]: 2025-10-01 17:11:46.865 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:11:46 np0005464891 nova_compute[259907]: 2025-10-01 17:11:46.865 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 13:11:46 np0005464891 nova_compute[259907]: 2025-10-01 17:11:46.866 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:11:47 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:11:47 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3413114249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:11:47 np0005464891 nova_compute[259907]: 2025-10-01 17:11:47.318 2 DEBUG nova.compute.manager [req-49c409d1-8b41-4aa6-984d-79109f9e8f48 req-d7866c07-18a5-45c3-84b3-614e7da534da af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Received event network-vif-plugged-29f8a6e5-778f-42d5-a859-12396732abe6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:11:47 np0005464891 nova_compute[259907]: 2025-10-01 17:11:47.318 2 DEBUG oslo_concurrency.lockutils [req-49c409d1-8b41-4aa6-984d-79109f9e8f48 req-d7866c07-18a5-45c3-84b3-614e7da534da af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "8575c03c-88ab-44f0-8b99-c5e3874c9610-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:11:47 np0005464891 nova_compute[259907]: 2025-10-01 17:11:47.319 2 DEBUG oslo_concurrency.lockutils [req-49c409d1-8b41-4aa6-984d-79109f9e8f48 req-d7866c07-18a5-45c3-84b3-614e7da534da af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "8575c03c-88ab-44f0-8b99-c5e3874c9610-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:11:47 np0005464891 nova_compute[259907]: 2025-10-01 17:11:47.319 2 DEBUG oslo_concurrency.lockutils [req-49c409d1-8b41-4aa6-984d-79109f9e8f48 req-d7866c07-18a5-45c3-84b3-614e7da534da af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "8575c03c-88ab-44f0-8b99-c5e3874c9610-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:11:47 np0005464891 nova_compute[259907]: 2025-10-01 17:11:47.319 2 DEBUG nova.compute.manager [req-49c409d1-8b41-4aa6-984d-79109f9e8f48 req-d7866c07-18a5-45c3-84b3-614e7da534da af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] No waiting events found dispatching network-vif-plugged-29f8a6e5-778f-42d5-a859-12396732abe6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 13:11:47 np0005464891 nova_compute[259907]: 2025-10-01 17:11:47.319 2 WARNING nova.compute.manager [req-49c409d1-8b41-4aa6-984d-79109f9e8f48 req-d7866c07-18a5-45c3-84b3-614e7da534da af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Received unexpected event network-vif-plugged-29f8a6e5-778f-42d5-a859-12396732abe6 for instance with vm_state building and task_state spawning.#033[00m
Oct  1 13:11:47 np0005464891 nova_compute[259907]: 2025-10-01 17:11:47.334 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:11:47 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:11:47.499 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:11:47 np0005464891 nova_compute[259907]: 2025-10-01 17:11:47.531 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 13:11:47 np0005464891 nova_compute[259907]: 2025-10-01 17:11:47.532 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 13:11:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2221: 321 pgs: 321 active+clean; 385 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 12 KiB/s wr, 7 op/s
Oct  1 13:11:47 np0005464891 nova_compute[259907]: 2025-10-01 17:11:47.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:47 np0005464891 nova_compute[259907]: 2025-10-01 17:11:47.695 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 13:11:47 np0005464891 nova_compute[259907]: 2025-10-01 17:11:47.696 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4266MB free_disk=59.98827362060547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 13:11:47 np0005464891 nova_compute[259907]: 2025-10-01 17:11:47.696 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:11:47 np0005464891 nova_compute[259907]: 2025-10-01 17:11:47.696 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:11:47 np0005464891 nova_compute[259907]: 2025-10-01 17:11:47.829 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance 8575c03c-88ab-44f0-8b99-c5e3874c9610 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 13:11:47 np0005464891 nova_compute[259907]: 2025-10-01 17:11:47.830 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 13:11:47 np0005464891 nova_compute[259907]: 2025-10-01 17:11:47.830 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 13:11:47 np0005464891 nova_compute[259907]: 2025-10-01 17:11:47.883 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:11:47 np0005464891 nova_compute[259907]: 2025-10-01 17:11:47.943 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759338707.9425495, 8575c03c-88ab-44f0-8b99-c5e3874c9610 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 13:11:47 np0005464891 nova_compute[259907]: 2025-10-01 17:11:47.943 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] VM Started (Lifecycle Event)#033[00m
Oct  1 13:11:47 np0005464891 nova_compute[259907]: 2025-10-01 17:11:47.946 2 DEBUG nova.compute.manager [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  1 13:11:47 np0005464891 nova_compute[259907]: 2025-10-01 17:11:47.950 2 DEBUG nova.virt.libvirt.driver [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  1 13:11:47 np0005464891 nova_compute[259907]: 2025-10-01 17:11:47.954 2 INFO nova.virt.libvirt.driver [-] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Instance spawned successfully.#033[00m
Oct  1 13:11:47 np0005464891 nova_compute[259907]: 2025-10-01 17:11:47.954 2 DEBUG nova.virt.libvirt.driver [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.036 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.039 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.053 2 DEBUG nova.virt.libvirt.driver [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.054 2 DEBUG nova.virt.libvirt.driver [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.054 2 DEBUG nova.virt.libvirt.driver [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.055 2 DEBUG nova.virt.libvirt.driver [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.056 2 DEBUG nova.virt.libvirt.driver [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.057 2 DEBUG nova.virt.libvirt.driver [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.195 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.195 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759338707.944043, 8575c03c-88ab-44f0-8b99-c5e3874c9610 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.195 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] VM Paused (Lifecycle Event)#033[00m
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.241 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.244 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759338707.9493468, 8575c03c-88ab-44f0-8b99-c5e3874c9610 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.244 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] VM Resumed (Lifecycle Event)#033[00m
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.252 2 INFO nova.compute.manager [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Took 12.80 seconds to spawn the instance on the hypervisor.#033[00m
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.252 2 DEBUG nova.compute.manager [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.332 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.335 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  1 13:11:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:11:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/853978617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.408 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.413 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.491 2 INFO nova.compute.manager [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Took 17.22 seconds to build instance.#033[00m
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.495 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.564 2 DEBUG oslo_concurrency.lockutils [None req-a5a2425c-ea95-473d-9903-637f1dec719a c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "8575c03c-88ab-44f0-8b99-c5e3874c9610" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.342s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.635 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 13:11:48 np0005464891 nova_compute[259907]: 2025-10-01 17:11:48.636 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:11:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:11:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2222: 321 pgs: 321 active+clean; 385 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 10 op/s
Oct  1 13:11:50 np0005464891 nova_compute[259907]: 2025-10-01 17:11:50.633 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:11:50 np0005464891 nova_compute[259907]: 2025-10-01 17:11:50.795 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:11:50 np0005464891 nova_compute[259907]: 2025-10-01 17:11:50.795 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 13:11:50 np0005464891 nova_compute[259907]: 2025-10-01 17:11:50.795 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 13:11:51 np0005464891 nova_compute[259907]: 2025-10-01 17:11:51.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2223: 321 pgs: 321 active+clean; 385 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 830 KiB/s rd, 12 KiB/s wr, 35 op/s
Oct  1 13:11:51 np0005464891 nova_compute[259907]: 2025-10-01 17:11:51.666 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "refresh_cache-8575c03c-88ab-44f0-8b99-c5e3874c9610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:11:51 np0005464891 nova_compute[259907]: 2025-10-01 17:11:51.666 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquired lock "refresh_cache-8575c03c-88ab-44f0-8b99-c5e3874c9610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:11:51 np0005464891 nova_compute[259907]: 2025-10-01 17:11:51.667 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  1 13:11:51 np0005464891 nova_compute[259907]: 2025-10-01 17:11:51.667 2 DEBUG nova.objects.instance [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8575c03c-88ab-44f0-8b99-c5e3874c9610 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 13:11:52 np0005464891 nova_compute[259907]: 2025-10-01 17:11:52.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2224: 321 pgs: 321 active+clean; 385 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct  1 13:11:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:11:55 np0005464891 nova_compute[259907]: 2025-10-01 17:11:55.057 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Updating instance_info_cache with network_info: [{"id": "29f8a6e5-778f-42d5-a859-12396732abe6", "address": "fa:16:3e:9e:3a:d4", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29f8a6e5-77", "ovs_interfaceid": "29f8a6e5-778f-42d5-a859-12396732abe6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:11:55 np0005464891 nova_compute[259907]: 2025-10-01 17:11:55.148 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Releasing lock "refresh_cache-8575c03c-88ab-44f0-8b99-c5e3874c9610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:11:55 np0005464891 nova_compute[259907]: 2025-10-01 17:11:55.148 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  1 13:11:55 np0005464891 nova_compute[259907]: 2025-10-01 17:11:55.149 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:11:55 np0005464891 nova_compute[259907]: 2025-10-01 17:11:55.149 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:11:55 np0005464891 nova_compute[259907]: 2025-10-01 17:11:55.149 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:11:55 np0005464891 nova_compute[259907]: 2025-10-01 17:11:55.150 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:11:55 np0005464891 nova_compute[259907]: 2025-10-01 17:11:55.150 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:11:55 np0005464891 nova_compute[259907]: 2025-10-01 17:11:55.150 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 13:11:55 np0005464891 nova_compute[259907]: 2025-10-01 17:11:55.317 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:11:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2225: 321 pgs: 321 active+clean; 385 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  1 13:11:56 np0005464891 nova_compute[259907]: 2025-10-01 17:11:56.078 2 DEBUG nova.compute.manager [req-dc18242c-e0d9-48f0-9ba4-9e7d47a0055e req-57e04758-11d2-4c68-92ca-7e87f357e1df af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Received event network-changed-29f8a6e5-778f-42d5-a859-12396732abe6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:11:56 np0005464891 nova_compute[259907]: 2025-10-01 17:11:56.078 2 DEBUG nova.compute.manager [req-dc18242c-e0d9-48f0-9ba4-9e7d47a0055e req-57e04758-11d2-4c68-92ca-7e87f357e1df af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Refreshing instance network info cache due to event network-changed-29f8a6e5-778f-42d5-a859-12396732abe6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 13:11:56 np0005464891 nova_compute[259907]: 2025-10-01 17:11:56.078 2 DEBUG oslo_concurrency.lockutils [req-dc18242c-e0d9-48f0-9ba4-9e7d47a0055e req-57e04758-11d2-4c68-92ca-7e87f357e1df af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-8575c03c-88ab-44f0-8b99-c5e3874c9610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:11:56 np0005464891 nova_compute[259907]: 2025-10-01 17:11:56.079 2 DEBUG oslo_concurrency.lockutils [req-dc18242c-e0d9-48f0-9ba4-9e7d47a0055e req-57e04758-11d2-4c68-92ca-7e87f357e1df af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-8575c03c-88ab-44f0-8b99-c5e3874c9610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:11:56 np0005464891 nova_compute[259907]: 2025-10-01 17:11:56.079 2 DEBUG nova.network.neutron [req-dc18242c-e0d9-48f0-9ba4-9e7d47a0055e req-57e04758-11d2-4c68-92ca-7e87f357e1df af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Refreshing network info cache for port 29f8a6e5-778f-42d5-a859-12396732abe6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 13:11:56 np0005464891 nova_compute[259907]: 2025-10-01 17:11:56.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:57 np0005464891 nova_compute[259907]: 2025-10-01 17:11:57.487 2 DEBUG nova.network.neutron [req-dc18242c-e0d9-48f0-9ba4-9e7d47a0055e req-57e04758-11d2-4c68-92ca-7e87f357e1df af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Updated VIF entry in instance network info cache for port 29f8a6e5-778f-42d5-a859-12396732abe6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 13:11:57 np0005464891 nova_compute[259907]: 2025-10-01 17:11:57.487 2 DEBUG nova.network.neutron [req-dc18242c-e0d9-48f0-9ba4-9e7d47a0055e req-57e04758-11d2-4c68-92ca-7e87f357e1df af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Updating instance_info_cache with network_info: [{"id": "29f8a6e5-778f-42d5-a859-12396732abe6", "address": "fa:16:3e:9e:3a:d4", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29f8a6e5-77", "ovs_interfaceid": "29f8a6e5-778f-42d5-a859-12396732abe6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:11:57 np0005464891 nova_compute[259907]: 2025-10-01 17:11:57.509 2 DEBUG oslo_concurrency.lockutils [req-dc18242c-e0d9-48f0-9ba4-9e7d47a0055e req-57e04758-11d2-4c68-92ca-7e87f357e1df af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-8575c03c-88ab-44f0-8b99-c5e3874c9610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:11:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2226: 321 pgs: 321 active+clean; 385 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  1 13:11:57 np0005464891 nova_compute[259907]: 2025-10-01 17:11:57.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:11:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:11:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2227: 321 pgs: 321 active+clean; 385 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 67 op/s
Oct  1 13:12:00 np0005464891 ovn_controller[152409]: 2025-10-01T17:12:00Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9e:3a:d4 10.100.0.11
Oct  1 13:12:00 np0005464891 ovn_controller[152409]: 2025-10-01T17:12:00Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9e:3a:d4 10.100.0.11
Oct  1 13:12:01 np0005464891 nova_compute[259907]: 2025-10-01 17:12:01.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2228: 321 pgs: 321 active+clean; 385 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 940 KiB/s wr, 77 op/s
Oct  1 13:12:02 np0005464891 nova_compute[259907]: 2025-10-01 17:12:02.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2229: 321 pgs: 321 active+clean; 432 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.8 MiB/s wr, 111 op/s
Oct  1 13:12:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:12:04 np0005464891 podman[314318]: 2025-10-01 17:12:04.97314627 +0000 UTC m=+0.074265991 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:12:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2230: 321 pgs: 321 active+clean; 432 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 539 KiB/s rd, 4.8 MiB/s wr, 73 op/s
Oct  1 13:12:06 np0005464891 nova_compute[259907]: 2025-10-01 17:12:06.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2231: 321 pgs: 321 active+clean; 453 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 543 KiB/s rd, 5.8 MiB/s wr, 76 op/s
Oct  1 13:12:07 np0005464891 nova_compute[259907]: 2025-10-01 17:12:07.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:08 np0005464891 podman[314337]: 2025-10-01 17:12:08.000524878 +0000 UTC m=+0.106405340 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  1 13:12:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:12:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2232: 321 pgs: 321 active+clean; 453 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 543 KiB/s rd, 5.8 MiB/s wr, 76 op/s
Oct  1 13:12:10 np0005464891 podman[314363]: 2025-10-01 17:12:10.942751855 +0000 UTC m=+0.059903355 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  1 13:12:11 np0005464891 nova_compute[259907]: 2025-10-01 17:12:11.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2233: 321 pgs: 321 active+clean; 453 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 540 KiB/s rd, 5.8 MiB/s wr, 75 op/s
Oct  1 13:12:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:12:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:12:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:12:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:12:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:12:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:12:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_17:12:12
Oct  1 13:12:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 13:12:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 13:12:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'backups', 'images', 'vms', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'volumes']
Oct  1 13:12:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 13:12:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:12.473 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:12:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:12.474 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:12:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:12.474 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:12:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 13:12:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:12:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 13:12:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:12:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:12:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:12:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:12:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:12:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:12:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:12:12 np0005464891 nova_compute[259907]: 2025-10-01 17:12:12.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:12 np0005464891 nova_compute[259907]: 2025-10-01 17:12:12.923 2 DEBUG oslo_concurrency.lockutils [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "8575c03c-88ab-44f0-8b99-c5e3874c9610" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:12:12 np0005464891 nova_compute[259907]: 2025-10-01 17:12:12.923 2 DEBUG oslo_concurrency.lockutils [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "8575c03c-88ab-44f0-8b99-c5e3874c9610" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:12:12 np0005464891 nova_compute[259907]: 2025-10-01 17:12:12.924 2 DEBUG oslo_concurrency.lockutils [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "8575c03c-88ab-44f0-8b99-c5e3874c9610-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:12:12 np0005464891 nova_compute[259907]: 2025-10-01 17:12:12.924 2 DEBUG oslo_concurrency.lockutils [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "8575c03c-88ab-44f0-8b99-c5e3874c9610-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:12:12 np0005464891 nova_compute[259907]: 2025-10-01 17:12:12.924 2 DEBUG oslo_concurrency.lockutils [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "8575c03c-88ab-44f0-8b99-c5e3874c9610-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:12:12 np0005464891 nova_compute[259907]: 2025-10-01 17:12:12.925 2 INFO nova.compute.manager [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Terminating instance#033[00m
Oct  1 13:12:12 np0005464891 nova_compute[259907]: 2025-10-01 17:12:12.926 2 DEBUG nova.compute.manager [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 13:12:13 np0005464891 kernel: tap29f8a6e5-77 (unregistering): left promiscuous mode
Oct  1 13:12:13 np0005464891 NetworkManager[44940]: <info>  [1759338733.0592] device (tap29f8a6e5-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 13:12:13 np0005464891 nova_compute[259907]: 2025-10-01 17:12:13.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:13 np0005464891 nova_compute[259907]: 2025-10-01 17:12:13.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:13 np0005464891 ovn_controller[152409]: 2025-10-01T17:12:13Z|00294|binding|INFO|Releasing lport 29f8a6e5-778f-42d5-a859-12396732abe6 from this chassis (sb_readonly=0)
Oct  1 13:12:13 np0005464891 ovn_controller[152409]: 2025-10-01T17:12:13Z|00295|binding|INFO|Setting lport 29f8a6e5-778f-42d5-a859-12396732abe6 down in Southbound
Oct  1 13:12:13 np0005464891 ovn_controller[152409]: 2025-10-01T17:12:13Z|00296|binding|INFO|Removing iface tap29f8a6e5-77 ovn-installed in OVS
Oct  1 13:12:13 np0005464891 nova_compute[259907]: 2025-10-01 17:12:13.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:13 np0005464891 nova_compute[259907]: 2025-10-01 17:12:13.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:13 np0005464891 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Oct  1 13:12:13 np0005464891 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Consumed 16.120s CPU time.
Oct  1 13:12:13 np0005464891 systemd-machined[214891]: Machine qemu-29-instance-0000001d terminated.
Oct  1 13:12:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:13.136 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:3a:d4 10.100.0.11'], port_security=['fa:16:3e:9e:3a:d4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8575c03c-88ab-44f0-8b99-c5e3874c9610', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d747029d-7cd7-4e92-a356-867cacbb54c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7101f2ff48f540a08f6ec15b324152c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd9e37ca0-9284-404d-8dbf-7a2a022ea664', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.194'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ce41f0a-b119-4c05-a337-15f2fa115c91, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=29f8a6e5-778f-42d5-a859-12396732abe6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:12:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:13.137 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 29f8a6e5-778f-42d5-a859-12396732abe6 in datapath d747029d-7cd7-4e92-a356-867cacbb54c4 unbound from our chassis#033[00m
Oct  1 13:12:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:13.138 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d747029d-7cd7-4e92-a356-867cacbb54c4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 13:12:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:13.139 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[ef6927b5-3ff7-4a3c-ae1a-5b437671849b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:12:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:13.161 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4 namespace which is not needed anymore#033[00m
Oct  1 13:12:13 np0005464891 nova_compute[259907]: 2025-10-01 17:12:13.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:13 np0005464891 nova_compute[259907]: 2025-10-01 17:12:13.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:13 np0005464891 nova_compute[259907]: 2025-10-01 17:12:13.202 2 INFO nova.virt.libvirt.driver [-] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Instance destroyed successfully.#033[00m
Oct  1 13:12:13 np0005464891 nova_compute[259907]: 2025-10-01 17:12:13.203 2 DEBUG nova.objects.instance [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lazy-loading 'resources' on Instance uuid 8575c03c-88ab-44f0-8b99-c5e3874c9610 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 13:12:13 np0005464891 podman[314383]: 2025-10-01 17:12:13.225912689 +0000 UTC m=+0.129403315 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  1 13:12:13 np0005464891 nova_compute[259907]: 2025-10-01 17:12:13.271 2 DEBUG nova.virt.libvirt.vif [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T17:11:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1398705824',display_name='tempest-TransferEncryptedVolumeTest-server-1398705824',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1398705824',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCggI6gfQD/mHYmOm6rDTYT2bX0sAibZLzDEC2B5xpj9ltJuTla2hy5xtYjkh93bJjJwE1iJj8Z6crMN4OBz57+5pkjfi89vm+UxnL1pqlzGufhUcmighPnzcAkPF9ezXQ==',key_name='tempest-TransferEncryptedVolumeTest-1396714412',keypairs=<?>,launch_index=0,launched_at=2025-10-01T17:11:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7101f2ff48f540a08f6ec15b324152c6',ramdisk_id='',reservation_id='r-y22io2wf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1550217158',owner_user_name='tempest-TransferEncryptedVolumeTest-1550217158-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T17:11:48Z,user_data=None,user_id='c440275c1a1e4cf09fcf789374345bb2',uuid=8575c03c-88ab-44f0-8b99-c5e3874c9610,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "29f8a6e5-778f-42d5-a859-12396732abe6", "address": "fa:16:3e:9e:3a:d4", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29f8a6e5-77", "ovs_interfaceid": "29f8a6e5-778f-42d5-a859-12396732abe6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 13:12:13 np0005464891 nova_compute[259907]: 2025-10-01 17:12:13.272 2 DEBUG nova.network.os_vif_util [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Converting VIF {"id": "29f8a6e5-778f-42d5-a859-12396732abe6", "address": "fa:16:3e:9e:3a:d4", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29f8a6e5-77", "ovs_interfaceid": "29f8a6e5-778f-42d5-a859-12396732abe6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 13:12:13 np0005464891 nova_compute[259907]: 2025-10-01 17:12:13.273 2 DEBUG nova.network.os_vif_util [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9e:3a:d4,bridge_name='br-int',has_traffic_filtering=True,id=29f8a6e5-778f-42d5-a859-12396732abe6,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29f8a6e5-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 13:12:13 np0005464891 nova_compute[259907]: 2025-10-01 17:12:13.273 2 DEBUG os_vif [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:3a:d4,bridge_name='br-int',has_traffic_filtering=True,id=29f8a6e5-778f-42d5-a859-12396732abe6,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29f8a6e5-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 13:12:13 np0005464891 nova_compute[259907]: 2025-10-01 17:12:13.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:13 np0005464891 nova_compute[259907]: 2025-10-01 17:12:13.275 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29f8a6e5-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:12:13 np0005464891 nova_compute[259907]: 2025-10-01 17:12:13.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:13 np0005464891 nova_compute[259907]: 2025-10-01 17:12:13.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:13 np0005464891 nova_compute[259907]: 2025-10-01 17:12:13.280 2 INFO os_vif [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:3a:d4,bridge_name='br-int',has_traffic_filtering=True,id=29f8a6e5-778f-42d5-a859-12396732abe6,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29f8a6e5-77')#033[00m
Oct  1 13:12:13 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[314249]: [NOTICE]   (314253) : haproxy version is 2.8.14-c23fe91
Oct  1 13:12:13 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[314249]: [NOTICE]   (314253) : path to executable is /usr/sbin/haproxy
Oct  1 13:12:13 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[314249]: [WARNING]  (314253) : Exiting Master process...
Oct  1 13:12:13 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[314249]: [ALERT]    (314253) : Current worker (314255) exited with code 143 (Terminated)
Oct  1 13:12:13 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[314249]: [WARNING]  (314253) : All workers exited. Exiting... (0)
Oct  1 13:12:13 np0005464891 systemd[1]: libpod-d58df6855c811c9a5744f9f16fea9a6a68d740dd6fdeff6d37870ce61fb568e4.scope: Deactivated successfully.
Oct  1 13:12:13 np0005464891 podman[314438]: 2025-10-01 17:12:13.443648982 +0000 UTC m=+0.153231633 container died d58df6855c811c9a5744f9f16fea9a6a68d740dd6fdeff6d37870ce61fb568e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 13:12:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2234: 321 pgs: 321 active+clean; 453 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 500 KiB/s rd, 4.9 MiB/s wr, 65 op/s
Oct  1 13:12:13 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d58df6855c811c9a5744f9f16fea9a6a68d740dd6fdeff6d37870ce61fb568e4-userdata-shm.mount: Deactivated successfully.
Oct  1 13:12:13 np0005464891 systemd[1]: var-lib-containers-storage-overlay-56376ead0d1dbfb2fe180de735a2b3276454e9a3470449f92f4b6c2824f4bb6b-merged.mount: Deactivated successfully.
Oct  1 13:12:14 np0005464891 nova_compute[259907]: 2025-10-01 17:12:14.067 2 DEBUG nova.compute.manager [req-bbb0312f-054a-456b-9a1a-65ab4329d442 req-cbfe0f54-bd1d-4f87-9614-b09cc298b021 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Received event network-vif-unplugged-29f8a6e5-778f-42d5-a859-12396732abe6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:12:14 np0005464891 nova_compute[259907]: 2025-10-01 17:12:14.068 2 DEBUG oslo_concurrency.lockutils [req-bbb0312f-054a-456b-9a1a-65ab4329d442 req-cbfe0f54-bd1d-4f87-9614-b09cc298b021 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "8575c03c-88ab-44f0-8b99-c5e3874c9610-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:12:14 np0005464891 nova_compute[259907]: 2025-10-01 17:12:14.069 2 DEBUG oslo_concurrency.lockutils [req-bbb0312f-054a-456b-9a1a-65ab4329d442 req-cbfe0f54-bd1d-4f87-9614-b09cc298b021 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "8575c03c-88ab-44f0-8b99-c5e3874c9610-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:12:14 np0005464891 nova_compute[259907]: 2025-10-01 17:12:14.069 2 DEBUG oslo_concurrency.lockutils [req-bbb0312f-054a-456b-9a1a-65ab4329d442 req-cbfe0f54-bd1d-4f87-9614-b09cc298b021 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "8575c03c-88ab-44f0-8b99-c5e3874c9610-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:12:14 np0005464891 nova_compute[259907]: 2025-10-01 17:12:14.069 2 DEBUG nova.compute.manager [req-bbb0312f-054a-456b-9a1a-65ab4329d442 req-cbfe0f54-bd1d-4f87-9614-b09cc298b021 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] No waiting events found dispatching network-vif-unplugged-29f8a6e5-778f-42d5-a859-12396732abe6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 13:12:14 np0005464891 nova_compute[259907]: 2025-10-01 17:12:14.069 2 DEBUG nova.compute.manager [req-bbb0312f-054a-456b-9a1a-65ab4329d442 req-cbfe0f54-bd1d-4f87-9614-b09cc298b021 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Received event network-vif-unplugged-29f8a6e5-778f-42d5-a859-12396732abe6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  1 13:12:14 np0005464891 podman[314438]: 2025-10-01 17:12:14.129819942 +0000 UTC m=+0.839402623 container cleanup d58df6855c811c9a5744f9f16fea9a6a68d740dd6fdeff6d37870ce61fb568e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  1 13:12:14 np0005464891 systemd[1]: libpod-conmon-d58df6855c811c9a5744f9f16fea9a6a68d740dd6fdeff6d37870ce61fb568e4.scope: Deactivated successfully.
Oct  1 13:12:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:12:14 np0005464891 podman[314484]: 2025-10-01 17:12:14.491290025 +0000 UTC m=+0.335728493 container remove d58df6855c811c9a5744f9f16fea9a6a68d740dd6fdeff6d37870ce61fb568e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:12:14 np0005464891 kernel: tapd747029d-70: left promiscuous mode
Oct  1 13:12:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:14.497 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[58da0e59-1c93-4008-968a-9a7a319ef90f]: (4, ('Wed Oct  1 05:12:13 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4 (d58df6855c811c9a5744f9f16fea9a6a68d740dd6fdeff6d37870ce61fb568e4)\nd58df6855c811c9a5744f9f16fea9a6a68d740dd6fdeff6d37870ce61fb568e4\nWed Oct  1 05:12:14 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4 (d58df6855c811c9a5744f9f16fea9a6a68d740dd6fdeff6d37870ce61fb568e4)\nd58df6855c811c9a5744f9f16fea9a6a68d740dd6fdeff6d37870ce61fb568e4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:12:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:14.500 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[b8a894d9-acf4-4a76-9c2a-af1adac6bee7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:12:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:14.502 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd747029d-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:12:14 np0005464891 nova_compute[259907]: 2025-10-01 17:12:14.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:14 np0005464891 nova_compute[259907]: 2025-10-01 17:12:14.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:14.536 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[63d1c8d9-5dec-48f7-a7bf-a25bb57762d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:12:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:14.565 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[39debef2-695d-419b-8569-d414de80573a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:12:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:14.567 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[315c5c6e-8bb7-489f-b304-efac3e177aac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:12:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:14.581 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[6c73bb8a-b4fd-4272-bb37-c97432672087]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565290, 'reachable_time': 19881, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314498, 'error': None, 'target': 'ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:12:14 np0005464891 systemd[1]: run-netns-ovnmeta\x2dd747029d\x2d7cd7\x2d4e92\x2da356\x2d867cacbb54c4.mount: Deactivated successfully.
Oct  1 13:12:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:14.585 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 13:12:14 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:14.585 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[48d25cf8-8bb8-4be3-9b9e-8be46f1153e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:12:15 np0005464891 nova_compute[259907]: 2025-10-01 17:12:15.272 2 INFO nova.virt.libvirt.driver [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Deleting instance files /var/lib/nova/instances/8575c03c-88ab-44f0-8b99-c5e3874c9610_del#033[00m
Oct  1 13:12:15 np0005464891 nova_compute[259907]: 2025-10-01 17:12:15.274 2 INFO nova.virt.libvirt.driver [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Deletion of /var/lib/nova/instances/8575c03c-88ab-44f0-8b99-c5e3874c9610_del complete#033[00m
Oct  1 13:12:15 np0005464891 nova_compute[259907]: 2025-10-01 17:12:15.323 2 INFO nova.compute.manager [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Took 2.40 seconds to destroy the instance on the hypervisor.#033[00m
Oct  1 13:12:15 np0005464891 nova_compute[259907]: 2025-10-01 17:12:15.324 2 DEBUG oslo.service.loopingcall [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  1 13:12:15 np0005464891 nova_compute[259907]: 2025-10-01 17:12:15.324 2 DEBUG nova.compute.manager [-] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  1 13:12:15 np0005464891 nova_compute[259907]: 2025-10-01 17:12:15.325 2 DEBUG nova.network.neutron [-] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  1 13:12:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2235: 321 pgs: 321 active+clean; 453 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 7.9 KiB/s rd, 1.0 MiB/s wr, 5 op/s
Oct  1 13:12:16 np0005464891 nova_compute[259907]: 2025-10-01 17:12:16.196 2 DEBUG nova.compute.manager [req-0ed3b0cc-f45c-4666-89b9-69f5e99fa721 req-bf7d9356-d190-4089-b8f1-3f16b0213e9c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Received event network-vif-plugged-29f8a6e5-778f-42d5-a859-12396732abe6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:12:16 np0005464891 nova_compute[259907]: 2025-10-01 17:12:16.197 2 DEBUG oslo_concurrency.lockutils [req-0ed3b0cc-f45c-4666-89b9-69f5e99fa721 req-bf7d9356-d190-4089-b8f1-3f16b0213e9c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "8575c03c-88ab-44f0-8b99-c5e3874c9610-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:12:16 np0005464891 nova_compute[259907]: 2025-10-01 17:12:16.197 2 DEBUG oslo_concurrency.lockutils [req-0ed3b0cc-f45c-4666-89b9-69f5e99fa721 req-bf7d9356-d190-4089-b8f1-3f16b0213e9c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "8575c03c-88ab-44f0-8b99-c5e3874c9610-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:12:16 np0005464891 nova_compute[259907]: 2025-10-01 17:12:16.197 2 DEBUG oslo_concurrency.lockutils [req-0ed3b0cc-f45c-4666-89b9-69f5e99fa721 req-bf7d9356-d190-4089-b8f1-3f16b0213e9c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "8575c03c-88ab-44f0-8b99-c5e3874c9610-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:12:16 np0005464891 nova_compute[259907]: 2025-10-01 17:12:16.198 2 DEBUG nova.compute.manager [req-0ed3b0cc-f45c-4666-89b9-69f5e99fa721 req-bf7d9356-d190-4089-b8f1-3f16b0213e9c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] No waiting events found dispatching network-vif-plugged-29f8a6e5-778f-42d5-a859-12396732abe6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  1 13:12:16 np0005464891 nova_compute[259907]: 2025-10-01 17:12:16.198 2 WARNING nova.compute.manager [req-0ed3b0cc-f45c-4666-89b9-69f5e99fa721 req-bf7d9356-d190-4089-b8f1-3f16b0213e9c af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Received unexpected event network-vif-plugged-29f8a6e5-778f-42d5-a859-12396732abe6 for instance with vm_state active and task_state deleting.#033[00m
Oct  1 13:12:16 np0005464891 nova_compute[259907]: 2025-10-01 17:12:16.247 2 DEBUG nova.network.neutron [-] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:12:16 np0005464891 nova_compute[259907]: 2025-10-01 17:12:16.272 2 INFO nova.compute.manager [-] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Took 0.95 seconds to deallocate network for instance.#033[00m
Oct  1 13:12:16 np0005464891 nova_compute[259907]: 2025-10-01 17:12:16.306 2 DEBUG nova.compute.manager [req-e0338b6b-6ae5-4fd4-9678-72334ac6fc3b req-b7c52a65-c850-45cc-befc-11e0694ee4db af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Received event network-vif-deleted-29f8a6e5-778f-42d5-a859-12396732abe6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:12:16 np0005464891 nova_compute[259907]: 2025-10-01 17:12:16.452 2 INFO nova.compute.manager [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Took 0.18 seconds to detach 1 volumes for instance.#033[00m
Oct  1 13:12:16 np0005464891 nova_compute[259907]: 2025-10-01 17:12:16.506 2 DEBUG oslo_concurrency.lockutils [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:12:16 np0005464891 nova_compute[259907]: 2025-10-01 17:12:16.507 2 DEBUG oslo_concurrency.lockutils [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:12:16 np0005464891 nova_compute[259907]: 2025-10-01 17:12:16.563 2 DEBUG oslo_concurrency.processutils [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:12:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:12:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2743409765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:12:16 np0005464891 nova_compute[259907]: 2025-10-01 17:12:16.997 2 DEBUG oslo_concurrency.processutils [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 13:12:17 np0005464891 nova_compute[259907]: 2025-10-01 17:12:17.006 2 DEBUG nova.compute.provider_tree [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  1 13:12:17 np0005464891 nova_compute[259907]: 2025-10-01 17:12:17.031 2 DEBUG nova.scheduler.client.report [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  1 13:12:17 np0005464891 nova_compute[259907]: 2025-10-01 17:12:17.056 2 DEBUG oslo_concurrency.lockutils [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 13:12:17 np0005464891 nova_compute[259907]: 2025-10-01 17:12:17.096 2 INFO nova.scheduler.client.report [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Deleted allocations for instance 8575c03c-88ab-44f0-8b99-c5e3874c9610
Oct  1 13:12:17 np0005464891 nova_compute[259907]: 2025-10-01 17:12:17.159 2 DEBUG oslo_concurrency.lockutils [None req-ed792a7a-bfb1-41b9-88de-33026445ac44 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "8575c03c-88ab-44f0-8b99-c5e3874c9610" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 13:12:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2236: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.0 MiB/s wr, 17 op/s
Oct  1 13:12:17 np0005464891 nova_compute[259907]: 2025-10-01 17:12:17.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:12:18 np0005464891 nova_compute[259907]: 2025-10-01 17:12:18.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:12:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:12:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2237: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 17 KiB/s wr, 15 op/s
Oct  1 13:12:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2238: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 17 KiB/s wr, 15 op/s
Oct  1 13:12:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 13:12:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:12:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 13:12:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:12:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 13:12:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:12:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0054373870629029104 of space, bias 1.0, pg target 1.6312161188708731 quantized to 32 (current 32)
Oct  1 13:12:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:12:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.9013621638340822e-05 quantized to 32 (current 32)
Oct  1 13:12:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:12:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.19918670028325844 quantized to 32 (current 32)
Oct  1 13:12:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:12:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct  1 13:12:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:12:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:12:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:12:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct  1 13:12:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:12:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct  1 13:12:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:12:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:12:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:12:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct  1 13:12:22 np0005464891 nova_compute[259907]: 2025-10-01 17:12:22.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:12:23 np0005464891 nova_compute[259907]: 2025-10-01 17:12:23.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:12:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2239: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 17 KiB/s wr, 15 op/s
Oct  1 13:12:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:12:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2240: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 9.3 KiB/s rd, 597 B/s wr, 12 op/s
Oct  1 13:12:25 np0005464891 nova_compute[259907]: 2025-10-01 17:12:25.969 2 DEBUG oslo_concurrency.lockutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "ac5687c4-9ce5-46ee-a5ef-861637bb8b07" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 13:12:25 np0005464891 nova_compute[259907]: 2025-10-01 17:12:25.970 2 DEBUG oslo_concurrency.lockutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "ac5687c4-9ce5-46ee-a5ef-861637bb8b07" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 13:12:25 np0005464891 nova_compute[259907]: 2025-10-01 17:12:25.988 2 DEBUG nova.compute.manager [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.081 2 DEBUG oslo_concurrency.lockutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.081 2 DEBUG oslo_concurrency.lockutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.090 2 DEBUG nova.virt.hardware [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.091 2 INFO nova.compute.claims [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Claim successful on node compute-0.ctlplane.example.com
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.175 2 DEBUG oslo_concurrency.processutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 13:12:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:12:26 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:12:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 13:12:26 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:12:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 13:12:26 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:12:26 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 58486357-4a79-4b49-9f7a-764fbb2c607b does not exist
Oct  1 13:12:26 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev c68a7258-1105-4311-874d-63c104ffe284 does not exist
Oct  1 13:12:26 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 91150b59-f4da-4326-b84b-d43b59e4ed3f does not exist
Oct  1 13:12:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 13:12:26 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 13:12:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 13:12:26 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:12:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:12:26 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:12:26 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:12:26 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3278825407' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.653 2 DEBUG oslo_concurrency.processutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.663 2 DEBUG nova.compute.provider_tree [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.684 2 DEBUG nova.scheduler.client.report [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.713 2 DEBUG oslo_concurrency.lockutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.714 2 DEBUG nova.compute.manager [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  1 13:12:26 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:12:26 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:12:26 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.760 2 DEBUG nova.compute.manager [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.761 2 DEBUG nova.network.neutron [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.783 2 INFO nova.virt.libvirt.driver [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.803 2 DEBUG nova.compute.manager [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.844 2 INFO nova.virt.block_device [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Booting with volume 30f6581c-af66-4115-b288-8e22fa5808f0 at /dev/vda
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.952 2 DEBUG os_brick.utils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.954 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.971 741 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.971 741 DEBUG oslo.privsep.daemon [-] privsep: reply[aef2071a-f346-4cc1-80f2-23f86530375c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.974 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.982 741 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.982 741 DEBUG oslo.privsep.daemon [-] privsep: reply[b20eaa8d-8f5e-45b9-bc2c-a4206ae45904]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b2b944312e6f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.985 741 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.994 741 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.994 741 DEBUG oslo.privsep.daemon [-] privsep: reply[01204fe9-0577-45d0-b5ff-a0abea171d60]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.996 741 DEBUG oslo.privsep.daemon [-] privsep: reply[5f769736-b73e-41c3-a88a-0ee92839ac88]: (4, '9659e747-1637-4bf9-8b69-aeb4fd4304e0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  1 13:12:26 np0005464891 nova_compute[259907]: 2025-10-01 17:12:26.997 2 DEBUG oslo_concurrency.processutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 13:12:27 np0005464891 nova_compute[259907]: 2025-10-01 17:12:27.032 2 DEBUG oslo_concurrency.processutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 13:12:27 np0005464891 nova_compute[259907]: 2025-10-01 17:12:27.034 2 DEBUG os_brick.initiator.connectors.lightos [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct  1 13:12:27 np0005464891 nova_compute[259907]: 2025-10-01 17:12:27.034 2 DEBUG os_brick.initiator.connectors.lightos [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct  1 13:12:27 np0005464891 nova_compute[259907]: 2025-10-01 17:12:27.034 2 DEBUG os_brick.initiator.connectors.lightos [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct  1 13:12:27 np0005464891 nova_compute[259907]: 2025-10-01 17:12:27.035 2 DEBUG os_brick.utils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] <== get_connector_properties: return (82ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b2b944312e6f', 'do_local_attach': False, 'nvme_hostid': 'abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'system uuid': '9659e747-1637-4bf9-8b69-aeb4fd4304e0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:abc6dbd1-bb80-4444-a621-a0ff0df4b0b1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct  1 13:12:27 np0005464891 nova_compute[259907]: 2025-10-01 17:12:27.035 2 DEBUG nova.virt.block_device [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Updating existing volume attachment record: 9191812b-9f98-4e4c-9faf-5bfaea6bcb85 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct  1 13:12:27 np0005464891 podman[314823]: 2025-10-01 17:12:27.074567915 +0000 UTC m=+0.042203347 container create 2d15a4d786ce814c0aab34610739b27040d0d44d4bc255151475a71d488f1906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:12:27 np0005464891 nova_compute[259907]: 2025-10-01 17:12:27.083 2 DEBUG nova.policy [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c440275c1a1e4cf09fcf789374345bb2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7101f2ff48f540a08f6ec15b324152c6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  1 13:12:27 np0005464891 systemd[1]: Started libpod-conmon-2d15a4d786ce814c0aab34610739b27040d0d44d4bc255151475a71d488f1906.scope.
Oct  1 13:12:27 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:12:27 np0005464891 podman[314823]: 2025-10-01 17:12:27.056200787 +0000 UTC m=+0.023836229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:12:27 np0005464891 podman[314823]: 2025-10-01 17:12:27.16061609 +0000 UTC m=+0.128251542 container init 2d15a4d786ce814c0aab34610739b27040d0d44d4bc255151475a71d488f1906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:12:27 np0005464891 podman[314823]: 2025-10-01 17:12:27.171686986 +0000 UTC m=+0.139322408 container start 2d15a4d786ce814c0aab34610739b27040d0d44d4bc255151475a71d488f1906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_edison, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:12:27 np0005464891 podman[314823]: 2025-10-01 17:12:27.174555665 +0000 UTC m=+0.142191087 container attach 2d15a4d786ce814c0aab34610739b27040d0d44d4bc255151475a71d488f1906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_edison, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:12:27 np0005464891 intelligent_edison[314839]: 167 167
Oct  1 13:12:27 np0005464891 systemd[1]: libpod-2d15a4d786ce814c0aab34610739b27040d0d44d4bc255151475a71d488f1906.scope: Deactivated successfully.
Oct  1 13:12:27 np0005464891 podman[314823]: 2025-10-01 17:12:27.177605579 +0000 UTC m=+0.145241001 container died 2d15a4d786ce814c0aab34610739b27040d0d44d4bc255151475a71d488f1906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 13:12:27 np0005464891 systemd[1]: var-lib-containers-storage-overlay-92976297a4a827033989a365a1276567f8c98dd7967e256e0b7cb0162aabfc53-merged.mount: Deactivated successfully.
Oct  1 13:12:27 np0005464891 podman[314823]: 2025-10-01 17:12:27.239779817 +0000 UTC m=+0.207415279 container remove 2d15a4d786ce814c0aab34610739b27040d0d44d4bc255151475a71d488f1906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_edison, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 13:12:27 np0005464891 systemd[1]: libpod-conmon-2d15a4d786ce814c0aab34610739b27040d0d44d4bc255151475a71d488f1906.scope: Deactivated successfully.
Oct  1 13:12:27 np0005464891 podman[314861]: 2025-10-01 17:12:27.454523937 +0000 UTC m=+0.072361898 container create 298aeb0e46680da3b88c30e670fdd09b940eb4d30ea3a3ba58f75e5bff63ad43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_yonath, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:12:27 np0005464891 systemd[1]: Started libpod-conmon-298aeb0e46680da3b88c30e670fdd09b940eb4d30ea3a3ba58f75e5bff63ad43.scope.
Oct  1 13:12:27 np0005464891 podman[314861]: 2025-10-01 17:12:27.421010302 +0000 UTC m=+0.038848343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:12:27 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:12:27 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69c425768cf98a5ecbfd81aaabed1367a2cb4ebaef76b66c518f319c3dc762c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:12:27 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69c425768cf98a5ecbfd81aaabed1367a2cb4ebaef76b66c518f319c3dc762c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:12:27 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69c425768cf98a5ecbfd81aaabed1367a2cb4ebaef76b66c518f319c3dc762c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:12:27 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69c425768cf98a5ecbfd81aaabed1367a2cb4ebaef76b66c518f319c3dc762c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:12:27 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69c425768cf98a5ecbfd81aaabed1367a2cb4ebaef76b66c518f319c3dc762c1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 13:12:27 np0005464891 podman[314861]: 2025-10-01 17:12:27.559282931 +0000 UTC m=+0.177120912 container init 298aeb0e46680da3b88c30e670fdd09b940eb4d30ea3a3ba58f75e5bff63ad43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_yonath, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:12:27 np0005464891 podman[314861]: 2025-10-01 17:12:27.56866978 +0000 UTC m=+0.186507731 container start 298aeb0e46680da3b88c30e670fdd09b940eb4d30ea3a3ba58f75e5bff63ad43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_yonath, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:12:27 np0005464891 podman[314861]: 2025-10-01 17:12:27.571997762 +0000 UTC m=+0.189835713 container attach 298aeb0e46680da3b88c30e670fdd09b940eb4d30ea3a3ba58f75e5bff63ad43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 13:12:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2241: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 9.3 KiB/s rd, 597 B/s wr, 12 op/s
Oct  1 13:12:27 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:12:27 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2112100620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:12:27 np0005464891 nova_compute[259907]: 2025-10-01 17:12:27.764 2 DEBUG nova.network.neutron [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Successfully created port: 4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  1 13:12:27 np0005464891 nova_compute[259907]: 2025-10-01 17:12:27.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:27 np0005464891 nova_compute[259907]: 2025-10-01 17:12:27.976 2 DEBUG nova.compute.manager [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  1 13:12:27 np0005464891 nova_compute[259907]: 2025-10-01 17:12:27.977 2 DEBUG nova.virt.libvirt.driver [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  1 13:12:27 np0005464891 nova_compute[259907]: 2025-10-01 17:12:27.978 2 INFO nova.virt.libvirt.driver [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Creating image(s)#033[00m
Oct  1 13:12:27 np0005464891 nova_compute[259907]: 2025-10-01 17:12:27.978 2 DEBUG nova.virt.libvirt.driver [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  1 13:12:27 np0005464891 nova_compute[259907]: 2025-10-01 17:12:27.978 2 DEBUG nova.virt.libvirt.driver [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Ensure instance console log exists: /var/lib/nova/instances/ac5687c4-9ce5-46ee-a5ef-861637bb8b07/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  1 13:12:27 np0005464891 nova_compute[259907]: 2025-10-01 17:12:27.979 2 DEBUG oslo_concurrency.lockutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:12:27 np0005464891 nova_compute[259907]: 2025-10-01 17:12:27.979 2 DEBUG oslo_concurrency.lockutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:12:27 np0005464891 nova_compute[259907]: 2025-10-01 17:12:27.979 2 DEBUG oslo_concurrency.lockutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:12:28 np0005464891 nova_compute[259907]: 2025-10-01 17:12:28.195 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759338733.1950815, 8575c03c-88ab-44f0-8b99-c5e3874c9610 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  1 13:12:28 np0005464891 nova_compute[259907]: 2025-10-01 17:12:28.196 2 INFO nova.compute.manager [-] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] VM Stopped (Lifecycle Event)#033[00m
Oct  1 13:12:28 np0005464891 nova_compute[259907]: 2025-10-01 17:12:28.222 2 DEBUG nova.compute.manager [None req-387ef993-da9c-48c2-b177-cbae7175e28f - - - - - -] [instance: 8575c03c-88ab-44f0-8b99-c5e3874c9610] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  1 13:12:28 np0005464891 nova_compute[259907]: 2025-10-01 17:12:28.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:28 np0005464891 nova_compute[259907]: 2025-10-01 17:12:28.628 2 DEBUG nova.network.neutron [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Successfully updated port: 4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  1 13:12:28 np0005464891 nova_compute[259907]: 2025-10-01 17:12:28.650 2 DEBUG oslo_concurrency.lockutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "refresh_cache-ac5687c4-9ce5-46ee-a5ef-861637bb8b07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:12:28 np0005464891 nova_compute[259907]: 2025-10-01 17:12:28.650 2 DEBUG oslo_concurrency.lockutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquired lock "refresh_cache-ac5687c4-9ce5-46ee-a5ef-861637bb8b07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:12:28 np0005464891 nova_compute[259907]: 2025-10-01 17:12:28.650 2 DEBUG nova.network.neutron [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  1 13:12:28 np0005464891 nova_compute[259907]: 2025-10-01 17:12:28.746 2 DEBUG nova.compute.manager [req-6137dd14-9b48-4f92-b746-342bf6476f6d req-a5f8d1d0-213f-418c-bfb1-ad6826159da9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Received event network-changed-4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:12:28 np0005464891 nova_compute[259907]: 2025-10-01 17:12:28.746 2 DEBUG nova.compute.manager [req-6137dd14-9b48-4f92-b746-342bf6476f6d req-a5f8d1d0-213f-418c-bfb1-ad6826159da9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Refreshing instance network info cache due to event network-changed-4a125766-7fd6-4fa3-ac9a-8ce62baf11b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  1 13:12:28 np0005464891 nova_compute[259907]: 2025-10-01 17:12:28.746 2 DEBUG oslo_concurrency.lockutils [req-6137dd14-9b48-4f92-b746-342bf6476f6d req-a5f8d1d0-213f-418c-bfb1-ad6826159da9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-ac5687c4-9ce5-46ee-a5ef-861637bb8b07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:12:28 np0005464891 exciting_yonath[314878]: --> passed data devices: 0 physical, 3 LVM
Oct  1 13:12:28 np0005464891 exciting_yonath[314878]: --> relative data size: 1.0
Oct  1 13:12:28 np0005464891 exciting_yonath[314878]: --> All data devices are unavailable
Oct  1 13:12:28 np0005464891 nova_compute[259907]: 2025-10-01 17:12:28.797 2 DEBUG nova.network.neutron [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  1 13:12:28 np0005464891 systemd[1]: libpod-298aeb0e46680da3b88c30e670fdd09b940eb4d30ea3a3ba58f75e5bff63ad43.scope: Deactivated successfully.
Oct  1 13:12:28 np0005464891 podman[314861]: 2025-10-01 17:12:28.799147573 +0000 UTC m=+1.416985524 container died 298aeb0e46680da3b88c30e670fdd09b940eb4d30ea3a3ba58f75e5bff63ad43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:12:28 np0005464891 systemd[1]: libpod-298aeb0e46680da3b88c30e670fdd09b940eb4d30ea3a3ba58f75e5bff63ad43.scope: Consumed 1.148s CPU time.
Oct  1 13:12:28 np0005464891 systemd[1]: var-lib-containers-storage-overlay-69c425768cf98a5ecbfd81aaabed1367a2cb4ebaef76b66c518f319c3dc762c1-merged.mount: Deactivated successfully.
Oct  1 13:12:28 np0005464891 podman[314861]: 2025-10-01 17:12:28.861321971 +0000 UTC m=+1.479159952 container remove 298aeb0e46680da3b88c30e670fdd09b940eb4d30ea3a3ba58f75e5bff63ad43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_yonath, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:12:28 np0005464891 systemd[1]: libpod-conmon-298aeb0e46680da3b88c30e670fdd09b940eb4d30ea3a3ba58f75e5bff63ad43.scope: Deactivated successfully.
Oct  1 13:12:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:12:29 np0005464891 podman[315061]: 2025-10-01 17:12:29.475987746 +0000 UTC m=+0.033874317 container create 15dc38fd49b62c2a9ed79a14d13777e24d053de36cbaf8d48b74edeb7f69a54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:12:29 np0005464891 systemd[1]: Started libpod-conmon-15dc38fd49b62c2a9ed79a14d13777e24d053de36cbaf8d48b74edeb7f69a54b.scope.
Oct  1 13:12:29 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:12:29 np0005464891 podman[315061]: 2025-10-01 17:12:29.533841164 +0000 UTC m=+0.091727795 container init 15dc38fd49b62c2a9ed79a14d13777e24d053de36cbaf8d48b74edeb7f69a54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dirac, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:12:29 np0005464891 podman[315061]: 2025-10-01 17:12:29.53916498 +0000 UTC m=+0.097051551 container start 15dc38fd49b62c2a9ed79a14d13777e24d053de36cbaf8d48b74edeb7f69a54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dirac, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:12:29 np0005464891 podman[315061]: 2025-10-01 17:12:29.542348699 +0000 UTC m=+0.100235290 container attach 15dc38fd49b62c2a9ed79a14d13777e24d053de36cbaf8d48b74edeb7f69a54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dirac, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:12:29 np0005464891 stupefied_dirac[315078]: 167 167
Oct  1 13:12:29 np0005464891 systemd[1]: libpod-15dc38fd49b62c2a9ed79a14d13777e24d053de36cbaf8d48b74edeb7f69a54b.scope: Deactivated successfully.
Oct  1 13:12:29 np0005464891 podman[315061]: 2025-10-01 17:12:29.547791589 +0000 UTC m=+0.105678170 container died 15dc38fd49b62c2a9ed79a14d13777e24d053de36cbaf8d48b74edeb7f69a54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dirac, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:12:29 np0005464891 podman[315061]: 2025-10-01 17:12:29.461759773 +0000 UTC m=+0.019646364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:12:29 np0005464891 systemd[1]: var-lib-containers-storage-overlay-fbf7ee5dd846178614d5987c79000d7505b53d8307e59d4de8ba37d210f20fa6-merged.mount: Deactivated successfully.
Oct  1 13:12:29 np0005464891 podman[315061]: 2025-10-01 17:12:29.598199261 +0000 UTC m=+0.156085822 container remove 15dc38fd49b62c2a9ed79a14d13777e24d053de36cbaf8d48b74edeb7f69a54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 13:12:29 np0005464891 systemd[1]: libpod-conmon-15dc38fd49b62c2a9ed79a14d13777e24d053de36cbaf8d48b74edeb7f69a54b.scope: Deactivated successfully.
Oct  1 13:12:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2242: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 852 B/s rd, 255 B/s wr, 1 op/s
Oct  1 13:12:29 np0005464891 podman[315103]: 2025-10-01 17:12:29.814523355 +0000 UTC m=+0.067037352 container create b36ea02f0f328dcf5812f2925157728cdad110fc6ef23f38dc143ed9f3998411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 13:12:29 np0005464891 systemd[1]: Started libpod-conmon-b36ea02f0f328dcf5812f2925157728cdad110fc6ef23f38dc143ed9f3998411.scope.
Oct  1 13:12:29 np0005464891 podman[315103]: 2025-10-01 17:12:29.791639734 +0000 UTC m=+0.044153751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:12:29 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:12:29 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f14afc9d5a63294b497dfb876a3a81c6efc4283d29771b533d8062e1780be2fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:12:29 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f14afc9d5a63294b497dfb876a3a81c6efc4283d29771b533d8062e1780be2fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:12:29 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f14afc9d5a63294b497dfb876a3a81c6efc4283d29771b533d8062e1780be2fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:12:29 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f14afc9d5a63294b497dfb876a3a81c6efc4283d29771b533d8062e1780be2fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:12:29 np0005464891 podman[315103]: 2025-10-01 17:12:29.917515 +0000 UTC m=+0.170028997 container init b36ea02f0f328dcf5812f2925157728cdad110fc6ef23f38dc143ed9f3998411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 13:12:29 np0005464891 podman[315103]: 2025-10-01 17:12:29.937541653 +0000 UTC m=+0.190055620 container start b36ea02f0f328dcf5812f2925157728cdad110fc6ef23f38dc143ed9f3998411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_boyd, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:12:29 np0005464891 podman[315103]: 2025-10-01 17:12:29.940748371 +0000 UTC m=+0.193262338 container attach b36ea02f0f328dcf5812f2925157728cdad110fc6ef23f38dc143ed9f3998411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_boyd, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]: {
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:    "0": [
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:        {
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "devices": [
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "/dev/loop3"
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            ],
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "lv_name": "ceph_lv0",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "lv_size": "21470642176",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "name": "ceph_lv0",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "tags": {
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.cluster_name": "ceph",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.crush_device_class": "",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.encrypted": "0",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.osd_id": "0",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.type": "block",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.vdo": "0"
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            },
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "type": "block",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "vg_name": "ceph_vg0"
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:        }
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:    ],
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:    "1": [
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:        {
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "devices": [
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "/dev/loop4"
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            ],
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "lv_name": "ceph_lv1",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "lv_size": "21470642176",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "name": "ceph_lv1",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "tags": {
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.cluster_name": "ceph",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.crush_device_class": "",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.encrypted": "0",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.osd_id": "1",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.type": "block",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.vdo": "0"
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            },
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "type": "block",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "vg_name": "ceph_vg1"
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:        }
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:    ],
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:    "2": [
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:        {
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "devices": [
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "/dev/loop5"
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            ],
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "lv_name": "ceph_lv2",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "lv_size": "21470642176",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "name": "ceph_lv2",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "tags": {
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.cluster_name": "ceph",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.crush_device_class": "",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.encrypted": "0",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.osd_id": "2",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.type": "block",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:                "ceph.vdo": "0"
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            },
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "type": "block",
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:            "vg_name": "ceph_vg2"
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:        }
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]:    ]
Oct  1 13:12:30 np0005464891 elegant_boyd[315119]: }
Oct  1 13:12:30 np0005464891 systemd[1]: libpod-b36ea02f0f328dcf5812f2925157728cdad110fc6ef23f38dc143ed9f3998411.scope: Deactivated successfully.
Oct  1 13:12:30 np0005464891 podman[315103]: 2025-10-01 17:12:30.698148889 +0000 UTC m=+0.950662866 container died b36ea02f0f328dcf5812f2925157728cdad110fc6ef23f38dc143ed9f3998411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_boyd, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.721 2 DEBUG nova.network.neutron [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Updating instance_info_cache with network_info: [{"id": "4a125766-7fd6-4fa3-ac9a-8ce62baf11b4", "address": "fa:16:3e:60:6c:13", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a125766-7f", "ovs_interfaceid": "4a125766-7fd6-4fa3-ac9a-8ce62baf11b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:12:30 np0005464891 systemd[1]: var-lib-containers-storage-overlay-f14afc9d5a63294b497dfb876a3a81c6efc4283d29771b533d8062e1780be2fb-merged.mount: Deactivated successfully.
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.741 2 DEBUG oslo_concurrency.lockutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Releasing lock "refresh_cache-ac5687c4-9ce5-46ee-a5ef-861637bb8b07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.741 2 DEBUG nova.compute.manager [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Instance network_info: |[{"id": "4a125766-7fd6-4fa3-ac9a-8ce62baf11b4", "address": "fa:16:3e:60:6c:13", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a125766-7f", "ovs_interfaceid": "4a125766-7fd6-4fa3-ac9a-8ce62baf11b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.742 2 DEBUG oslo_concurrency.lockutils [req-6137dd14-9b48-4f92-b746-342bf6476f6d req-a5f8d1d0-213f-418c-bfb1-ad6826159da9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-ac5687c4-9ce5-46ee-a5ef-861637bb8b07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.742 2 DEBUG nova.network.neutron [req-6137dd14-9b48-4f92-b746-342bf6476f6d req-a5f8d1d0-213f-418c-bfb1-ad6826159da9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Refreshing network info cache for port 4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.749 2 DEBUG nova.virt.libvirt.driver [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Start _get_guest_xml network_info=[{"id": "4a125766-7fd6-4fa3-ac9a-8ce62baf11b4", "address": "fa:16:3e:60:6c:13", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a125766-7f", "ovs_interfaceid": "4a125766-7fd6-4fa3-ac9a-8ce62baf11b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'mount_device': '/dev/vda', 'attachment_id': '9191812b-9f98-4e4c-9faf-5bfaea6bcb85', 'disk_bus': 'virtio', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-30f6581c-af66-4115-b288-8e22fa5808f0', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '30f6581c-af66-4115-b288-8e22fa5808f0', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'ac5687c4-9ce5-46ee-a5ef-861637bb8b07', 'attached_at': '', 'detached_at': '', 'volume_id': '30f6581c-af66-4115-b288-8e22fa5808f0', 'serial': '30f6581c-af66-4115-b288-8e22fa5808f0'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.756 2 WARNING nova.virt.libvirt.driver [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.764 2 DEBUG nova.virt.libvirt.host [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.764 2 DEBUG nova.virt.libvirt.host [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  1 13:12:30 np0005464891 podman[315103]: 2025-10-01 17:12:30.767610567 +0000 UTC m=+1.020124554 container remove b36ea02f0f328dcf5812f2925157728cdad110fc6ef23f38dc143ed9f3998411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_boyd, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.767 2 DEBUG nova.virt.libvirt.host [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.768 2 DEBUG nova.virt.libvirt.host [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.769 2 DEBUG nova.virt.libvirt.driver [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.769 2 DEBUG nova.virt.hardware [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-01T16:40:52Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='01c84f56-aade-4a08-b977-e1a2c3a3b49a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.769 2 DEBUG nova.virt.hardware [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.770 2 DEBUG nova.virt.hardware [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.770 2 DEBUG nova.virt.hardware [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.770 2 DEBUG nova.virt.hardware [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.770 2 DEBUG nova.virt.hardware [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.770 2 DEBUG nova.virt.hardware [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.771 2 DEBUG nova.virt.hardware [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.771 2 DEBUG nova.virt.hardware [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.771 2 DEBUG nova.virt.hardware [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.771 2 DEBUG nova.virt.hardware [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  1 13:12:30 np0005464891 systemd[1]: libpod-conmon-b36ea02f0f328dcf5812f2925157728cdad110fc6ef23f38dc143ed9f3998411.scope: Deactivated successfully.
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.795 2 DEBUG nova.storage.rbd_utils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] rbd image ac5687c4-9ce5-46ee-a5ef-861637bb8b07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:12:30 np0005464891 nova_compute[259907]: 2025-10-01 17:12:30.798 2 DEBUG oslo_concurrency.processutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:12:31 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 13:12:31 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1791193959' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.264 2 DEBUG oslo_concurrency.processutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.391 2 DEBUG os_brick.encryptors [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Using volume encryption metadata '{'encryption_key_id': 'c00385ea-6114-48a4-b1ed-0277e3fa88a6', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-30f6581c-af66-4115-b288-8e22fa5808f0', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '30f6581c-af66-4115-b288-8e22fa5808f0', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'ac5687c4-9ce5-46ee-a5ef-861637bb8b07', 'attached_at': '', 'detached_at': '', 'volume_id': '30f6581c-af66-4115-b288-8e22fa5808f0', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.393 2 DEBUG barbicanclient.client [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.413 2 DEBUG barbicanclient.v1.secrets [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/c00385ea-6114-48a4-b1ed-0277e3fa88a6 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.413 2 INFO barbicanclient.base [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/c00385ea-6114-48a4-b1ed-0277e3fa88a6#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.437 2 DEBUG barbicanclient.client [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.438 2 INFO barbicanclient.base [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/c00385ea-6114-48a4-b1ed-0277e3fa88a6#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.470 2 DEBUG barbicanclient.client [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.470 2 INFO barbicanclient.base [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/c00385ea-6114-48a4-b1ed-0277e3fa88a6#033[00m
Oct  1 13:12:31 np0005464891 podman[315321]: 2025-10-01 17:12:31.487522709 +0000 UTC m=+0.042226987 container create dd13cafacae1ccac3576f55f84e8eff8f3d0612e38dd14218e8447f9b32e759a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_liskov, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.504 2 DEBUG barbicanclient.client [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.505 2 INFO barbicanclient.base [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/c00385ea-6114-48a4-b1ed-0277e3fa88a6#033[00m
Oct  1 13:12:31 np0005464891 systemd[1]: Started libpod-conmon-dd13cafacae1ccac3576f55f84e8eff8f3d0612e38dd14218e8447f9b32e759a.scope.
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.530 2 DEBUG barbicanclient.client [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.530 2 INFO barbicanclient.base [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/c00385ea-6114-48a4-b1ed-0277e3fa88a6#033[00m
Oct  1 13:12:31 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.556 2 DEBUG barbicanclient.client [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.556 2 INFO barbicanclient.base [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/c00385ea-6114-48a4-b1ed-0277e3fa88a6#033[00m
Oct  1 13:12:31 np0005464891 podman[315321]: 2025-10-01 17:12:31.467176028 +0000 UTC m=+0.021880326 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:12:31 np0005464891 podman[315321]: 2025-10-01 17:12:31.568013622 +0000 UTC m=+0.122717920 container init dd13cafacae1ccac3576f55f84e8eff8f3d0612e38dd14218e8447f9b32e759a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:12:31 np0005464891 podman[315321]: 2025-10-01 17:12:31.574231854 +0000 UTC m=+0.128936162 container start dd13cafacae1ccac3576f55f84e8eff8f3d0612e38dd14218e8447f9b32e759a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.578 2 DEBUG barbicanclient.client [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.578 2 INFO barbicanclient.base [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/c00385ea-6114-48a4-b1ed-0277e3fa88a6#033[00m
Oct  1 13:12:31 np0005464891 stoic_liskov[315337]: 167 167
Oct  1 13:12:31 np0005464891 systemd[1]: libpod-dd13cafacae1ccac3576f55f84e8eff8f3d0612e38dd14218e8447f9b32e759a.scope: Deactivated successfully.
Oct  1 13:12:31 np0005464891 podman[315321]: 2025-10-01 17:12:31.583625173 +0000 UTC m=+0.138329641 container attach dd13cafacae1ccac3576f55f84e8eff8f3d0612e38dd14218e8447f9b32e759a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_liskov, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 13:12:31 np0005464891 podman[315321]: 2025-10-01 17:12:31.583906351 +0000 UTC m=+0.138610629 container died dd13cafacae1ccac3576f55f84e8eff8f3d0612e38dd14218e8447f9b32e759a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.602 2 DEBUG barbicanclient.client [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.602 2 INFO barbicanclient.base [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/c00385ea-6114-48a4-b1ed-0277e3fa88a6#033[00m
Oct  1 13:12:31 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ff226192ae8ead398b96d93a5477dc9cc6469a120867cdad5242b65b1744b429-merged.mount: Deactivated successfully.
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.627 2 DEBUG barbicanclient.client [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.627 2 INFO barbicanclient.base [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/c00385ea-6114-48a4-b1ed-0277e3fa88a6#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.657 2 DEBUG barbicanclient.client [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:12:31 np0005464891 podman[315321]: 2025-10-01 17:12:31.657921065 +0000 UTC m=+0.212625373 container remove dd13cafacae1ccac3576f55f84e8eff8f3d0612e38dd14218e8447f9b32e759a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.658 2 INFO barbicanclient.base [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/c00385ea-6114-48a4-b1ed-0277e3fa88a6#033[00m
Oct  1 13:12:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2243: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:12:31 np0005464891 systemd[1]: libpod-conmon-dd13cafacae1ccac3576f55f84e8eff8f3d0612e38dd14218e8447f9b32e759a.scope: Deactivated successfully.
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.689 2 DEBUG barbicanclient.client [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.690 2 INFO barbicanclient.base [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/c00385ea-6114-48a4-b1ed-0277e3fa88a6#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.710 2 DEBUG barbicanclient.client [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.710 2 INFO barbicanclient.base [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/c00385ea-6114-48a4-b1ed-0277e3fa88a6#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.728 2 DEBUG barbicanclient.client [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.728 2 INFO barbicanclient.base [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/c00385ea-6114-48a4-b1ed-0277e3fa88a6#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.730 2 DEBUG nova.network.neutron [req-6137dd14-9b48-4f92-b746-342bf6476f6d req-a5f8d1d0-213f-418c-bfb1-ad6826159da9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Updated VIF entry in instance network info cache for port 4a125766-7fd6-4fa3-ac9a-8ce62baf11b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.731 2 DEBUG nova.network.neutron [req-6137dd14-9b48-4f92-b746-342bf6476f6d req-a5f8d1d0-213f-418c-bfb1-ad6826159da9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Updating instance_info_cache with network_info: [{"id": "4a125766-7fd6-4fa3-ac9a-8ce62baf11b4", "address": "fa:16:3e:60:6c:13", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a125766-7f", "ovs_interfaceid": "4a125766-7fd6-4fa3-ac9a-8ce62baf11b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.746 2 DEBUG oslo_concurrency.lockutils [req-6137dd14-9b48-4f92-b746-342bf6476f6d req-a5f8d1d0-213f-418c-bfb1-ad6826159da9 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-ac5687c4-9ce5-46ee-a5ef-861637bb8b07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.750 2 DEBUG barbicanclient.client [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.750 2 INFO barbicanclient.base [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/c00385ea-6114-48a4-b1ed-0277e3fa88a6#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.766 2 DEBUG barbicanclient.client [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.766 2 INFO barbicanclient.base [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Calculated Secrets uuid ref: secrets/c00385ea-6114-48a4-b1ed-0277e3fa88a6#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.798 2 DEBUG barbicanclient.client [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.798 2 DEBUG nova.virt.libvirt.host [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct  1 13:12:31 np0005464891 nova_compute[259907]:  <usage type="volume">
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <volume>30f6581c-af66-4115-b288-8e22fa5808f0</volume>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:  </usage>
Oct  1 13:12:31 np0005464891 nova_compute[259907]: </secret>
Oct  1 13:12:31 np0005464891 nova_compute[259907]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.826 2 DEBUG nova.virt.libvirt.vif [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T17:12:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-2072355531',display_name='tempest-TransferEncryptedVolumeTest-server-2072355531',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-2072355531',id=30,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCggI6gfQD/mHYmOm6rDTYT2bX0sAibZLzDEC2B5xpj9ltJuTla2hy5xtYjkh93bJjJwE1iJj8Z6crMN4OBz57+5pkjfi89vm+UxnL1pqlzGufhUcmighPnzcAkPF9ezXQ==',key_name='tempest-TransferEncryptedVolumeTest-1396714412',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7101f2ff48f540a08f6ec15b324152c6',ramdisk_id='',reservation_id='r-6vsbkk4p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1550217158',owner_user_name='tempest-TransferEncryptedVolumeTest-1550217158-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T17:12:26Z,user_data=None,user_id='c440275c1a1e4cf09fcf789374345bb2',uuid=ac5687c4-9ce5-46ee-a5ef-861637bb8b07,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4a125766-7fd6-4fa3-ac9a-8ce62baf11b4", "address": "fa:16:3e:60:6c:13", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a125766-7f", "ovs_interfaceid": "4a125766-7fd6-4fa3-ac9a-8ce62baf11b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.827 2 DEBUG nova.network.os_vif_util [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Converting VIF {"id": "4a125766-7fd6-4fa3-ac9a-8ce62baf11b4", "address": "fa:16:3e:60:6c:13", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a125766-7f", "ovs_interfaceid": "4a125766-7fd6-4fa3-ac9a-8ce62baf11b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.828 2 DEBUG nova.network.os_vif_util [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:6c:13,bridge_name='br-int',has_traffic_filtering=True,id=4a125766-7fd6-4fa3-ac9a-8ce62baf11b4,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a125766-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.831 2 DEBUG nova.objects.instance [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lazy-loading 'pci_devices' on Instance uuid ac5687c4-9ce5-46ee-a5ef-861637bb8b07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.848 2 DEBUG nova.virt.libvirt.driver [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] End _get_guest_xml xml=<domain type="kvm">
Oct  1 13:12:31 np0005464891 nova_compute[259907]:  <uuid>ac5687c4-9ce5-46ee-a5ef-861637bb8b07</uuid>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:  <name>instance-0000001e</name>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:  <memory>131072</memory>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:  <vcpu>1</vcpu>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:  <metadata>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-2072355531</nova:name>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <nova:creationTime>2025-10-01 17:12:30</nova:creationTime>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <nova:flavor name="m1.nano">
Oct  1 13:12:31 np0005464891 nova_compute[259907]:        <nova:memory>128</nova:memory>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:        <nova:disk>1</nova:disk>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:        <nova:swap>0</nova:swap>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:        <nova:ephemeral>0</nova:ephemeral>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:        <nova:vcpus>1</nova:vcpus>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      </nova:flavor>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <nova:owner>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:        <nova:user uuid="c440275c1a1e4cf09fcf789374345bb2">tempest-TransferEncryptedVolumeTest-1550217158-project-member</nova:user>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:        <nova:project uuid="7101f2ff48f540a08f6ec15b324152c6">tempest-TransferEncryptedVolumeTest-1550217158</nova:project>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      </nova:owner>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <nova:ports>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:        <nova:port uuid="4a125766-7fd6-4fa3-ac9a-8ce62baf11b4">
Oct  1 13:12:31 np0005464891 nova_compute[259907]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:        </nova:port>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      </nova:ports>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    </nova:instance>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:  </metadata>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:  <sysinfo type="smbios">
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <system>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <entry name="manufacturer">RDO</entry>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <entry name="product">OpenStack Compute</entry>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <entry name="serial">ac5687c4-9ce5-46ee-a5ef-861637bb8b07</entry>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <entry name="uuid">ac5687c4-9ce5-46ee-a5ef-861637bb8b07</entry>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <entry name="family">Virtual Machine</entry>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    </system>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:  </sysinfo>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:  <os>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <boot dev="hd"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <smbios mode="sysinfo"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:  </os>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:  <features>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <acpi/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <apic/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <vmcoreinfo/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:  </features>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:  <clock offset="utc">
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <timer name="pit" tickpolicy="delay"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <timer name="hpet" present="no"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:  </clock>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:  <cpu mode="host-model" match="exact">
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <topology sockets="1" cores="1" threads="1"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:  </cpu>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:  <devices>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <disk type="network" device="cdrom">
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <driver type="raw" cache="none"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="vms/ac5687c4-9ce5-46ee-a5ef-861637bb8b07_disk.config">
Oct  1 13:12:31 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      </source>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 13:12:31 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      </auth>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <target dev="sda" bus="sata"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    </disk>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <disk type="network" device="disk">
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <source protocol="rbd" name="volumes/volume-30f6581c-af66-4115-b288-8e22fa5808f0">
Oct  1 13:12:31 np0005464891 nova_compute[259907]:        <host name="192.168.122.100" port="6789"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      </source>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <auth username="openstack">
Oct  1 13:12:31 np0005464891 nova_compute[259907]:        <secret type="ceph" uuid="6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      </auth>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <target dev="vda" bus="virtio"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <serial>30f6581c-af66-4115-b288-8e22fa5808f0</serial>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <encryption format="luks">
Oct  1 13:12:31 np0005464891 nova_compute[259907]:        <secret type="passphrase" uuid="09f5fb40-d42a-4f4f-99e1-4f62d2727ba4"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      </encryption>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    </disk>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <interface type="ethernet">
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <mac address="fa:16:3e:60:6c:13"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <driver name="vhost" rx_queue_size="512"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <mtu size="1442"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <target dev="tap4a125766-7f"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    </interface>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <serial type="pty">
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <log file="/var/lib/nova/instances/ac5687c4-9ce5-46ee-a5ef-861637bb8b07/console.log" append="off"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    </serial>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <video>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <model type="virtio"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    </video>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <input type="tablet" bus="usb"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <rng model="virtio">
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <backend model="random">/dev/urandom</backend>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    </rng>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="pci" model="pcie-root-port"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <controller type="usb" index="0"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    <memballoon model="virtio">
Oct  1 13:12:31 np0005464891 nova_compute[259907]:      <stats period="10"/>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:    </memballoon>
Oct  1 13:12:31 np0005464891 nova_compute[259907]:  </devices>
Oct  1 13:12:31 np0005464891 nova_compute[259907]: </domain>
Oct  1 13:12:31 np0005464891 nova_compute[259907]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.849 2 DEBUG nova.compute.manager [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Preparing to wait for external event network-vif-plugged-4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.849 2 DEBUG oslo_concurrency.lockutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "ac5687c4-9ce5-46ee-a5ef-861637bb8b07-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.849 2 DEBUG oslo_concurrency.lockutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "ac5687c4-9ce5-46ee-a5ef-861637bb8b07-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.850 2 DEBUG oslo_concurrency.lockutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "ac5687c4-9ce5-46ee-a5ef-861637bb8b07-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.851 2 DEBUG nova.virt.libvirt.vif [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-01T17:12:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-2072355531',display_name='tempest-TransferEncryptedVolumeTest-server-2072355531',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-2072355531',id=30,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCggI6gfQD/mHYmOm6rDTYT2bX0sAibZLzDEC2B5xpj9ltJuTla2hy5xtYjkh93bJjJwE1iJj8Z6crMN4OBz57+5pkjfi89vm+UxnL1pqlzGufhUcmighPnzcAkPF9ezXQ==',key_name='tempest-TransferEncryptedVolumeTest-1396714412',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7101f2ff48f540a08f6ec15b324152c6',ramdisk_id='',reservation_id='r-6vsbkk4p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1550217158',owner_user_name='tempest-TransferEncryptedVolumeTest-1550217158-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-01T17:12:26Z,user_data=None,user_id='c440275c1a1e4cf09fcf789374345bb2',uuid=ac5687c4-9ce5-46ee-a5ef-861637bb8b07,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4a125766-7fd6-4fa3-ac9a-8ce62baf11b4", "address": "fa:16:3e:60:6c:13", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a125766-7f", "ovs_interfaceid": "4a125766-7fd6-4fa3-ac9a-8ce62baf11b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.852 2 DEBUG nova.network.os_vif_util [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Converting VIF {"id": "4a125766-7fd6-4fa3-ac9a-8ce62baf11b4", "address": "fa:16:3e:60:6c:13", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a125766-7f", "ovs_interfaceid": "4a125766-7fd6-4fa3-ac9a-8ce62baf11b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.853 2 DEBUG nova.network.os_vif_util [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:6c:13,bridge_name='br-int',has_traffic_filtering=True,id=4a125766-7fd6-4fa3-ac9a-8ce62baf11b4,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a125766-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.853 2 DEBUG os_vif [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:6c:13,bridge_name='br-int',has_traffic_filtering=True,id=4a125766-7fd6-4fa3-ac9a-8ce62baf11b4,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a125766-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.855 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.855 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 13:12:31 np0005464891 podman[315361]: 2025-10-01 17:12:31.856745236 +0000 UTC m=+0.056028088 container create 707979dec4acb3ab6a6faca1895f69f4e528d72e21ad0dc8c0980206298273d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.860 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a125766-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.861 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4a125766-7f, col_values=(('external_ids', {'iface-id': '4a125766-7fd6-4fa3-ac9a-8ce62baf11b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:60:6c:13', 'vm-uuid': 'ac5687c4-9ce5-46ee-a5ef-861637bb8b07'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:12:31 np0005464891 NetworkManager[44940]: <info>  [1759338751.8652] manager: (tap4a125766-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.878 2 INFO os_vif [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:6c:13,bridge_name='br-int',has_traffic_filtering=True,id=4a125766-7fd6-4fa3-ac9a-8ce62baf11b4,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a125766-7f')#033[00m
Oct  1 13:12:31 np0005464891 systemd[1]: Started libpod-conmon-707979dec4acb3ab6a6faca1895f69f4e528d72e21ad0dc8c0980206298273d9.scope.
Oct  1 13:12:31 np0005464891 podman[315361]: 2025-10-01 17:12:31.828348272 +0000 UTC m=+0.027631144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:12:31 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:12:31 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29030081bf1fa9196d7e3cf3976c9842c75fd666f2965233c2b98e6aa84c63f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:12:31 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29030081bf1fa9196d7e3cf3976c9842c75fd666f2965233c2b98e6aa84c63f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:12:31 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29030081bf1fa9196d7e3cf3976c9842c75fd666f2965233c2b98e6aa84c63f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:12:31 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29030081bf1fa9196d7e3cf3976c9842c75fd666f2965233c2b98e6aa84c63f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.959 2 DEBUG nova.virt.libvirt.driver [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.960 2 DEBUG nova.virt.libvirt.driver [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.960 2 DEBUG nova.virt.libvirt.driver [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] No VIF found with MAC fa:16:3e:60:6c:13, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.961 2 INFO nova.virt.libvirt.driver [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Using config drive#033[00m
Oct  1 13:12:31 np0005464891 podman[315361]: 2025-10-01 17:12:31.967232007 +0000 UTC m=+0.166514879 container init 707979dec4acb3ab6a6faca1895f69f4e528d72e21ad0dc8c0980206298273d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct  1 13:12:31 np0005464891 podman[315361]: 2025-10-01 17:12:31.977305946 +0000 UTC m=+0.176588808 container start 707979dec4acb3ab6a6faca1895f69f4e528d72e21ad0dc8c0980206298273d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:12:31 np0005464891 podman[315361]: 2025-10-01 17:12:31.983024114 +0000 UTC m=+0.182306966 container attach 707979dec4acb3ab6a6faca1895f69f4e528d72e21ad0dc8c0980206298273d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:12:31 np0005464891 nova_compute[259907]: 2025-10-01 17:12:31.991 2 DEBUG nova.storage.rbd_utils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] rbd image ac5687c4-9ce5-46ee-a5ef-861637bb8b07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:12:32 np0005464891 nova_compute[259907]: 2025-10-01 17:12:32.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:32 np0005464891 nova_compute[259907]: 2025-10-01 17:12:32.817 2 INFO nova.virt.libvirt.driver [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Creating config drive at /var/lib/nova/instances/ac5687c4-9ce5-46ee-a5ef-861637bb8b07/disk.config#033[00m
Oct  1 13:12:32 np0005464891 nova_compute[259907]: 2025-10-01 17:12:32.822 2 DEBUG oslo_concurrency.processutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ac5687c4-9ce5-46ee-a5ef-861637bb8b07/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9pvqypmy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:12:32 np0005464891 nova_compute[259907]: 2025-10-01 17:12:32.950 2 DEBUG oslo_concurrency.processutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ac5687c4-9ce5-46ee-a5ef-861637bb8b07/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9pvqypmy" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:12:32 np0005464891 nova_compute[259907]: 2025-10-01 17:12:32.982 2 DEBUG nova.storage.rbd_utils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] rbd image ac5687c4-9ce5-46ee-a5ef-861637bb8b07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  1 13:12:32 np0005464891 nova_compute[259907]: 2025-10-01 17:12:32.987 2 DEBUG oslo_concurrency.processutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ac5687c4-9ce5-46ee-a5ef-861637bb8b07/disk.config ac5687c4-9ce5-46ee-a5ef-861637bb8b07_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:12:33 np0005464891 nova_compute[259907]: 2025-10-01 17:12:33.147 2 DEBUG oslo_concurrency.processutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ac5687c4-9ce5-46ee-a5ef-861637bb8b07/disk.config ac5687c4-9ce5-46ee-a5ef-861637bb8b07_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:12:33 np0005464891 nova_compute[259907]: 2025-10-01 17:12:33.147 2 INFO nova.virt.libvirt.driver [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Deleting local config drive /var/lib/nova/instances/ac5687c4-9ce5-46ee-a5ef-861637bb8b07/disk.config because it was imported into RBD.#033[00m
Oct  1 13:12:33 np0005464891 quizzical_varahamihira[315380]: {
Oct  1 13:12:33 np0005464891 quizzical_varahamihira[315380]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 13:12:33 np0005464891 quizzical_varahamihira[315380]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:12:33 np0005464891 quizzical_varahamihira[315380]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 13:12:33 np0005464891 quizzical_varahamihira[315380]:        "osd_id": 2,
Oct  1 13:12:33 np0005464891 quizzical_varahamihira[315380]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:12:33 np0005464891 quizzical_varahamihira[315380]:        "type": "bluestore"
Oct  1 13:12:33 np0005464891 quizzical_varahamihira[315380]:    },
Oct  1 13:12:33 np0005464891 quizzical_varahamihira[315380]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 13:12:33 np0005464891 quizzical_varahamihira[315380]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:12:33 np0005464891 quizzical_varahamihira[315380]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 13:12:33 np0005464891 quizzical_varahamihira[315380]:        "osd_id": 0,
Oct  1 13:12:33 np0005464891 quizzical_varahamihira[315380]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:12:33 np0005464891 quizzical_varahamihira[315380]:        "type": "bluestore"
Oct  1 13:12:33 np0005464891 quizzical_varahamihira[315380]:    },
Oct  1 13:12:33 np0005464891 quizzical_varahamihira[315380]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 13:12:33 np0005464891 quizzical_varahamihira[315380]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:12:33 np0005464891 quizzical_varahamihira[315380]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 13:12:33 np0005464891 quizzical_varahamihira[315380]:        "osd_id": 1,
Oct  1 13:12:33 np0005464891 quizzical_varahamihira[315380]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:12:33 np0005464891 quizzical_varahamihira[315380]:        "type": "bluestore"
Oct  1 13:12:33 np0005464891 quizzical_varahamihira[315380]:    }
Oct  1 13:12:33 np0005464891 quizzical_varahamihira[315380]: }
Oct  1 13:12:33 np0005464891 systemd[1]: libpod-707979dec4acb3ab6a6faca1895f69f4e528d72e21ad0dc8c0980206298273d9.scope: Deactivated successfully.
Oct  1 13:12:33 np0005464891 systemd[1]: libpod-707979dec4acb3ab6a6faca1895f69f4e528d72e21ad0dc8c0980206298273d9.scope: Consumed 1.204s CPU time.
Oct  1 13:12:33 np0005464891 podman[315361]: 2025-10-01 17:12:33.182049328 +0000 UTC m=+1.381332200 container died 707979dec4acb3ab6a6faca1895f69f4e528d72e21ad0dc8c0980206298273d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:12:33 np0005464891 systemd[1]: var-lib-containers-storage-overlay-29030081bf1fa9196d7e3cf3976c9842c75fd666f2965233c2b98e6aa84c63f4-merged.mount: Deactivated successfully.
Oct  1 13:12:33 np0005464891 NetworkManager[44940]: <info>  [1759338753.2351] manager: (tap4a125766-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/152)
Oct  1 13:12:33 np0005464891 kernel: tap4a125766-7f: entered promiscuous mode
Oct  1 13:12:33 np0005464891 podman[315361]: 2025-10-01 17:12:33.236753978 +0000 UTC m=+1.436036860 container remove 707979dec4acb3ab6a6faca1895f69f4e528d72e21ad0dc8c0980206298273d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:12:33 np0005464891 nova_compute[259907]: 2025-10-01 17:12:33.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:33 np0005464891 ovn_controller[152409]: 2025-10-01T17:12:33Z|00297|binding|INFO|Claiming lport 4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 for this chassis.
Oct  1 13:12:33 np0005464891 ovn_controller[152409]: 2025-10-01T17:12:33Z|00298|binding|INFO|4a125766-7fd6-4fa3-ac9a-8ce62baf11b4: Claiming fa:16:3e:60:6c:13 10.100.0.8
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.250 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:6c:13 10.100.0.8'], port_security=['fa:16:3e:60:6c:13 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ac5687c4-9ce5-46ee-a5ef-861637bb8b07', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d747029d-7cd7-4e92-a356-867cacbb54c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7101f2ff48f540a08f6ec15b324152c6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd9e37ca0-9284-404d-8dbf-7a2a022ea664', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ce41f0a-b119-4c05-a337-15f2fa115c91, chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=4a125766-7fd6-4fa3-ac9a-8ce62baf11b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.251 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 in datapath d747029d-7cd7-4e92-a356-867cacbb54c4 bound to our chassis#033[00m
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.252 162546 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d747029d-7cd7-4e92-a356-867cacbb54c4#033[00m
Oct  1 13:12:33 np0005464891 ovn_controller[152409]: 2025-10-01T17:12:33Z|00299|binding|INFO|Setting lport 4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 ovn-installed in OVS
Oct  1 13:12:33 np0005464891 ovn_controller[152409]: 2025-10-01T17:12:33Z|00300|binding|INFO|Setting lport 4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 up in Southbound
Oct  1 13:12:33 np0005464891 systemd[1]: libpod-conmon-707979dec4acb3ab6a6faca1895f69f4e528d72e21ad0dc8c0980206298273d9.scope: Deactivated successfully.
Oct  1 13:12:33 np0005464891 nova_compute[259907]: 2025-10-01 17:12:33.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.273 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[fa63fd22-8eb3-4a00-a811-b723930fed49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.274 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd747029d-71 in ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.277 267902 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd747029d-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.277 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[5a9c6255-19e0-47a4-81bf-cfea8cc59063]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.277 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[647946fd-ce5f-4449-a5e5-7163ee7702dd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:12:33 np0005464891 systemd-machined[214891]: New machine qemu-30-instance-0000001e.
Oct  1 13:12:33 np0005464891 systemd[1]: Started Virtual Machine qemu-30-instance-0000001e.
Oct  1 13:12:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.296 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[2c61142f-a9bb-427f-b20f-740ca9ceac50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:12:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:12:33 np0005464891 systemd-udevd[315496]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 13:12:33 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 13:12:33 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:12:33 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev b30788ae-7121-400e-8948-c8f670abc2f7 does not exist
Oct  1 13:12:33 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev a7c6be55-b84e-44fb-8144-dd5ef84e573e does not exist
Oct  1 13:12:33 np0005464891 NetworkManager[44940]: <info>  [1759338753.3155] device (tap4a125766-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 13:12:33 np0005464891 NetworkManager[44940]: <info>  [1759338753.3172] device (tap4a125766-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.322 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[47e5fc73-6bc0-4def-a0f9-eaacec075833]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.352 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[6e7ae1b3-e388-4bc8-b03c-af7957af0aec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.357 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[89644dec-f59b-45b1-a358-45939a8bdd63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:12:33 np0005464891 systemd-udevd[315500]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 13:12:33 np0005464891 NetworkManager[44940]: <info>  [1759338753.3605] manager: (tapd747029d-70): new Veth device (/org/freedesktop/NetworkManager/Devices/153)
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.384 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[b460ec6d-67f5-4fb0-994b-6ab18d894e96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.388 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[8c49bf92-db7b-4fb9-917e-538efb1c8943]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:12:33 np0005464891 NetworkManager[44940]: <info>  [1759338753.4058] device (tapd747029d-70): carrier: link connected
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.410 267917 DEBUG oslo.privsep.daemon [-] privsep: reply[25b04686-29c1-4c3e-9dcc-e4556810bc07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.425 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[8f92bfba-f787-4c46-8128-21d6964f690c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd747029d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:a1:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570197, 'reachable_time': 27394, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315575, 'error': None, 'target': 'ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.441 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[5c195cdc-8196-4945-9348-19a6c8b2a177]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea1:a1a7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570197, 'tstamp': 570197}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315578, 'error': None, 'target': 'ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.455 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd72b47-d30e-49f6-bfa1-67f86cbf9fa5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd747029d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:a1:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570197, 'reachable_time': 27394, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315579, 'error': None, 'target': 'ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.485 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[bdefc064-9fdc-43b3-9d99-5c3a9af0eebd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.555 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[e5a5491b-7b1a-4189-90b9-be1de88189a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.556 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd747029d-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.557 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.557 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd747029d-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:12:33 np0005464891 nova_compute[259907]: 2025-10-01 17:12:33.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:33 np0005464891 NetworkManager[44940]: <info>  [1759338753.5596] manager: (tapd747029d-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/154)
Oct  1 13:12:33 np0005464891 kernel: tapd747029d-70: entered promiscuous mode
Oct  1 13:12:33 np0005464891 nova_compute[259907]: 2025-10-01 17:12:33.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.564 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd747029d-70, col_values=(('external_ids', {'iface-id': '3454e5b0-0c54-4314-89c0-47c1b5603195'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:12:33 np0005464891 ovn_controller[152409]: 2025-10-01T17:12:33Z|00301|binding|INFO|Releasing lport 3454e5b0-0c54-4314-89c0-47c1b5603195 from this chassis (sb_readonly=0)
Oct  1 13:12:33 np0005464891 nova_compute[259907]: 2025-10-01 17:12:33.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.588 162546 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d747029d-7cd7-4e92-a356-867cacbb54c4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d747029d-7cd7-4e92-a356-867cacbb54c4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.589 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[36d4a25f-6e8c-44b7-8c15-0cd6d057a029]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.591 162546 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: global
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]:    log         /dev/log local0 debug
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]:    log-tag     haproxy-metadata-proxy-d747029d-7cd7-4e92-a356-867cacbb54c4
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]:    user        root
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]:    group       root
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]:    maxconn     1024
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]:    pidfile     /var/lib/neutron/external/pids/d747029d-7cd7-4e92-a356-867cacbb54c4.pid.haproxy
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]:    daemon
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: defaults
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]:    log global
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]:    mode http
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]:    option httplog
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]:    option dontlognull
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]:    option http-server-close
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]:    option forwardfor
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]:    retries                 3
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]:    timeout http-request    30s
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]:    timeout connect         30s
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]:    timeout client          32s
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]:    timeout server          32s
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]:    timeout http-keep-alive 30s
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: listen listener
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]:    bind 169.254.169.254:80
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]:    server metadata /var/lib/neutron/metadata_proxy
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]:    http-request add-header X-OVN-Network-ID d747029d-7cd7-4e92-a356-867cacbb54c4
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  1 13:12:33 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:12:33.592 162546 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4', 'env', 'PROCESS_TAG=haproxy-d747029d-7cd7-4e92-a356-867cacbb54c4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d747029d-7cd7-4e92-a356-867cacbb54c4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  1 13:12:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2244: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 85 B/s wr, 0 op/s
Oct  1 13:12:33 np0005464891 nova_compute[259907]: 2025-10-01 17:12:33.839 2 DEBUG nova.compute.manager [req-f57f0627-ad55-493b-8aa9-3033a537c8fb req-c04f1f9b-2cb8-42ac-b907-3d729b5abd41 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Received event network-vif-plugged-4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  1 13:12:33 np0005464891 nova_compute[259907]: 2025-10-01 17:12:33.841 2 DEBUG oslo_concurrency.lockutils [req-f57f0627-ad55-493b-8aa9-3033a537c8fb req-c04f1f9b-2cb8-42ac-b907-3d729b5abd41 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "ac5687c4-9ce5-46ee-a5ef-861637bb8b07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:12:33 np0005464891 nova_compute[259907]: 2025-10-01 17:12:33.841 2 DEBUG oslo_concurrency.lockutils [req-f57f0627-ad55-493b-8aa9-3033a537c8fb req-c04f1f9b-2cb8-42ac-b907-3d729b5abd41 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "ac5687c4-9ce5-46ee-a5ef-861637bb8b07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:12:33 np0005464891 nova_compute[259907]: 2025-10-01 17:12:33.842 2 DEBUG oslo_concurrency.lockutils [req-f57f0627-ad55-493b-8aa9-3033a537c8fb req-c04f1f9b-2cb8-42ac-b907-3d729b5abd41 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "ac5687c4-9ce5-46ee-a5ef-861637bb8b07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:12:33 np0005464891 nova_compute[259907]: 2025-10-01 17:12:33.842 2 DEBUG nova.compute.manager [req-f57f0627-ad55-493b-8aa9-3033a537c8fb req-c04f1f9b-2cb8-42ac-b907-3d729b5abd41 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Processing event network-vif-plugged-4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  1 13:12:33 np0005464891 podman[315647]: 2025-10-01 17:12:33.980997463 +0000 UTC m=+0.050806554 container create 20699b7087685a94a7f1cb5cc65b53404fa8d1eeee8844627fe278512ad942b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  1 13:12:34 np0005464891 systemd[1]: Started libpod-conmon-20699b7087685a94a7f1cb5cc65b53404fa8d1eeee8844627fe278512ad942b3.scope.
Oct  1 13:12:34 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:12:34 np0005464891 podman[315647]: 2025-10-01 17:12:33.952537477 +0000 UTC m=+0.022346598 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  1 13:12:34 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70569cccbb9fd0e10fb7014fd5597fe379e95e4e2058812a884b5ea154e112b5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 13:12:34 np0005464891 podman[315647]: 2025-10-01 17:12:34.076882771 +0000 UTC m=+0.146691902 container init 20699b7087685a94a7f1cb5cc65b53404fa8d1eeee8844627fe278512ad942b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 13:12:34 np0005464891 podman[315647]: 2025-10-01 17:12:34.084566713 +0000 UTC m=+0.154375824 container start 20699b7087685a94a7f1cb5cc65b53404fa8d1eeee8844627fe278512ad942b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 13:12:34 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[315662]: [NOTICE]   (315667) : New worker (315669) forked
Oct  1 13:12:34 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[315662]: [NOTICE]   (315667) : Loading success.
Oct  1 13:12:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:12:34 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:12:34 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:12:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2245: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 85 B/s wr, 0 op/s
Oct  1 13:12:35 np0005464891 nova_compute[259907]: 2025-10-01 17:12:35.938 2 DEBUG nova.compute.manager [req-bc36bd3b-2195-4e02-af8d-76d30bb6a791 req-86e566d9-eb89-4bf1-92e3-59958d1e2d1a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Received event network-vif-plugged-4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  1 13:12:35 np0005464891 nova_compute[259907]: 2025-10-01 17:12:35.939 2 DEBUG oslo_concurrency.lockutils [req-bc36bd3b-2195-4e02-af8d-76d30bb6a791 req-86e566d9-eb89-4bf1-92e3-59958d1e2d1a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "ac5687c4-9ce5-46ee-a5ef-861637bb8b07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 13:12:35 np0005464891 nova_compute[259907]: 2025-10-01 17:12:35.939 2 DEBUG oslo_concurrency.lockutils [req-bc36bd3b-2195-4e02-af8d-76d30bb6a791 req-86e566d9-eb89-4bf1-92e3-59958d1e2d1a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "ac5687c4-9ce5-46ee-a5ef-861637bb8b07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 13:12:35 np0005464891 nova_compute[259907]: 2025-10-01 17:12:35.939 2 DEBUG oslo_concurrency.lockutils [req-bc36bd3b-2195-4e02-af8d-76d30bb6a791 req-86e566d9-eb89-4bf1-92e3-59958d1e2d1a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "ac5687c4-9ce5-46ee-a5ef-861637bb8b07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 13:12:35 np0005464891 nova_compute[259907]: 2025-10-01 17:12:35.940 2 DEBUG nova.compute.manager [req-bc36bd3b-2195-4e02-af8d-76d30bb6a791 req-86e566d9-eb89-4bf1-92e3-59958d1e2d1a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] No waiting events found dispatching network-vif-plugged-4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  1 13:12:35 np0005464891 nova_compute[259907]: 2025-10-01 17:12:35.940 2 WARNING nova.compute.manager [req-bc36bd3b-2195-4e02-af8d-76d30bb6a791 req-86e566d9-eb89-4bf1-92e3-59958d1e2d1a af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Received unexpected event network-vif-plugged-4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 for instance with vm_state building and task_state spawning.
Oct  1 13:12:35 np0005464891 podman[315678]: 2025-10-01 17:12:35.985333958 +0000 UTC m=+0.091143228 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.474 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759338756.4735024, ac5687c4-9ce5-46ee-a5ef-861637bb8b07 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.475 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] VM Started (Lifecycle Event)
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.478 2 DEBUG nova.compute.manager [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.483 2 DEBUG nova.virt.libvirt.driver [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.487 2 INFO nova.virt.libvirt.driver [-] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Instance spawned successfully.
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.487 2 DEBUG nova.virt.libvirt.driver [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.501 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.514 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.522 2 DEBUG nova.virt.libvirt.driver [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.522 2 DEBUG nova.virt.libvirt.driver [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.523 2 DEBUG nova.virt.libvirt.driver [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.524 2 DEBUG nova.virt.libvirt.driver [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.525 2 DEBUG nova.virt.libvirt.driver [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.526 2 DEBUG nova.virt.libvirt.driver [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.538 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.538 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759338756.4736886, ac5687c4-9ce5-46ee-a5ef-861637bb8b07 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.539 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] VM Paused (Lifecycle Event)
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.562 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.567 2 DEBUG nova.virt.driver [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] Emitting event <LifecycleEvent: 1759338756.4823027, ac5687c4-9ce5-46ee-a5ef-861637bb8b07 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.567 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] VM Resumed (Lifecycle Event)
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.595 2 INFO nova.compute.manager [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Took 8.62 seconds to spawn the instance on the hypervisor.
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.596 2 DEBUG nova.compute.manager [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.597 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.608 2 DEBUG nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.636 2 INFO nova.compute.manager [None req-13e6e0e0-d3f4-4da7-b53e-af44dbfcb9c9 - - - - - -] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.662 2 INFO nova.compute.manager [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Took 10.62 seconds to build instance.
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.688 2 DEBUG oslo_concurrency.lockutils [None req-2a35aa08-5a04-42e0-90f7-fed0422d11b7 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "ac5687c4-9ce5-46ee-a5ef-861637bb8b07" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 13:12:36 np0005464891 nova_compute[259907]: 2025-10-01 17:12:36.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:12:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:12:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3668137217' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:12:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:12:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3668137217' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:12:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2246: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s
Oct  1 13:12:37 np0005464891 nova_compute[259907]: 2025-10-01 17:12:37.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:12:38 np0005464891 podman[315702]: 2025-10-01 17:12:38.998251507 +0000 UTC m=+0.099010275 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  1 13:12:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:12:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2247: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 13 KiB/s wr, 48 op/s
Oct  1 13:12:40 np0005464891 nova_compute[259907]: 2025-10-01 17:12:40.179 2 DEBUG nova.compute.manager [req-91d42a72-ddbb-4915-a44e-6bb85f2b0a05 req-70a2de2a-6a45-454c-af78-0c5bd62004b1 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Received event network-changed-4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  1 13:12:40 np0005464891 nova_compute[259907]: 2025-10-01 17:12:40.179 2 DEBUG nova.compute.manager [req-91d42a72-ddbb-4915-a44e-6bb85f2b0a05 req-70a2de2a-6a45-454c-af78-0c5bd62004b1 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Refreshing instance network info cache due to event network-changed-4a125766-7fd6-4fa3-ac9a-8ce62baf11b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  1 13:12:40 np0005464891 nova_compute[259907]: 2025-10-01 17:12:40.179 2 DEBUG oslo_concurrency.lockutils [req-91d42a72-ddbb-4915-a44e-6bb85f2b0a05 req-70a2de2a-6a45-454c-af78-0c5bd62004b1 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "refresh_cache-ac5687c4-9ce5-46ee-a5ef-861637bb8b07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  1 13:12:40 np0005464891 nova_compute[259907]: 2025-10-01 17:12:40.180 2 DEBUG oslo_concurrency.lockutils [req-91d42a72-ddbb-4915-a44e-6bb85f2b0a05 req-70a2de2a-6a45-454c-af78-0c5bd62004b1 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquired lock "refresh_cache-ac5687c4-9ce5-46ee-a5ef-861637bb8b07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  1 13:12:40 np0005464891 nova_compute[259907]: 2025-10-01 17:12:40.180 2 DEBUG nova.network.neutron [req-91d42a72-ddbb-4915-a44e-6bb85f2b0a05 req-70a2de2a-6a45-454c-af78-0c5bd62004b1 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Refreshing network info cache for port 4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  1 13:12:40 np0005464891 nova_compute[259907]: 2025-10-01 17:12:40.803 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 13:12:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2248: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct  1 13:12:41 np0005464891 nova_compute[259907]: 2025-10-01 17:12:41.815 2 DEBUG nova.network.neutron [req-91d42a72-ddbb-4915-a44e-6bb85f2b0a05 req-70a2de2a-6a45-454c-af78-0c5bd62004b1 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Updated VIF entry in instance network info cache for port 4a125766-7fd6-4fa3-ac9a-8ce62baf11b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  1 13:12:41 np0005464891 nova_compute[259907]: 2025-10-01 17:12:41.815 2 DEBUG nova.network.neutron [req-91d42a72-ddbb-4915-a44e-6bb85f2b0a05 req-70a2de2a-6a45-454c-af78-0c5bd62004b1 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Updating instance_info_cache with network_info: [{"id": "4a125766-7fd6-4fa3-ac9a-8ce62baf11b4", "address": "fa:16:3e:60:6c:13", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a125766-7f", "ovs_interfaceid": "4a125766-7fd6-4fa3-ac9a-8ce62baf11b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  1 13:12:41 np0005464891 nova_compute[259907]: 2025-10-01 17:12:41.834 2 DEBUG oslo_concurrency.lockutils [req-91d42a72-ddbb-4915-a44e-6bb85f2b0a05 req-70a2de2a-6a45-454c-af78-0c5bd62004b1 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Releasing lock "refresh_cache-ac5687c4-9ce5-46ee-a5ef-861637bb8b07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  1 13:12:41 np0005464891 nova_compute[259907]: 2025-10-01 17:12:41.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:12:41 np0005464891 podman[315728]: 2025-10-01 17:12:41.966540264 +0000 UTC m=+0.076469133 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 13:12:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:12:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:12:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:12:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:12:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:12:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:12:42 np0005464891 nova_compute[259907]: 2025-10-01 17:12:42.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:12:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2249: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct  1 13:12:43 np0005464891 podman[315748]: 2025-10-01 17:12:43.974358605 +0000 UTC m=+0.090926212 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 13:12:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:12:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2250: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  1 13:12:46 np0005464891 nova_compute[259907]: 2025-10-01 17:12:46.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:12:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2251: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  1 13:12:47 np0005464891 nova_compute[259907]: 2025-10-01 17:12:47.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 13:12:47 np0005464891 nova_compute[259907]: 2025-10-01 17:12:47.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:12:47 np0005464891 nova_compute[259907]: 2025-10-01 17:12:47.847 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 13:12:47 np0005464891 nova_compute[259907]: 2025-10-01 17:12:47.848 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 13:12:47 np0005464891 nova_compute[259907]: 2025-10-01 17:12:47.848 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 13:12:47 np0005464891 nova_compute[259907]: 2025-10-01 17:12:47.848 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  1 13:12:47 np0005464891 nova_compute[259907]: 2025-10-01 17:12:47.849 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 13:12:48 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:12:48 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/361934392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:12:48 np0005464891 nova_compute[259907]: 2025-10-01 17:12:48.380 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:12:48 np0005464891 nova_compute[259907]: 2025-10-01 17:12:48.506 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 13:12:48 np0005464891 nova_compute[259907]: 2025-10-01 17:12:48.507 2 DEBUG nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  1 13:12:48 np0005464891 nova_compute[259907]: 2025-10-01 17:12:48.687 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 13:12:48 np0005464891 nova_compute[259907]: 2025-10-01 17:12:48.688 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4159MB free_disk=59.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 13:12:48 np0005464891 nova_compute[259907]: 2025-10-01 17:12:48.688 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:12:48 np0005464891 nova_compute[259907]: 2025-10-01 17:12:48.688 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:12:48 np0005464891 nova_compute[259907]: 2025-10-01 17:12:48.910 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Instance ac5687c4-9ce5-46ee-a5ef-861637bb8b07 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  1 13:12:48 np0005464891 nova_compute[259907]: 2025-10-01 17:12:48.911 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 13:12:48 np0005464891 nova_compute[259907]: 2025-10-01 17:12:48.911 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 13:12:48 np0005464891 nova_compute[259907]: 2025-10-01 17:12:48.925 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Refreshing inventories for resource provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  1 13:12:48 np0005464891 nova_compute[259907]: 2025-10-01 17:12:48.945 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Updating ProviderTree inventory for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  1 13:12:48 np0005464891 nova_compute[259907]: 2025-10-01 17:12:48.947 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Updating inventory in ProviderTree for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  1 13:12:48 np0005464891 nova_compute[259907]: 2025-10-01 17:12:48.963 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Refreshing aggregate associations for resource provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  1 13:12:48 np0005464891 nova_compute[259907]: 2025-10-01 17:12:48.991 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Refreshing trait associations for resource provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8, traits: HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSSE3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_AVX2,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  1 13:12:49 np0005464891 nova_compute[259907]: 2025-10-01 17:12:49.035 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:12:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:12:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:12:49 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3680752700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:12:49 np0005464891 nova_compute[259907]: 2025-10-01 17:12:49.479 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:12:49 np0005464891 nova_compute[259907]: 2025-10-01 17:12:49.486 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:12:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2252: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 20 KiB/s wr, 88 op/s
Oct  1 13:12:49 np0005464891 nova_compute[259907]: 2025-10-01 17:12:49.736 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:12:49 np0005464891 ovn_controller[152409]: 2025-10-01T17:12:49Z|00074|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.11 does not match offer 10.100.0.8
Oct  1 13:12:49 np0005464891 ovn_controller[152409]: 2025-10-01T17:12:49Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:60:6c:13 10.100.0.8
Oct  1 13:12:49 np0005464891 nova_compute[259907]: 2025-10-01 17:12:49.943 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 13:12:49 np0005464891 nova_compute[259907]: 2025-10-01 17:12:49.944 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:12:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2253: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 7.4 KiB/s wr, 46 op/s
Oct  1 13:12:51 np0005464891 nova_compute[259907]: 2025-10-01 17:12:51.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:51 np0005464891 nova_compute[259907]: 2025-10-01 17:12:51.944 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:12:51 np0005464891 nova_compute[259907]: 2025-10-01 17:12:51.945 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:12:51 np0005464891 nova_compute[259907]: 2025-10-01 17:12:51.945 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 13:12:51 np0005464891 nova_compute[259907]: 2025-10-01 17:12:51.945 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 13:12:52 np0005464891 nova_compute[259907]: 2025-10-01 17:12:52.710 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "refresh_cache-ac5687c4-9ce5-46ee-a5ef-861637bb8b07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 13:12:52 np0005464891 nova_compute[259907]: 2025-10-01 17:12:52.711 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquired lock "refresh_cache-ac5687c4-9ce5-46ee-a5ef-861637bb8b07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 13:12:52 np0005464891 nova_compute[259907]: 2025-10-01 17:12:52.711 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  1 13:12:52 np0005464891 nova_compute[259907]: 2025-10-01 17:12:52.712 2 DEBUG nova.objects.instance [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ac5687c4-9ce5-46ee-a5ef-861637bb8b07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 13:12:52 np0005464891 nova_compute[259907]: 2025-10-01 17:12:52.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:53 np0005464891 ovn_controller[152409]: 2025-10-01T17:12:53Z|00076|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.11 does not match offer 10.100.0.8
Oct  1 13:12:53 np0005464891 ovn_controller[152409]: 2025-10-01T17:12:53Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:60:6c:13 10.100.0.8
Oct  1 13:12:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2254: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 7.4 KiB/s wr, 43 op/s
Oct  1 13:12:54 np0005464891 nova_compute[259907]: 2025-10-01 17:12:54.247 2 DEBUG nova.network.neutron [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Updating instance_info_cache with network_info: [{"id": "4a125766-7fd6-4fa3-ac9a-8ce62baf11b4", "address": "fa:16:3e:60:6c:13", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a125766-7f", "ovs_interfaceid": "4a125766-7fd6-4fa3-ac9a-8ce62baf11b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  1 13:12:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:12:54 np0005464891 nova_compute[259907]: 2025-10-01 17:12:54.301 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Releasing lock "refresh_cache-ac5687c4-9ce5-46ee-a5ef-861637bb8b07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 13:12:54 np0005464891 nova_compute[259907]: 2025-10-01 17:12:54.301 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  1 13:12:54 np0005464891 nova_compute[259907]: 2025-10-01 17:12:54.302 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:12:54 np0005464891 nova_compute[259907]: 2025-10-01 17:12:54.302 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:12:54 np0005464891 nova_compute[259907]: 2025-10-01 17:12:54.303 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:12:54 np0005464891 nova_compute[259907]: 2025-10-01 17:12:54.303 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:12:54 np0005464891 nova_compute[259907]: 2025-10-01 17:12:54.303 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 13:12:54 np0005464891 ovn_controller[152409]: 2025-10-01T17:12:54Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:60:6c:13 10.100.0.8
Oct  1 13:12:54 np0005464891 ovn_controller[152409]: 2025-10-01T17:12:54Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:60:6c:13 10.100.0.8
Oct  1 13:12:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2255: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 7.4 KiB/s wr, 43 op/s
Oct  1 13:12:55 np0005464891 nova_compute[259907]: 2025-10-01 17:12:55.806 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:12:56 np0005464891 nova_compute[259907]: 2025-10-01 17:12:56.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2256: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 11 KiB/s wr, 44 op/s
Oct  1 13:12:57 np0005464891 nova_compute[259907]: 2025-10-01 17:12:57.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:12:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:12:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2257: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 21 KiB/s wr, 44 op/s
Oct  1 13:13:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2258: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 397 KiB/s rd, 14 KiB/s wr, 29 op/s
Oct  1 13:13:01 np0005464891 nova_compute[259907]: 2025-10-01 17:13:01.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:13:02 np0005464891 nova_compute[259907]: 2025-10-01 17:13:02.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:13:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2259: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 263 KiB/s rd, 17 KiB/s wr, 23 op/s
Oct  1 13:13:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:13:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2260: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s wr, 0 op/s
Oct  1 13:13:06 np0005464891 nova_compute[259907]: 2025-10-01 17:13:06.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:13:06 np0005464891 podman[315813]: 2025-10-01 17:13:06.951844903 +0000 UTC m=+0.067585208 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 13:13:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2261: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s wr, 1 op/s
Oct  1 13:13:07 np0005464891 nova_compute[259907]: 2025-10-01 17:13:07.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:13:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:13:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2262: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s wr, 0 op/s
Oct  1 13:13:09 np0005464891 ovn_controller[152409]: 2025-10-01T17:13:09Z|00302|memory_trim|INFO|Detected inactivity (last active 30018 ms ago): trimming memory
Oct  1 13:13:09 np0005464891 podman[315832]: 2025-10-01 17:13:09.978486462 +0000 UTC m=+0.090098080 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  1 13:13:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2263: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s wr, 0 op/s
Oct  1 13:13:11 np0005464891 nova_compute[259907]: 2025-10-01 17:13:11.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:13:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:13:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:13:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:13:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:13:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:13:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:13:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_17:13:12
Oct  1 13:13:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 13:13:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 13:13:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['backups', 'vms', 'images', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'volumes']
Oct  1 13:13:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.277 2 DEBUG oslo_concurrency.lockutils [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "ac5687c4-9ce5-46ee-a5ef-861637bb8b07" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.277 2 DEBUG oslo_concurrency.lockutils [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "ac5687c4-9ce5-46ee-a5ef-861637bb8b07" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.278 2 DEBUG oslo_concurrency.lockutils [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "ac5687c4-9ce5-46ee-a5ef-861637bb8b07-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.278 2 DEBUG oslo_concurrency.lockutils [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "ac5687c4-9ce5-46ee-a5ef-861637bb8b07-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.279 2 DEBUG oslo_concurrency.lockutils [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "ac5687c4-9ce5-46ee-a5ef-861637bb8b07-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.280 2 INFO nova.compute.manager [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Terminating instance#033[00m
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.281 2 DEBUG nova.compute.manager [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  1 13:13:12 np0005464891 kernel: tap4a125766-7f (unregistering): left promiscuous mode
Oct  1 13:13:12 np0005464891 NetworkManager[44940]: <info>  [1759338792.3270] device (tap4a125766-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:13:12 np0005464891 ovn_controller[152409]: 2025-10-01T17:13:12Z|00303|binding|INFO|Releasing lport 4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 from this chassis (sb_readonly=0)
Oct  1 13:13:12 np0005464891 ovn_controller[152409]: 2025-10-01T17:13:12Z|00304|binding|INFO|Setting lport 4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 down in Southbound
Oct  1 13:13:12 np0005464891 ovn_controller[152409]: 2025-10-01T17:13:12Z|00305|binding|INFO|Removing iface tap4a125766-7f ovn-installed in OVS
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:13:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:13:12.344 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:6c:13 10.100.0.8'], port_security=['fa:16:3e:60:6c:13 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ac5687c4-9ce5-46ee-a5ef-861637bb8b07', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d747029d-7cd7-4e92-a356-867cacbb54c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7101f2ff48f540a08f6ec15b324152c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd9e37ca0-9284-404d-8dbf-7a2a022ea664', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.182'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ce41f0a-b119-4c05-a337-15f2fa115c91, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>], logical_port=4a125766-7fd6-4fa3-ac9a-8ce62baf11b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc11d62db50>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 13:13:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:13:12.346 162546 INFO neutron.agent.ovn.metadata.agent [-] Port 4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 in datapath d747029d-7cd7-4e92-a356-867cacbb54c4 unbound from our chassis#033[00m
Oct  1 13:13:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:13:12.347 162546 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d747029d-7cd7-4e92-a356-867cacbb54c4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 13:13:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:13:12.348 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[d5cabb1d-5c4a-436a-8bed-6710de3c2523]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:13:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:13:12.349 162546 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4 namespace which is not needed anymore#033[00m
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:13:12 np0005464891 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Oct  1 13:13:12 np0005464891 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001e.scope: Consumed 16.353s CPU time.
Oct  1 13:13:12 np0005464891 systemd-machined[214891]: Machine qemu-30-instance-0000001e terminated.
Oct  1 13:13:12 np0005464891 podman[315859]: 2025-10-01 17:13:12.437152334 +0000 UTC m=+0.085911864 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  1 13:13:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:13:12.473 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:13:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:13:12.474 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:13:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:13:12.474 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:13:12 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[315662]: [NOTICE]   (315667) : haproxy version is 2.8.14-c23fe91
Oct  1 13:13:12 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[315662]: [NOTICE]   (315667) : path to executable is /usr/sbin/haproxy
Oct  1 13:13:12 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[315662]: [ALERT]    (315667) : Current worker (315669) exited with code 143 (Terminated)
Oct  1 13:13:12 np0005464891 neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4[315662]: [WARNING]  (315667) : All workers exited. Exiting... (0)
Oct  1 13:13:12 np0005464891 systemd[1]: libpod-20699b7087685a94a7f1cb5cc65b53404fa8d1eeee8844627fe278512ad942b3.scope: Deactivated successfully.
Oct  1 13:13:12 np0005464891 podman[315900]: 2025-10-01 17:13:12.486898007 +0000 UTC m=+0.046843524 container died 20699b7087685a94a7f1cb5cc65b53404fa8d1eeee8844627fe278512ad942b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.516 2 INFO nova.virt.libvirt.driver [-] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Instance destroyed successfully.#033[00m
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.517 2 DEBUG nova.objects.instance [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lazy-loading 'resources' on Instance uuid ac5687c4-9ce5-46ee-a5ef-861637bb8b07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  1 13:13:12 np0005464891 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-20699b7087685a94a7f1cb5cc65b53404fa8d1eeee8844627fe278512ad942b3-userdata-shm.mount: Deactivated successfully.
Oct  1 13:13:12 np0005464891 systemd[1]: var-lib-containers-storage-overlay-70569cccbb9fd0e10fb7014fd5597fe379e95e4e2058812a884b5ea154e112b5-merged.mount: Deactivated successfully.
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.536 2 DEBUG nova.virt.libvirt.vif [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-01T17:12:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-2072355531',display_name='tempest-TransferEncryptedVolumeTest-server-2072355531',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-2072355531',id=30,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCggI6gfQD/mHYmOm6rDTYT2bX0sAibZLzDEC2B5xpj9ltJuTla2hy5xtYjkh93bJjJwE1iJj8Z6crMN4OBz57+5pkjfi89vm+UxnL1pqlzGufhUcmighPnzcAkPF9ezXQ==',key_name='tempest-TransferEncryptedVolumeTest-1396714412',keypairs=<?>,launch_index=0,launched_at=2025-10-01T17:12:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7101f2ff48f540a08f6ec15b324152c6',ramdisk_id='',reservation_id='r-6vsbkk4p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1550217158',owner_user_name='tempest-TransferEncryptedVolumeTest-1550217158-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-01T17:12:36Z,user_data=None,user_id='c440275c1a1e4cf09fcf789374345bb2',uuid=ac5687c4-9ce5-46ee-a5ef-861637bb8b07,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a125766-7fd6-4fa3-ac9a-8ce62baf11b4", "address": "fa:16:3e:60:6c:13", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a125766-7f", "ovs_interfaceid": "4a125766-7fd6-4fa3-ac9a-8ce62baf11b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.537 2 DEBUG nova.network.os_vif_util [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Converting VIF {"id": "4a125766-7fd6-4fa3-ac9a-8ce62baf11b4", "address": "fa:16:3e:60:6c:13", "network": {"id": "d747029d-7cd7-4e92-a356-867cacbb54c4", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-522121511-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7101f2ff48f540a08f6ec15b324152c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a125766-7f", "ovs_interfaceid": "4a125766-7fd6-4fa3-ac9a-8ce62baf11b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.538 2 DEBUG nova.network.os_vif_util [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:60:6c:13,bridge_name='br-int',has_traffic_filtering=True,id=4a125766-7fd6-4fa3-ac9a-8ce62baf11b4,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a125766-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.538 2 DEBUG os_vif [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:6c:13,bridge_name='br-int',has_traffic_filtering=True,id=4a125766-7fd6-4fa3-ac9a-8ce62baf11b4,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a125766-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.540 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a125766-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.545 2 INFO os_vif [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:6c:13,bridge_name='br-int',has_traffic_filtering=True,id=4a125766-7fd6-4fa3-ac9a-8ce62baf11b4,network=Network(d747029d-7cd7-4e92-a356-867cacbb54c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a125766-7f')#033[00m
Oct  1 13:13:12 np0005464891 podman[315900]: 2025-10-01 17:13:12.561310933 +0000 UTC m=+0.121256450 container cleanup 20699b7087685a94a7f1cb5cc65b53404fa8d1eeee8844627fe278512ad942b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 13:13:12 np0005464891 systemd[1]: libpod-conmon-20699b7087685a94a7f1cb5cc65b53404fa8d1eeee8844627fe278512ad942b3.scope: Deactivated successfully.
Oct  1 13:13:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 13:13:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:13:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 13:13:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:13:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:13:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:13:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:13:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:13:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:13:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:13:12 np0005464891 podman[315950]: 2025-10-01 17:13:12.626936445 +0000 UTC m=+0.043292057 container remove 20699b7087685a94a7f1cb5cc65b53404fa8d1eeee8844627fe278512ad942b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct  1 13:13:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:13:12.632 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[f048d613-6650-48cf-8351-81ed84a7798c]: (4, ('Wed Oct  1 05:13:12 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4 (20699b7087685a94a7f1cb5cc65b53404fa8d1eeee8844627fe278512ad942b3)\n20699b7087685a94a7f1cb5cc65b53404fa8d1eeee8844627fe278512ad942b3\nWed Oct  1 05:13:12 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4 (20699b7087685a94a7f1cb5cc65b53404fa8d1eeee8844627fe278512ad942b3)\n20699b7087685a94a7f1cb5cc65b53404fa8d1eeee8844627fe278512ad942b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:13:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:13:12.634 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[b406e5b1-5787-4427-91cb-635135dbaa32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:13:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:13:12.635 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd747029d-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 13:13:12 np0005464891 kernel: tapd747029d-70: left promiscuous mode
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:13:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:13:12.652 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[a36c6be1-c461-4469-8f45-2118f93e169e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:13:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:13:12.676 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[c28dc6d8-1ace-4245-b251-1fe87bd9ea49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:13:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:13:12.678 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[73d1587e-f217-4139-81fd-da55af2c0472]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:13:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:13:12.693 267902 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea730c3-72fc-472d-911b-9a366cf81780]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570191, 'reachable_time': 32076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315970, 'error': None, 'target': 'ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:13:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:13:12.696 162906 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d747029d-7cd7-4e92-a356-867cacbb54c4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  1 13:13:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:13:12.696 162906 DEBUG oslo.privsep.daemon [-] privsep: reply[dc96085a-ae86-4a53-a874-336200c37dce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 13:13:12 np0005464891 systemd[1]: run-netns-ovnmeta\x2dd747029d\x2d7cd7\x2d4e92\x2da356\x2d867cacbb54c4.mount: Deactivated successfully.
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.719 2 INFO nova.virt.libvirt.driver [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Deleting instance files /var/lib/nova/instances/ac5687c4-9ce5-46ee-a5ef-861637bb8b07_del
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.719 2 INFO nova.virt.libvirt.driver [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Deletion of /var/lib/nova/instances/ac5687c4-9ce5-46ee-a5ef-861637bb8b07_del complete
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.767 2 INFO nova.compute.manager [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Took 0.48 seconds to destroy the instance on the hypervisor.
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.768 2 DEBUG oslo.service.loopingcall [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.768 2 DEBUG nova.compute.manager [-] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.768 2 DEBUG nova.network.neutron [-] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.948 2 DEBUG nova.compute.manager [req-e0400c62-999a-4a10-8611-49521a41fdb3 req-9e0782e5-9342-4767-a4e0-c4d935b6b938 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Received event network-vif-unplugged-4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.949 2 DEBUG oslo_concurrency.lockutils [req-e0400c62-999a-4a10-8611-49521a41fdb3 req-9e0782e5-9342-4767-a4e0-c4d935b6b938 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "ac5687c4-9ce5-46ee-a5ef-861637bb8b07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.949 2 DEBUG oslo_concurrency.lockutils [req-e0400c62-999a-4a10-8611-49521a41fdb3 req-9e0782e5-9342-4767-a4e0-c4d935b6b938 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "ac5687c4-9ce5-46ee-a5ef-861637bb8b07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.949 2 DEBUG oslo_concurrency.lockutils [req-e0400c62-999a-4a10-8611-49521a41fdb3 req-9e0782e5-9342-4767-a4e0-c4d935b6b938 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "ac5687c4-9ce5-46ee-a5ef-861637bb8b07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.950 2 DEBUG nova.compute.manager [req-e0400c62-999a-4a10-8611-49521a41fdb3 req-9e0782e5-9342-4767-a4e0-c4d935b6b938 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] No waiting events found dispatching network-vif-unplugged-4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  1 13:13:12 np0005464891 nova_compute[259907]: 2025-10-01 17:13:12.950 2 DEBUG nova.compute.manager [req-e0400c62-999a-4a10-8611-49521a41fdb3 req-9e0782e5-9342-4767-a4e0-c4d935b6b938 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Received event network-vif-unplugged-4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  1 13:13:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:13:13.180 162546 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:94:cb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:f1:19:63:8d:7a'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  1 13:13:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:13:13.181 162546 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  1 13:13:13 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:13:13.182 162546 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7f6af0d3-69fd-4a3a-8e45-081fa1f83992, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  1 13:13:13 np0005464891 nova_compute[259907]: 2025-10-01 17:13:13.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:13:13 np0005464891 nova_compute[259907]: 2025-10-01 17:13:13.555 2 DEBUG nova.network.neutron [-] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  1 13:13:13 np0005464891 nova_compute[259907]: 2025-10-01 17:13:13.572 2 INFO nova.compute.manager [-] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Took 0.80 seconds to deallocate network for instance.
Oct  1 13:13:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2264: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 7.2 KiB/s wr, 15 op/s
Oct  1 13:13:13 np0005464891 nova_compute[259907]: 2025-10-01 17:13:13.735 2 INFO nova.compute.manager [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Took 0.16 seconds to detach 1 volumes for instance.
Oct  1 13:13:13 np0005464891 nova_compute[259907]: 2025-10-01 17:13:13.778 2 DEBUG oslo_concurrency.lockutils [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 13:13:13 np0005464891 nova_compute[259907]: 2025-10-01 17:13:13.778 2 DEBUG oslo_concurrency.lockutils [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 13:13:13 np0005464891 nova_compute[259907]: 2025-10-01 17:13:13.856 2 DEBUG oslo_concurrency.processutils [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 13:13:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:13:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:13:14 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1367082857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:13:14 np0005464891 nova_compute[259907]: 2025-10-01 17:13:14.331 2 DEBUG oslo_concurrency.processutils [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 13:13:14 np0005464891 nova_compute[259907]: 2025-10-01 17:13:14.340 2 DEBUG nova.compute.provider_tree [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  1 13:13:14 np0005464891 nova_compute[259907]: 2025-10-01 17:13:14.360 2 DEBUG nova.scheduler.client.report [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  1 13:13:14 np0005464891 nova_compute[259907]: 2025-10-01 17:13:14.384 2 DEBUG oslo_concurrency.lockutils [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 13:13:14 np0005464891 nova_compute[259907]: 2025-10-01 17:13:14.405 2 INFO nova.scheduler.client.report [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Deleted allocations for instance ac5687c4-9ce5-46ee-a5ef-861637bb8b07
Oct  1 13:13:14 np0005464891 nova_compute[259907]: 2025-10-01 17:13:14.462 2 DEBUG oslo_concurrency.lockutils [None req-697e0982-8317-45be-b1ce-ea12060886e0 c440275c1a1e4cf09fcf789374345bb2 7101f2ff48f540a08f6ec15b324152c6 - - default default] Lock "ac5687c4-9ce5-46ee-a5ef-861637bb8b07" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 13:13:14 np0005464891 podman[315994]: 2025-10-01 17:13:14.957800787 +0000 UTC m=+0.071612118 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  1 13:13:15 np0005464891 nova_compute[259907]: 2025-10-01 17:13:15.031 2 DEBUG nova.compute.manager [req-c8868cde-d274-4c72-bf82-2c6990ec1cc2 req-15852e0a-7620-4555-95c0-506d4300ab80 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Received event network-vif-plugged-4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  1 13:13:15 np0005464891 nova_compute[259907]: 2025-10-01 17:13:15.032 2 DEBUG oslo_concurrency.lockutils [req-c8868cde-d274-4c72-bf82-2c6990ec1cc2 req-15852e0a-7620-4555-95c0-506d4300ab80 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Acquiring lock "ac5687c4-9ce5-46ee-a5ef-861637bb8b07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 13:13:15 np0005464891 nova_compute[259907]: 2025-10-01 17:13:15.032 2 DEBUG oslo_concurrency.lockutils [req-c8868cde-d274-4c72-bf82-2c6990ec1cc2 req-15852e0a-7620-4555-95c0-506d4300ab80 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "ac5687c4-9ce5-46ee-a5ef-861637bb8b07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 13:13:15 np0005464891 nova_compute[259907]: 2025-10-01 17:13:15.033 2 DEBUG oslo_concurrency.lockutils [req-c8868cde-d274-4c72-bf82-2c6990ec1cc2 req-15852e0a-7620-4555-95c0-506d4300ab80 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] Lock "ac5687c4-9ce5-46ee-a5ef-861637bb8b07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 13:13:15 np0005464891 nova_compute[259907]: 2025-10-01 17:13:15.033 2 DEBUG nova.compute.manager [req-c8868cde-d274-4c72-bf82-2c6990ec1cc2 req-15852e0a-7620-4555-95c0-506d4300ab80 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] No waiting events found dispatching network-vif-plugged-4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  1 13:13:15 np0005464891 nova_compute[259907]: 2025-10-01 17:13:15.033 2 WARNING nova.compute.manager [req-c8868cde-d274-4c72-bf82-2c6990ec1cc2 req-15852e0a-7620-4555-95c0-506d4300ab80 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Received unexpected event network-vif-plugged-4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 for instance with vm_state deleted and task_state None.
Oct  1 13:13:15 np0005464891 nova_compute[259907]: 2025-10-01 17:13:15.034 2 DEBUG nova.compute.manager [req-c8868cde-d274-4c72-bf82-2c6990ec1cc2 req-15852e0a-7620-4555-95c0-506d4300ab80 af5a467eee0a44e898aa35e45cd32da2 887664ec11194266978ceeac8bd7b17e - - default default] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Received event network-vif-deleted-4a125766-7fd6-4fa3-ac9a-8ce62baf11b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  1 13:13:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2265: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 3.9 KiB/s wr, 15 op/s
Oct  1 13:13:17 np0005464891 nova_compute[259907]: 2025-10-01 17:13:17.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:13:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:13:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4049317576' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:13:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:13:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4049317576' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:13:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2266: 321 pgs: 321 active+clean; 453 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 3.9 KiB/s wr, 18 op/s
Oct  1 13:13:17 np0005464891 nova_compute[259907]: 2025-10-01 17:13:17.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:19.270000) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338799270038, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 1479, "num_deletes": 258, "total_data_size": 2288921, "memory_usage": 2318704, "flush_reason": "Manual Compaction"}
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338799280076, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 2244791, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43826, "largest_seqno": 45304, "table_properties": {"data_size": 2237892, "index_size": 3970, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14234, "raw_average_key_size": 19, "raw_value_size": 2224110, "raw_average_value_size": 3063, "num_data_blocks": 178, "num_entries": 726, "num_filter_entries": 726, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759338649, "oldest_key_time": 1759338649, "file_creation_time": 1759338799, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 10116 microseconds, and 5078 cpu microseconds.
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:19.280114) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 2244791 bytes OK
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:19.280132) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:19.281166) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:19.281178) EVENT_LOG_v1 {"time_micros": 1759338799281174, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:19.281192) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 2282390, prev total WAL file size 2282390, number of live WAL files 2.
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:19.282021) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353036' seq:72057594037927935, type:22 .. '6C6F676D0031373630' seq:0, type:0; will stop at (end)
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(2192KB)], [92(11MB)]
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338799282112, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 14242393, "oldest_snapshot_seqno": -1}
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 7688 keys, 14091463 bytes, temperature: kUnknown
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338799397673, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 14091463, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14031458, "index_size": 39669, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19269, "raw_key_size": 194797, "raw_average_key_size": 25, "raw_value_size": 13885237, "raw_average_value_size": 1806, "num_data_blocks": 1581, "num_entries": 7688, "num_filter_entries": 7688, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759338799, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:19.397979) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 14091463 bytes
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:19.399444) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.2 rd, 121.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 11.4 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(12.6) write-amplify(6.3) OK, records in: 8216, records dropped: 528 output_compression: NoCompression
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:19.399505) EVENT_LOG_v1 {"time_micros": 1759338799399496, "job": 54, "event": "compaction_finished", "compaction_time_micros": 115650, "compaction_time_cpu_micros": 59037, "output_level": 6, "num_output_files": 1, "total_output_size": 14091463, "num_input_records": 8216, "num_output_records": 7688, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338799400121, "job": 54, "event": "table_file_deletion", "file_number": 94}
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338799402876, "job": 54, "event": "table_file_deletion", "file_number": 92}
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:19.281839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:19.402941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:19.402945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:19.402947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:19.402948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:13:19 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:19.402950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:13:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2267: 321 pgs: 321 active+clean; 332 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 230 KiB/s rd, 3.9 KiB/s wr, 35 op/s
Oct  1 13:13:21 np0005464891 nova_compute[259907]: 2025-10-01 17:13:21.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:13:21 np0005464891 nova_compute[259907]: 2025-10-01 17:13:21.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:13:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2268: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Oct  1 13:13:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 13:13:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:13:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 13:13:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:13:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 13:13:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:13:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894458247867422 of space, bias 1.0, pg target 0.8683374743602266 quantized to 32 (current 32)
Oct  1 13:13:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:13:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 13:13:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:13:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct  1 13:13:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:13:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 13:13:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:13:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:13:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:13:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 13:13:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:13:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 13:13:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:13:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:13:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:13:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 13:13:22 np0005464891 nova_compute[259907]: 2025-10-01 17:13:22.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:13:22 np0005464891 nova_compute[259907]: 2025-10-01 17:13:22.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:13:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2269: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Oct  1 13:13:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:13:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2270: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 112 KiB/s rd, 596 B/s wr, 22 op/s
Oct  1 13:13:25 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 13:13:25 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3600.0 total, 600.0 interval
Cumulative writes: 9876 writes, 45K keys, 9876 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
Cumulative WAL: 9876 writes, 9876 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1465 writes, 6910 keys, 1465 commit groups, 1.0 writes per commit group, ingest: 9.26 MB, 0.02 MB/s
Interval WAL: 1466 writes, 1466 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     23.0      2.35              0.19        27    0.087       0      0       0.0       0.0
  L6      1/0   13.44 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.2     45.7     38.8      5.87              0.91        26    0.226    153K    14K       0.0       0.0
 Sum      1/0   13.44 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.2     32.6     34.3      8.22              1.09        53    0.155    153K    14K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.7     24.9     26.1      2.97              0.31        12    0.248     46K   3131       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     45.7     38.8      5.87              0.91        26    0.226    153K    14K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     23.0      2.35              0.19        26    0.090       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.6      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 3600.0 total, 600.0 interval
Flush(GB): cumulative 0.053, interval 0.011
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.28 GB write, 0.08 MB/s write, 0.26 GB read, 0.07 MB/s read, 8.2 seconds
Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.07 GB read, 0.12 MB/s read, 3.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55bddc5951f0#2 capacity: 304.00 MB usage: 29.48 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.011453 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2002,28.24 MB,9.28909%) FilterBlock(54,443.55 KB,0.142484%) IndexBlock(54,824.20 KB,0.264765%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Oct  1 13:13:27 np0005464891 nova_compute[259907]: 2025-10-01 17:13:27.514 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759338792.5116167, ac5687c4-9ce5-46ee-a5ef-861637bb8b07 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  1 13:13:27 np0005464891 nova_compute[259907]: 2025-10-01 17:13:27.514 2 INFO nova.compute.manager [-] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] VM Stopped (Lifecycle Event)
Oct  1 13:13:27 np0005464891 nova_compute[259907]: 2025-10-01 17:13:27.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:13:27 np0005464891 nova_compute[259907]: 2025-10-01 17:13:27.657 2 DEBUG nova.compute.manager [None req-b69d1e0a-aa45-4555-9a53-fb00b2cf2a47 - - - - - -] [instance: ac5687c4-9ce5-46ee-a5ef-861637bb8b07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  1 13:13:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2271: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 112 KiB/s rd, 596 B/s wr, 22 op/s
Oct  1 13:13:27 np0005464891 nova_compute[259907]: 2025-10-01 17:13:27.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:13:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:13:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2272: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 596 B/s wr, 18 op/s
Oct  1 13:13:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2273: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 596 B/s rd, 255 B/s wr, 2 op/s
Oct  1 13:13:32 np0005464891 nova_compute[259907]: 2025-10-01 17:13:32.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:13:32 np0005464891 nova_compute[259907]: 2025-10-01 17:13:32.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:13:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2274: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:13:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:13:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:13:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:13:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 13:13:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:13:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 13:13:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:13:34 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 737c45f8-6713-4364-9beb-497085ffe18c does not exist
Oct  1 13:13:34 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 140b7cda-49b7-49eb-96bc-edd5568ba8f2 does not exist
Oct  1 13:13:34 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev d42a05ec-b530-479e-8e87-c8ef09d87ff8 does not exist
Oct  1 13:13:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 13:13:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 13:13:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 13:13:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:13:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:13:34 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:13:35 np0005464891 podman[316289]: 2025-10-01 17:13:35.141577532 +0000 UTC m=+0.032758165 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:13:35 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:13:35 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:13:35 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:13:35 np0005464891 podman[316289]: 2025-10-01 17:13:35.297792017 +0000 UTC m=+0.188972580 container create 4931362a8d470ab8412ca4dcc38d346ca5e8c2ef7c3a32c3c25cf3deda908ddd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  1 13:13:35 np0005464891 systemd[1]: Started libpod-conmon-4931362a8d470ab8412ca4dcc38d346ca5e8c2ef7c3a32c3c25cf3deda908ddd.scope.
Oct  1 13:13:35 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:13:35 np0005464891 podman[316289]: 2025-10-01 17:13:35.621409505 +0000 UTC m=+0.512590128 container init 4931362a8d470ab8412ca4dcc38d346ca5e8c2ef7c3a32c3c25cf3deda908ddd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  1 13:13:35 np0005464891 podman[316289]: 2025-10-01 17:13:35.629995142 +0000 UTC m=+0.521175685 container start 4931362a8d470ab8412ca4dcc38d346ca5e8c2ef7c3a32c3c25cf3deda908ddd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 13:13:35 np0005464891 inspiring_vaughan[316305]: 167 167
Oct  1 13:13:35 np0005464891 systemd[1]: libpod-4931362a8d470ab8412ca4dcc38d346ca5e8c2ef7c3a32c3c25cf3deda908ddd.scope: Deactivated successfully.
Oct  1 13:13:35 np0005464891 podman[316289]: 2025-10-01 17:13:35.670066789 +0000 UTC m=+0.561247352 container attach 4931362a8d470ab8412ca4dcc38d346ca5e8c2ef7c3a32c3c25cf3deda908ddd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 13:13:35 np0005464891 podman[316289]: 2025-10-01 17:13:35.670980173 +0000 UTC m=+0.562160726 container died 4931362a8d470ab8412ca4dcc38d346ca5e8c2ef7c3a32c3c25cf3deda908ddd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_vaughan, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:13:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2275: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:13:35 np0005464891 systemd[1]: var-lib-containers-storage-overlay-27c15f88eb47bdd96d1dca823422d63cfc2c96381cb560f1c56a3a8378acf82d-merged.mount: Deactivated successfully.
Oct  1 13:13:35 np0005464891 podman[316289]: 2025-10-01 17:13:35.983009311 +0000 UTC m=+0.874189864 container remove 4931362a8d470ab8412ca4dcc38d346ca5e8c2ef7c3a32c3c25cf3deda908ddd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:13:35 np0005464891 systemd[1]: libpod-conmon-4931362a8d470ab8412ca4dcc38d346ca5e8c2ef7c3a32c3c25cf3deda908ddd.scope: Deactivated successfully.
Oct  1 13:13:36 np0005464891 podman[316331]: 2025-10-01 17:13:36.151591157 +0000 UTC m=+0.049860328 container create afdd665eadde524431c69c5411d4c258e9e666f4e9d4c2117a98a14abb8da9dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 13:13:36 np0005464891 systemd[1]: Started libpod-conmon-afdd665eadde524431c69c5411d4c258e9e666f4e9d4c2117a98a14abb8da9dd.scope.
Oct  1 13:13:36 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:13:36 np0005464891 podman[316331]: 2025-10-01 17:13:36.12455868 +0000 UTC m=+0.022827871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:13:36 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/003c30dcceec8256a5b9467795f8cf808f60c19fbb7b9767d68dd09e32abe21a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:13:36 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/003c30dcceec8256a5b9467795f8cf808f60c19fbb7b9767d68dd09e32abe21a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:13:36 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/003c30dcceec8256a5b9467795f8cf808f60c19fbb7b9767d68dd09e32abe21a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:13:36 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/003c30dcceec8256a5b9467795f8cf808f60c19fbb7b9767d68dd09e32abe21a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:13:36 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/003c30dcceec8256a5b9467795f8cf808f60c19fbb7b9767d68dd09e32abe21a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 13:13:36 np0005464891 podman[316331]: 2025-10-01 17:13:36.26216884 +0000 UTC m=+0.160438081 container init afdd665eadde524431c69c5411d4c258e9e666f4e9d4c2117a98a14abb8da9dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_franklin, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:13:36 np0005464891 podman[316331]: 2025-10-01 17:13:36.27483267 +0000 UTC m=+0.173101871 container start afdd665eadde524431c69c5411d4c258e9e666f4e9d4c2117a98a14abb8da9dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 13:13:36 np0005464891 podman[316331]: 2025-10-01 17:13:36.43851179 +0000 UTC m=+0.336780981 container attach afdd665eadde524431c69c5411d4c258e9e666f4e9d4c2117a98a14abb8da9dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  1 13:13:37 np0005464891 hopeful_franklin[316347]: --> passed data devices: 0 physical, 3 LVM
Oct  1 13:13:37 np0005464891 hopeful_franklin[316347]: --> relative data size: 1.0
Oct  1 13:13:37 np0005464891 hopeful_franklin[316347]: --> All data devices are unavailable
Oct  1 13:13:37 np0005464891 systemd[1]: libpod-afdd665eadde524431c69c5411d4c258e9e666f4e9d4c2117a98a14abb8da9dd.scope: Deactivated successfully.
Oct  1 13:13:37 np0005464891 systemd[1]: libpod-afdd665eadde524431c69c5411d4c258e9e666f4e9d4c2117a98a14abb8da9dd.scope: Consumed 1.007s CPU time.
Oct  1 13:13:37 np0005464891 podman[316376]: 2025-10-01 17:13:37.40885999 +0000 UTC m=+0.026529764 container died afdd665eadde524431c69c5411d4c258e9e666f4e9d4c2117a98a14abb8da9dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_franklin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 13:13:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:13:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4160038616' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:13:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:13:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4160038616' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:13:37 np0005464891 nova_compute[259907]: 2025-10-01 17:13:37.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:13:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2276: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:13:37 np0005464891 nova_compute[259907]: 2025-10-01 17:13:37.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:13:37 np0005464891 systemd[1]: var-lib-containers-storage-overlay-003c30dcceec8256a5b9467795f8cf808f60c19fbb7b9767d68dd09e32abe21a-merged.mount: Deactivated successfully.
Oct  1 13:13:38 np0005464891 podman[316376]: 2025-10-01 17:13:38.149979547 +0000 UTC m=+0.767649291 container remove afdd665eadde524431c69c5411d4c258e9e666f4e9d4c2117a98a14abb8da9dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  1 13:13:38 np0005464891 systemd[1]: libpod-conmon-afdd665eadde524431c69c5411d4c258e9e666f4e9d4c2117a98a14abb8da9dd.scope: Deactivated successfully.
Oct  1 13:13:38 np0005464891 podman[316377]: 2025-10-01 17:13:38.284831791 +0000 UTC m=+0.886941135 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  1 13:13:38 np0005464891 podman[316547]: 2025-10-01 17:13:38.791122734 +0000 UTC m=+0.025846445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:13:38 np0005464891 podman[316547]: 2025-10-01 17:13:38.890166099 +0000 UTC m=+0.124889800 container create 3a9e4346f1cfa98ff8096c944c3383c65293872806f7017d1d6fa71b3cff4ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 13:13:38 np0005464891 systemd[1]: Started libpod-conmon-3a9e4346f1cfa98ff8096c944c3383c65293872806f7017d1d6fa71b3cff4ab9.scope.
Oct  1 13:13:39 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:13:39 np0005464891 podman[316547]: 2025-10-01 17:13:39.036012588 +0000 UTC m=+0.270736299 container init 3a9e4346f1cfa98ff8096c944c3383c65293872806f7017d1d6fa71b3cff4ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ishizaka, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:13:39 np0005464891 podman[316547]: 2025-10-01 17:13:39.043488504 +0000 UTC m=+0.278212205 container start 3a9e4346f1cfa98ff8096c944c3383c65293872806f7017d1d6fa71b3cff4ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:13:39 np0005464891 podman[316547]: 2025-10-01 17:13:39.047410142 +0000 UTC m=+0.282133853 container attach 3a9e4346f1cfa98ff8096c944c3383c65293872806f7017d1d6fa71b3cff4ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:13:39 np0005464891 naughty_ishizaka[316565]: 167 167
Oct  1 13:13:39 np0005464891 systemd[1]: libpod-3a9e4346f1cfa98ff8096c944c3383c65293872806f7017d1d6fa71b3cff4ab9.scope: Deactivated successfully.
Oct  1 13:13:39 np0005464891 podman[316547]: 2025-10-01 17:13:39.050282792 +0000 UTC m=+0.285006503 container died 3a9e4346f1cfa98ff8096c944c3383c65293872806f7017d1d6fa71b3cff4ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 13:13:39 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ad747cf704c83692569a4e274116b6785bc2c05c8590aec4c94beceba6593b95-merged.mount: Deactivated successfully.
Oct  1 13:13:39 np0005464891 podman[316547]: 2025-10-01 17:13:39.176861917 +0000 UTC m=+0.411585628 container remove 3a9e4346f1cfa98ff8096c944c3383c65293872806f7017d1d6fa71b3cff4ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:13:39 np0005464891 systemd[1]: libpod-conmon-3a9e4346f1cfa98ff8096c944c3383c65293872806f7017d1d6fa71b3cff4ab9.scope: Deactivated successfully.
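The lines above show one complete short-lived container run — `create`, `init`, `start`, `attach`, `died`, `remove` for id `3a9e4346…`, bracketed by the matching systemd scope activations — which is the pattern a one-shot `podman run --rm`-style invocation (here, a cephadm helper) leaves in the journal. A minimal sketch of extracting that lifecycle from such lines (the regex and helper are illustrative, not part of podman):

```python
import re

# Podman logs each lifecycle transition as a journal line like
#   "podman[316547]: ... container create 3a9e4346... (image=..., name=naughty_ishizaka, ...)".
# Pull the event name and 64-hex container id out of such lines.
EVENT_RE = re.compile(
    r"container (?P<event>create|init|start|attach|died|remove) (?P<cid>[0-9a-f]{64})"
)

def lifecycle(lines):
    """Return the ordered list of lifecycle events seen per container id."""
    events = {}
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            events.setdefault(m.group("cid"), []).append(m.group("event"))
    return events

CID = "3a9e4346f1cfa98ff8096c944c3383c65293872806f7017d1d6fa71b3cff4ab9"
sample = [
    f"podman[316547]: 2025-10-01 17:13:38 container {e} {CID} (image=quay.io/ceph/ceph@sha256:..., name=naughty_ishizaka)"
    for e in ("create", "init", "start", "attach", "died", "remove")
]
```

A container whose event list ends in `died`, `remove` within a second of `start`, as here, completed and was cleaned up rather than crashing.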
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:39.416245) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338819416302, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 422, "num_deletes": 250, "total_data_size": 302634, "memory_usage": 310608, "flush_reason": "Manual Compaction"}
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Oct  1 13:13:39 np0005464891 podman[316591]: 2025-10-01 17:13:39.372053237 +0000 UTC m=+0.027374986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338819484193, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 246778, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45305, "largest_seqno": 45726, "table_properties": {"data_size": 244391, "index_size": 487, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6537, "raw_average_key_size": 20, "raw_value_size": 239553, "raw_average_value_size": 748, "num_data_blocks": 22, "num_entries": 320, "num_filter_entries": 320, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759338800, "oldest_key_time": 1759338800, "file_creation_time": 1759338819, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 67986 microseconds, and 2063 cpu microseconds.
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 13:13:39 np0005464891 podman[316591]: 2025-10-01 17:13:39.485000917 +0000 UTC m=+0.140322556 container create 930f11b15ab73c0da9a4d98a7ed4eefad38ba743b83ba96a8909a1bd0e3a2714 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_napier, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:39.484237) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 246778 bytes OK
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:39.484257) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:39.490948) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:39.490987) EVENT_LOG_v1 {"time_micros": 1759338819490977, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:39.491011) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 300006, prev total WAL file size 300006, number of live WAL files 2.
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:39.491671) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353035' seq:72057594037927935, type:22 .. '6D6772737461740031373536' seq:0, type:0; will stop at (end)
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(240KB)], [95(13MB)]
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338819491733, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 14338241, "oldest_snapshot_seqno": -1}
Oct  1 13:13:39 np0005464891 systemd[1]: Started libpod-conmon-930f11b15ab73c0da9a4d98a7ed4eefad38ba743b83ba96a8909a1bd0e3a2714.scope.
Oct  1 13:13:39 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:13:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d53658f5ad53a8825b03eb11ac3b9db9c158840107893b20ece3cc9a7bbd82dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:13:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d53658f5ad53a8825b03eb11ac3b9db9c158840107893b20ece3cc9a7bbd82dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:13:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d53658f5ad53a8825b03eb11ac3b9db9c158840107893b20ece3cc9a7bbd82dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:13:39 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d53658f5ad53a8825b03eb11ac3b9db9c158840107893b20ece3cc9a7bbd82dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 7503 keys, 11100071 bytes, temperature: kUnknown
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338819638004, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 11100071, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11045973, "index_size": 34210, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18821, "raw_key_size": 191153, "raw_average_key_size": 25, "raw_value_size": 10907606, "raw_average_value_size": 1453, "num_data_blocks": 1354, "num_entries": 7503, "num_filter_entries": 7503, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759338819, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 13:13:39 np0005464891 podman[316591]: 2025-10-01 17:13:39.640016248 +0000 UTC m=+0.295337927 container init 930f11b15ab73c0da9a4d98a7ed4eefad38ba743b83ba96a8909a1bd0e3a2714 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_napier, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:39.638791) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 11100071 bytes
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:39.646620) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 97.6 rd, 75.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 13.4 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(103.1) write-amplify(45.0) OK, records in: 8008, records dropped: 505 output_compression: NoCompression
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:39.646652) EVENT_LOG_v1 {"time_micros": 1759338819646640, "job": 56, "event": "compaction_finished", "compaction_time_micros": 146844, "compaction_time_cpu_micros": 35839, "output_level": 6, "num_output_files": 1, "total_output_size": 11100071, "num_input_records": 8008, "num_output_records": 7503, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338819647254, "job": 56, "event": "table_file_deletion", "file_number": 97}
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338819650815, "job": 56, "event": "table_file_deletion", "file_number": 95}
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:39.491599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:39.650920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:39.650926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:39.650927) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:39.650929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:13:39 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:13:39.650930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
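The JOB 56 summary above prints `read-write-amplify(103.1)` and `write-amplify(45.0)`; both factors can be reproduced from numbers already in the EVENT_LOG lines. A sketch of that arithmetic, assuming RocksDB's convention of normalizing by the freshly flushed L0 input:

```python
# Figures copied from the JOB 55/56 EVENT_LOG lines above.
l0_input = 246778             # file 97: the L0 table flushed by JOB 55
input_data_size = 14338241    # JOB 56 compaction input: files 97 (L0) + 95 (L6)
total_output_size = 11100071  # file 98: the new L6 table

# Write amplification: bytes written to L6 per byte of new L0 data.
write_amplify = total_output_size / l0_input

# Read-write amplification: all bytes read plus all bytes written,
# again per byte of new L0 data.
read_write_amplify = (input_data_size + total_output_size) / l0_input

print(round(write_amplify, 1))       # matches the logged write-amplify(45.0)
print(round(read_write_amplify, 1))  # matches the logged read-write-amplify(103.1)
```

The high factors are expected for this workload: a tiny 240 KB L0 flush forces a rewrite of the whole 13 MB L6 file, which is the cost of keeping a one-level (plus L0) monitor store compact.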
Oct  1 13:13:39 np0005464891 podman[316591]: 2025-10-01 17:13:39.652513314 +0000 UTC m=+0.307834963 container start 930f11b15ab73c0da9a4d98a7ed4eefad38ba743b83ba96a8909a1bd0e3a2714 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_napier, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:13:39 np0005464891 podman[316591]: 2025-10-01 17:13:39.666059957 +0000 UTC m=+0.321381706 container attach 930f11b15ab73c0da9a4d98a7ed4eefad38ba743b83ba96a8909a1bd0e3a2714 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:13:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2277: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]: {
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:    "0": [
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:        {
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "devices": [
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "/dev/loop3"
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            ],
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "lv_name": "ceph_lv0",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "lv_size": "21470642176",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "name": "ceph_lv0",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "tags": {
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.cluster_name": "ceph",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.crush_device_class": "",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.encrypted": "0",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.osd_id": "0",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.type": "block",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.vdo": "0"
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            },
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "type": "block",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "vg_name": "ceph_vg0"
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:        }
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:    ],
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:    "1": [
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:        {
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "devices": [
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "/dev/loop4"
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            ],
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "lv_name": "ceph_lv1",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "lv_size": "21470642176",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "name": "ceph_lv1",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "tags": {
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.cluster_name": "ceph",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.crush_device_class": "",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.encrypted": "0",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.osd_id": "1",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.type": "block",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.vdo": "0"
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            },
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "type": "block",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "vg_name": "ceph_vg1"
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:        }
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:    ],
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:    "2": [
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:        {
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "devices": [
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "/dev/loop5"
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            ],
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "lv_name": "ceph_lv2",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "lv_size": "21470642176",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "name": "ceph_lv2",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "tags": {
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.cluster_name": "ceph",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.crush_device_class": "",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.encrypted": "0",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.osd_id": "2",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.type": "block",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:                "ceph.vdo": "0"
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            },
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "type": "block",
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:            "vg_name": "ceph_vg2"
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:        }
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]:    ]
Oct  1 13:13:40 np0005464891 thirsty_napier[316608]: }
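The JSON block emitted by `thirsty_napier` has the shape of a `ceph-volume lvm list --format json` report: top-level keys are OSD ids, each mapping to a list of LVs with their backing `devices` and `ceph.*` tags. A minimal sketch of reducing such a report to an OSD-to-device map (the helper name is illustrative; the trimmed JSON below copies only the fields used from the log):

```python
import json

# Trimmed copy of the report above, keeping just the fields this sketch reads.
raw = json.dumps({
    "0": [{"devices": ["/dev/loop3"], "lv_path": "/dev/ceph_vg0/ceph_lv0"}],
    "1": [{"devices": ["/dev/loop4"], "lv_path": "/dev/ceph_vg1/ceph_lv1"}],
    "2": [{"devices": ["/dev/loop5"], "lv_path": "/dev/ceph_vg2/ceph_lv2"}],
})

def osd_to_devices(report_json):
    """Map each OSD id to the sorted physical devices backing its LVs."""
    report = json.loads(report_json)
    return {
        osd_id: sorted(dev for lv in lvs for dev in lv["devices"])
        for osd_id, lvs in report.items()
    }

mapping = osd_to_devices(raw)
```

On this node the mapping is three single-device OSDs on loop devices (`/dev/loop3`..`/dev/loop5`), consistent with a test or lab deployment rather than real disks.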
Oct  1 13:13:40 np0005464891 systemd[1]: libpod-930f11b15ab73c0da9a4d98a7ed4eefad38ba743b83ba96a8909a1bd0e3a2714.scope: Deactivated successfully.
Oct  1 13:13:40 np0005464891 podman[316591]: 2025-10-01 17:13:40.446619954 +0000 UTC m=+1.101941613 container died 930f11b15ab73c0da9a4d98a7ed4eefad38ba743b83ba96a8909a1bd0e3a2714 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:13:40 np0005464891 systemd[1]: var-lib-containers-storage-overlay-d53658f5ad53a8825b03eb11ac3b9db9c158840107893b20ece3cc9a7bbd82dd-merged.mount: Deactivated successfully.
Oct  1 13:13:40 np0005464891 nova_compute[259907]: 2025-10-01 17:13:40.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:13:41 np0005464891 podman[316591]: 2025-10-01 17:13:41.008073159 +0000 UTC m=+1.663394808 container remove 930f11b15ab73c0da9a4d98a7ed4eefad38ba743b83ba96a8909a1bd0e3a2714 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_napier, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:13:41 np0005464891 systemd[1]: libpod-conmon-930f11b15ab73c0da9a4d98a7ed4eefad38ba743b83ba96a8909a1bd0e3a2714.scope: Deactivated successfully.
Oct  1 13:13:41 np0005464891 podman[316617]: 2025-10-01 17:13:41.153748113 +0000 UTC m=+0.673091160 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 13:13:41 np0005464891 podman[316797]: 2025-10-01 17:13:41.601323634 +0000 UTC m=+0.026885393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:13:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2278: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:13:41 np0005464891 podman[316797]: 2025-10-01 17:13:41.830543234 +0000 UTC m=+0.256104943 container create 88b9d564db398e92952256b4e0a74e1b1f5b6600cdc39971722f4945857a411d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclaren, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  1 13:13:42 np0005464891 systemd[1]: Started libpod-conmon-88b9d564db398e92952256b4e0a74e1b1f5b6600cdc39971722f4945857a411d.scope.
Oct  1 13:13:42 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:13:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:13:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:13:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:13:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:13:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:13:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:13:42 np0005464891 podman[316797]: 2025-10-01 17:13:42.293777698 +0000 UTC m=+0.719339387 container init 88b9d564db398e92952256b4e0a74e1b1f5b6600cdc39971722f4945857a411d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 13:13:42 np0005464891 podman[316797]: 2025-10-01 17:13:42.301573273 +0000 UTC m=+0.727134952 container start 88b9d564db398e92952256b4e0a74e1b1f5b6600cdc39971722f4945857a411d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 13:13:42 np0005464891 epic_mclaren[316813]: 167 167
Oct  1 13:13:42 np0005464891 systemd[1]: libpod-88b9d564db398e92952256b4e0a74e1b1f5b6600cdc39971722f4945857a411d.scope: Deactivated successfully.
Oct  1 13:13:42 np0005464891 conmon[316813]: conmon 88b9d564db398e929522 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-88b9d564db398e92952256b4e0a74e1b1f5b6600cdc39971722f4945857a411d.scope/container/memory.events
Oct  1 13:13:42 np0005464891 podman[316797]: 2025-10-01 17:13:42.417324241 +0000 UTC m=+0.842885930 container attach 88b9d564db398e92952256b4e0a74e1b1f5b6600cdc39971722f4945857a411d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclaren, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:13:42 np0005464891 podman[316797]: 2025-10-01 17:13:42.418351629 +0000 UTC m=+0.843913298 container died 88b9d564db398e92952256b4e0a74e1b1f5b6600cdc39971722f4945857a411d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:13:42 np0005464891 systemd[1]: var-lib-containers-storage-overlay-9f8b6da07c4315cc2b0594b18e516ec6df1d5b569d308d9e2e732783b525d12b-merged.mount: Deactivated successfully.
Oct  1 13:13:42 np0005464891 nova_compute[259907]: 2025-10-01 17:13:42.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:13:42 np0005464891 nova_compute[259907]: 2025-10-01 17:13:42.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:13:42 np0005464891 podman[316797]: 2025-10-01 17:13:42.901893852 +0000 UTC m=+1.327455541 container remove 88b9d564db398e92952256b4e0a74e1b1f5b6600cdc39971722f4945857a411d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:13:42 np0005464891 systemd[1]: libpod-conmon-88b9d564db398e92952256b4e0a74e1b1f5b6600cdc39971722f4945857a411d.scope: Deactivated successfully.
Oct  1 13:13:43 np0005464891 podman[316830]: 2025-10-01 17:13:43.047324699 +0000 UTC m=+0.502240902 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd)
Oct  1 13:13:43 np0005464891 podman[316859]: 2025-10-01 17:13:43.112010475 +0000 UTC m=+0.048346256 container create dc23aefcb0f5ad81d0d8a0a2bff4fc6eaf6755d544015a3be8a1d20f5ee09e76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_franklin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 13:13:43 np0005464891 systemd[1]: Started libpod-conmon-dc23aefcb0f5ad81d0d8a0a2bff4fc6eaf6755d544015a3be8a1d20f5ee09e76.scope.
Oct  1 13:13:43 np0005464891 podman[316859]: 2025-10-01 17:13:43.088891877 +0000 UTC m=+0.025227688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:13:43 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:13:43 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6bc7cab82f886f7f19d76a2dfb6dad312df7f61c3586822dcba175bc4cb359/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:13:43 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6bc7cab82f886f7f19d76a2dfb6dad312df7f61c3586822dcba175bc4cb359/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:13:43 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6bc7cab82f886f7f19d76a2dfb6dad312df7f61c3586822dcba175bc4cb359/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:13:43 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6bc7cab82f886f7f19d76a2dfb6dad312df7f61c3586822dcba175bc4cb359/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:13:43 np0005464891 podman[316859]: 2025-10-01 17:13:43.210388833 +0000 UTC m=+0.146724634 container init dc23aefcb0f5ad81d0d8a0a2bff4fc6eaf6755d544015a3be8a1d20f5ee09e76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_franklin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:13:43 np0005464891 podman[316859]: 2025-10-01 17:13:43.222695773 +0000 UTC m=+0.159031554 container start dc23aefcb0f5ad81d0d8a0a2bff4fc6eaf6755d544015a3be8a1d20f5ee09e76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 13:13:43 np0005464891 podman[316859]: 2025-10-01 17:13:43.232053231 +0000 UTC m=+0.168389012 container attach dc23aefcb0f5ad81d0d8a0a2bff4fc6eaf6755d544015a3be8a1d20f5ee09e76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_franklin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 13:13:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2279: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:13:44 np0005464891 sweet_franklin[316874]: {
Oct  1 13:13:44 np0005464891 sweet_franklin[316874]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 13:13:44 np0005464891 sweet_franklin[316874]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:13:44 np0005464891 sweet_franklin[316874]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 13:13:44 np0005464891 sweet_franklin[316874]:        "osd_id": 2,
Oct  1 13:13:44 np0005464891 sweet_franklin[316874]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:13:44 np0005464891 sweet_franklin[316874]:        "type": "bluestore"
Oct  1 13:13:44 np0005464891 sweet_franklin[316874]:    },
Oct  1 13:13:44 np0005464891 sweet_franklin[316874]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 13:13:44 np0005464891 sweet_franklin[316874]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:13:44 np0005464891 sweet_franklin[316874]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 13:13:44 np0005464891 sweet_franklin[316874]:        "osd_id": 0,
Oct  1 13:13:44 np0005464891 sweet_franklin[316874]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:13:44 np0005464891 sweet_franklin[316874]:        "type": "bluestore"
Oct  1 13:13:44 np0005464891 sweet_franklin[316874]:    },
Oct  1 13:13:44 np0005464891 sweet_franklin[316874]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 13:13:44 np0005464891 sweet_franklin[316874]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:13:44 np0005464891 sweet_franklin[316874]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 13:13:44 np0005464891 sweet_franklin[316874]:        "osd_id": 1,
Oct  1 13:13:44 np0005464891 sweet_franklin[316874]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:13:44 np0005464891 sweet_franklin[316874]:        "type": "bluestore"
Oct  1 13:13:44 np0005464891 sweet_franklin[316874]:    }
Oct  1 13:13:44 np0005464891 sweet_franklin[316874]: }
Oct  1 13:13:44 np0005464891 systemd[1]: libpod-dc23aefcb0f5ad81d0d8a0a2bff4fc6eaf6755d544015a3be8a1d20f5ee09e76.scope: Deactivated successfully.
Oct  1 13:13:44 np0005464891 systemd[1]: libpod-dc23aefcb0f5ad81d0d8a0a2bff4fc6eaf6755d544015a3be8a1d20f5ee09e76.scope: Consumed 1.049s CPU time.
Oct  1 13:13:44 np0005464891 podman[316859]: 2025-10-01 17:13:44.273277947 +0000 UTC m=+1.209613728 container died dc23aefcb0f5ad81d0d8a0a2bff4fc6eaf6755d544015a3be8a1d20f5ee09e76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 13:13:44 np0005464891 systemd[1]: var-lib-containers-storage-overlay-8d6bc7cab82f886f7f19d76a2dfb6dad312df7f61c3586822dcba175bc4cb359-merged.mount: Deactivated successfully.
Oct  1 13:13:44 np0005464891 podman[316859]: 2025-10-01 17:13:44.331577087 +0000 UTC m=+1.267912868 container remove dc23aefcb0f5ad81d0d8a0a2bff4fc6eaf6755d544015a3be8a1d20f5ee09e76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 13:13:44 np0005464891 systemd[1]: libpod-conmon-dc23aefcb0f5ad81d0d8a0a2bff4fc6eaf6755d544015a3be8a1d20f5ee09e76.scope: Deactivated successfully.
Oct  1 13:13:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:13:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 13:13:44 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:13:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 13:13:44 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:13:44 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 4b8449f5-510a-4db8-b178-702b19826c34 does not exist
Oct  1 13:13:44 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 93a7c38d-7331-40d2-aa42-7b865e87f576 does not exist
Oct  1 13:13:44 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:13:44 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:13:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2280: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:13:45 np0005464891 podman[316971]: 2025-10-01 17:13:45.959564828 +0000 UTC m=+0.062777584 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
Oct  1 13:13:47 np0005464891 nova_compute[259907]: 2025-10-01 17:13:47.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:13:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2281: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:13:47 np0005464891 nova_compute[259907]: 2025-10-01 17:13:47.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:13:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:13:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2282: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:13:49 np0005464891 nova_compute[259907]: 2025-10-01 17:13:49.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:13:49 np0005464891 nova_compute[259907]: 2025-10-01 17:13:49.849 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:13:49 np0005464891 nova_compute[259907]: 2025-10-01 17:13:49.850 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:13:49 np0005464891 nova_compute[259907]: 2025-10-01 17:13:49.850 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:13:49 np0005464891 nova_compute[259907]: 2025-10-01 17:13:49.850 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 13:13:49 np0005464891 nova_compute[259907]: 2025-10-01 17:13:49.851 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:13:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:13:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/610842233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:13:50 np0005464891 nova_compute[259907]: 2025-10-01 17:13:50.273 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:13:50 np0005464891 nova_compute[259907]: 2025-10-01 17:13:50.466 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 13:13:50 np0005464891 nova_compute[259907]: 2025-10-01 17:13:50.468 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4312MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 13:13:50 np0005464891 nova_compute[259907]: 2025-10-01 17:13:50.469 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:13:50 np0005464891 nova_compute[259907]: 2025-10-01 17:13:50.469 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:13:50 np0005464891 nova_compute[259907]: 2025-10-01 17:13:50.561 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 13:13:50 np0005464891 nova_compute[259907]: 2025-10-01 17:13:50.562 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 13:13:50 np0005464891 nova_compute[259907]: 2025-10-01 17:13:50.581 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:13:50 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:13:50 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1627980347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:13:51 np0005464891 nova_compute[259907]: 2025-10-01 17:13:51.011 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:13:51 np0005464891 nova_compute[259907]: 2025-10-01 17:13:51.017 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 13:13:51 np0005464891 nova_compute[259907]: 2025-10-01 17:13:51.038 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 13:13:51 np0005464891 nova_compute[259907]: 2025-10-01 17:13:51.059 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 13:13:51 np0005464891 nova_compute[259907]: 2025-10-01 17:13:51.060 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:13:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2283: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:13:52 np0005464891 nova_compute[259907]: 2025-10-01 17:13:52.056 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:13:52 np0005464891 nova_compute[259907]: 2025-10-01 17:13:52.057 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:13:52 np0005464891 nova_compute[259907]: 2025-10-01 17:13:52.073 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:13:52 np0005464891 nova_compute[259907]: 2025-10-01 17:13:52.074 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 13:13:52 np0005464891 nova_compute[259907]: 2025-10-01 17:13:52.074 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 13:13:52 np0005464891 nova_compute[259907]: 2025-10-01 17:13:52.087 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 13:13:52 np0005464891 nova_compute[259907]: 2025-10-01 17:13:52.088 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:13:52 np0005464891 nova_compute[259907]: 2025-10-01 17:13:52.089 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:13:52 np0005464891 nova_compute[259907]: 2025-10-01 17:13:52.089 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 13:13:52 np0005464891 nova_compute[259907]: 2025-10-01 17:13:52.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:13:52 np0005464891 nova_compute[259907]: 2025-10-01 17:13:52.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:13:52 np0005464891 nova_compute[259907]: 2025-10-01 17:13:52.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:13:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2284: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:13:53 np0005464891 nova_compute[259907]: 2025-10-01 17:13:53.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:13:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:13:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2285: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:13:56 np0005464891 nova_compute[259907]: 2025-10-01 17:13:56.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:13:57 np0005464891 nova_compute[259907]: 2025-10-01 17:13:57.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:13:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2286: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:13:57 np0005464891 nova_compute[259907]: 2025-10-01 17:13:57.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:13:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:13:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2287: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2288: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:02 np0005464891 nova_compute[259907]: 2025-10-01 17:14:02.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:14:02 np0005464891 nova_compute[259907]: 2025-10-01 17:14:02.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:14:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2289: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:14:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2290: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:07 np0005464891 nova_compute[259907]: 2025-10-01 17:14:07.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:14:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2291: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:07 np0005464891 nova_compute[259907]: 2025-10-01 17:14:07.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:14:07 np0005464891 ovn_controller[152409]: 2025-10-01T17:14:07Z|00306|memory_trim|INFO|Detected inactivity (last active 30016 ms ago): trimming memory
Oct  1 13:14:08 np0005464891 podman[317036]: 2025-10-01 17:14:08.961229992 +0000 UTC m=+0.066350514 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  1 13:14:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:14:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2292: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2293: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:12 np0005464891 podman[317055]: 2025-10-01 17:14:12.026340471 +0000 UTC m=+0.133365327 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  1 13:14:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:14:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:14:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:14:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:14:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:14:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:14:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_17:14:12
Oct  1 13:14:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 13:14:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 13:14:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['backups', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'vms', '.mgr', 'images', 'default.rgw.control', 'default.rgw.log']
Oct  1 13:14:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 13:14:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:14:12.474 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:14:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:14:12.475 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:14:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:14:12.476 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:14:12 np0005464891 nova_compute[259907]: 2025-10-01 17:14:12.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:14:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 13:14:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:14:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 13:14:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:14:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:14:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:14:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:14:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:14:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:14:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:14:12 np0005464891 nova_compute[259907]: 2025-10-01 17:14:12.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:14:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2294: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:13 np0005464891 podman[317082]: 2025-10-01 17:14:13.965538739 +0000 UTC m=+0.073487011 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  1 13:14:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:14:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2295: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:16 np0005464891 podman[317103]: 2025-10-01 17:14:16.937593925 +0000 UTC m=+0.057933222 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  1 13:14:17 np0005464891 nova_compute[259907]: 2025-10-01 17:14:17.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:14:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2296: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:17 np0005464891 nova_compute[259907]: 2025-10-01 17:14:17.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:14:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:14:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2297: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2298: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 13:14:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:14:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 13:14:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:14:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 13:14:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:14:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894458247867422 of space, bias 1.0, pg target 0.8683374743602266 quantized to 32 (current 32)
Oct  1 13:14:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:14:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 13:14:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:14:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct  1 13:14:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:14:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 13:14:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:14:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:14:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:14:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 13:14:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:14:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 13:14:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:14:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:14:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:14:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 13:14:22 np0005464891 nova_compute[259907]: 2025-10-01 17:14:22.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:14:23 np0005464891 nova_compute[259907]: 2025-10-01 17:14:23.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:14:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2299: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:14:25 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2300: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:27 np0005464891 nova_compute[259907]: 2025-10-01 17:14:27.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:14:27 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2301: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:28 np0005464891 nova_compute[259907]: 2025-10-01 17:14:28.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:14:29 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:14:29 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2302: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:31 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2303: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:32.043342) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338872043376, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 656, "num_deletes": 251, "total_data_size": 812754, "memory_usage": 824768, "flush_reason": "Manual Compaction"}
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338872067302, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 805630, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45727, "largest_seqno": 46382, "table_properties": {"data_size": 802114, "index_size": 1424, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7841, "raw_average_key_size": 19, "raw_value_size": 795110, "raw_average_value_size": 1948, "num_data_blocks": 63, "num_entries": 408, "num_filter_entries": 408, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759338820, "oldest_key_time": 1759338820, "file_creation_time": 1759338872, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 24013 microseconds, and 2977 cpu microseconds.
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:32.067351) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 805630 bytes OK
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:32.067371) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:32.182419) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:32.182525) EVENT_LOG_v1 {"time_micros": 1759338872182514, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:32.182549) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 809279, prev total WAL file size 809279, number of live WAL files 2.
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:32.183239) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(786KB)], [98(10MB)]
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338872183317, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 11905701, "oldest_snapshot_seqno": -1}
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 7398 keys, 10114282 bytes, temperature: kUnknown
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338872536116, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 10114282, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10062050, "index_size": 32664, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18501, "raw_key_size": 189618, "raw_average_key_size": 25, "raw_value_size": 9926583, "raw_average_value_size": 1341, "num_data_blocks": 1279, "num_entries": 7398, "num_filter_entries": 7398, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759338872, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 13:14:32 np0005464891 nova_compute[259907]: 2025-10-01 17:14:32.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:32.536559) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 10114282 bytes
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:32.682728) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 33.7 rd, 28.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 10.6 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(27.3) write-amplify(12.6) OK, records in: 7911, records dropped: 513 output_compression: NoCompression
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:32.682791) EVENT_LOG_v1 {"time_micros": 1759338872682768, "job": 58, "event": "compaction_finished", "compaction_time_micros": 352933, "compaction_time_cpu_micros": 46337, "output_level": 6, "num_output_files": 1, "total_output_size": 10114282, "num_input_records": 7911, "num_output_records": 7398, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338872684223, "job": 58, "event": "table_file_deletion", "file_number": 100}
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338872689231, "job": 58, "event": "table_file_deletion", "file_number": 98}
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:32.183096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:32.689627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:32.689637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:32.689641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:32.689645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:14:32 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:32.689650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:14:33 np0005464891 nova_compute[259907]: 2025-10-01 17:14:33.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:14:33 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2304: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:34 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:14:35 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2305: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 13:14:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3634769196' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 13:14:37 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 13:14:37 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3634769196' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 13:14:37 np0005464891 nova_compute[259907]: 2025-10-01 17:14:37.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:14:37 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2306: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:38 np0005464891 nova_compute[259907]: 2025-10-01 17:14:38.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:14:39 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:14:39 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2307: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:39 np0005464891 podman[317123]: 2025-10-01 17:14:39.962031319 +0000 UTC m=+0.075925199 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  1 13:14:41 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2308: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:41 np0005464891 nova_compute[259907]: 2025-10-01 17:14:41.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:14:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:14:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:14:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:14:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:14:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:14:42 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:14:42 np0005464891 nova_compute[259907]: 2025-10-01 17:14:42.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:14:42 np0005464891 podman[317142]: 2025-10-01 17:14:42.995302837 +0000 UTC m=+0.093164365 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  1 13:14:43 np0005464891 nova_compute[259907]: 2025-10-01 17:14:43.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:14:43 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2309: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:44 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:14:44 np0005464891 podman[317192]: 2025-10-01 17:14:44.743085278 +0000 UTC m=+0.103515051 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  1 13:14:45 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2310: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:45 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 13:14:46 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:14:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 13:14:46 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:14:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:14:46 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:14:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 13:14:46 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:14:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 13:14:46 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:14:46 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 209e7874-752d-43dc-9fae-8ac8eb4e5cfb does not exist
Oct  1 13:14:46 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev b828cbfd-14af-4fa8-9051-8f8a5b147a5b does not exist
Oct  1 13:14:46 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev 05ced2cf-ba1b-4bf9-9db8-cef965f9d3ab does not exist
Oct  1 13:14:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 13:14:46 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 13:14:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 13:14:46 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:14:46 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:14:46 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:14:47 np0005464891 podman[317582]: 2025-10-01 17:14:47.009426836 +0000 UTC m=+0.026591727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:14:47 np0005464891 podman[317582]: 2025-10-01 17:14:47.188957306 +0000 UTC m=+0.206122097 container create 493d5bbcbdeabf7ca1142dd2310925387ffe94b417b64f1bc9d62fa71bfd4a48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackburn, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:14:47 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:14:47 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:14:47 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 13:14:47 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:14:47 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 13:14:47 np0005464891 systemd[1]: Started libpod-conmon-493d5bbcbdeabf7ca1142dd2310925387ffe94b417b64f1bc9d62fa71bfd4a48.scope.
Oct  1 13:14:47 np0005464891 podman[317596]: 2025-10-01 17:14:47.302222465 +0000 UTC m=+0.081896024 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct  1 13:14:47 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:14:47 np0005464891 podman[317582]: 2025-10-01 17:14:47.406018122 +0000 UTC m=+0.423182953 container init 493d5bbcbdeabf7ca1142dd2310925387ffe94b417b64f1bc9d62fa71bfd4a48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackburn, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:14:47 np0005464891 podman[317582]: 2025-10-01 17:14:47.412872002 +0000 UTC m=+0.430036803 container start 493d5bbcbdeabf7ca1142dd2310925387ffe94b417b64f1bc9d62fa71bfd4a48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackburn, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:14:47 np0005464891 stoic_blackburn[317612]: 167 167
Oct  1 13:14:47 np0005464891 systemd[1]: libpod-493d5bbcbdeabf7ca1142dd2310925387ffe94b417b64f1bc9d62fa71bfd4a48.scope: Deactivated successfully.
Oct  1 13:14:47 np0005464891 podman[317582]: 2025-10-01 17:14:47.529673629 +0000 UTC m=+0.546838440 container attach 493d5bbcbdeabf7ca1142dd2310925387ffe94b417b64f1bc9d62fa71bfd4a48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  1 13:14:47 np0005464891 podman[317582]: 2025-10-01 17:14:47.531033007 +0000 UTC m=+0.548197808 container died 493d5bbcbdeabf7ca1142dd2310925387ffe94b417b64f1bc9d62fa71bfd4a48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackburn, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 13:14:47 np0005464891 nova_compute[259907]: 2025-10-01 17:14:47.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:14:47 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2311: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:47 np0005464891 systemd[1]: var-lib-containers-storage-overlay-9a1409ab62008cace76834af02ad2bd21ed501a301ce5d93493e384072c7cbfe-merged.mount: Deactivated successfully.
Oct  1 13:14:48 np0005464891 nova_compute[259907]: 2025-10-01 17:14:48.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:14:48 np0005464891 podman[317582]: 2025-10-01 17:14:48.320238942 +0000 UTC m=+1.337403753 container remove 493d5bbcbdeabf7ca1142dd2310925387ffe94b417b64f1bc9d62fa71bfd4a48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:14:48 np0005464891 systemd[1]: libpod-conmon-493d5bbcbdeabf7ca1142dd2310925387ffe94b417b64f1bc9d62fa71bfd4a48.scope: Deactivated successfully.
Oct  1 13:14:48 np0005464891 podman[317638]: 2025-10-01 17:14:48.58402825 +0000 UTC m=+0.117664201 container create add13969cbeec379bbdf65a7f8942881abb6332a076b46a69a07d1f51e1b11ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 13:14:48 np0005464891 podman[317638]: 2025-10-01 17:14:48.498959229 +0000 UTC m=+0.032595180 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:14:48 np0005464891 systemd[1]: Started libpod-conmon-add13969cbeec379bbdf65a7f8942881abb6332a076b46a69a07d1f51e1b11ef.scope.
Oct  1 13:14:48 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:14:48 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ab10ad3af3053a76fbc2a981b8da41b60570f3c8e80c3675e07632193d052f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:14:48 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ab10ad3af3053a76fbc2a981b8da41b60570f3c8e80c3675e07632193d052f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:14:48 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ab10ad3af3053a76fbc2a981b8da41b60570f3c8e80c3675e07632193d052f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:14:48 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ab10ad3af3053a76fbc2a981b8da41b60570f3c8e80c3675e07632193d052f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:14:48 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ab10ad3af3053a76fbc2a981b8da41b60570f3c8e80c3675e07632193d052f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 13:14:48 np0005464891 podman[317638]: 2025-10-01 17:14:48.98212278 +0000 UTC m=+0.515758741 container init add13969cbeec379bbdf65a7f8942881abb6332a076b46a69a07d1f51e1b11ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bartik, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 13:14:49 np0005464891 podman[317638]: 2025-10-01 17:14:49.000018024 +0000 UTC m=+0.533653945 container start add13969cbeec379bbdf65a7f8942881abb6332a076b46a69a07d1f51e1b11ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bartik, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 13:14:49 np0005464891 podman[317638]: 2025-10-01 17:14:49.063742365 +0000 UTC m=+0.597378306 container attach add13969cbeec379bbdf65a7f8942881abb6332a076b46a69a07d1f51e1b11ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bartik, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:49.639235) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338889639279, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 408, "num_deletes": 252, "total_data_size": 289247, "memory_usage": 298384, "flush_reason": "Manual Compaction"}
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338889740283, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 287794, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46383, "largest_seqno": 46790, "table_properties": {"data_size": 285336, "index_size": 558, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5036, "raw_average_key_size": 15, "raw_value_size": 280482, "raw_average_value_size": 868, "num_data_blocks": 24, "num_entries": 323, "num_filter_entries": 323, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759338873, "oldest_key_time": 1759338873, "file_creation_time": 1759338889, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 101101 microseconds, and 11804 cpu microseconds.
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:49.740337) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 287794 bytes OK
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:49.740357) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:49.745800) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:49.745833) EVENT_LOG_v1 {"time_micros": 1759338889745824, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:49.745862) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 286643, prev total WAL file size 286643, number of live WAL files 2.
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:49.746385) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323533' seq:0, type:0; will stop at (end)
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(281KB)], [101(9877KB)]
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338889746505, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 10402076, "oldest_snapshot_seqno": -1}
Oct  1 13:14:49 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2312: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 7206 keys, 9662400 bytes, temperature: kUnknown
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338889934536, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 9662400, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9611420, "index_size": 31860, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18053, "raw_key_size": 187450, "raw_average_key_size": 26, "raw_value_size": 9479172, "raw_average_value_size": 1315, "num_data_blocks": 1228, "num_entries": 7206, "num_filter_entries": 7206, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759335202, "oldest_key_time": 0, "file_creation_time": 1759338889, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4cdc7836-3ae4-40a3-8b66-898644585cc0", "db_session_id": "49L36WBKX0OR9VW6SLLI", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:49.934845) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9662400 bytes
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:49.992434) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 55.3 rd, 51.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 9.6 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(69.7) write-amplify(33.6) OK, records in: 7721, records dropped: 515 output_compression: NoCompression
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:49.992525) EVENT_LOG_v1 {"time_micros": 1759338889992503, "job": 60, "event": "compaction_finished", "compaction_time_micros": 188121, "compaction_time_cpu_micros": 45223, "output_level": 6, "num_output_files": 1, "total_output_size": 9662400, "num_input_records": 7721, "num_output_records": 7206, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338889992800, "job": 60, "event": "table_file_deletion", "file_number": 103}
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759338889994784, "job": 60, "event": "table_file_deletion", "file_number": 101}
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:49.746235) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:49.994858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:49.994865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:49.994868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:49.994871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:14:49 np0005464891 ceph-mon[74303]: rocksdb: (Original Log Time 2025/10/01-17:14:49.994873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 13:14:50 np0005464891 hopeful_bartik[317654]: --> passed data devices: 0 physical, 3 LVM
Oct  1 13:14:50 np0005464891 hopeful_bartik[317654]: --> relative data size: 1.0
Oct  1 13:14:50 np0005464891 hopeful_bartik[317654]: --> All data devices are unavailable
Oct  1 13:14:50 np0005464891 systemd[1]: libpod-add13969cbeec379bbdf65a7f8942881abb6332a076b46a69a07d1f51e1b11ef.scope: Deactivated successfully.
Oct  1 13:14:50 np0005464891 podman[317638]: 2025-10-01 17:14:50.130508959 +0000 UTC m=+1.664144890 container died add13969cbeec379bbdf65a7f8942881abb6332a076b46a69a07d1f51e1b11ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:14:50 np0005464891 systemd[1]: libpod-add13969cbeec379bbdf65a7f8942881abb6332a076b46a69a07d1f51e1b11ef.scope: Consumed 1.076s CPU time.
Oct  1 13:14:50 np0005464891 systemd[1]: var-lib-containers-storage-overlay-69ab10ad3af3053a76fbc2a981b8da41b60570f3c8e80c3675e07632193d052f-merged.mount: Deactivated successfully.
Oct  1 13:14:50 np0005464891 podman[317638]: 2025-10-01 17:14:50.797642191 +0000 UTC m=+2.331278102 container remove add13969cbeec379bbdf65a7f8942881abb6332a076b46a69a07d1f51e1b11ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bartik, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct  1 13:14:50 np0005464891 systemd[1]: libpod-conmon-add13969cbeec379bbdf65a7f8942881abb6332a076b46a69a07d1f51e1b11ef.scope: Deactivated successfully.
Oct  1 13:14:50 np0005464891 nova_compute[259907]: 2025-10-01 17:14:50.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:14:51 np0005464891 nova_compute[259907]: 2025-10-01 17:14:51.215 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:14:51 np0005464891 nova_compute[259907]: 2025-10-01 17:14:51.216 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:14:51 np0005464891 nova_compute[259907]: 2025-10-01 17:14:51.216 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:14:51 np0005464891 nova_compute[259907]: 2025-10-01 17:14:51.216 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 13:14:51 np0005464891 nova_compute[259907]: 2025-10-01 17:14:51.217 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 13:14:51 np0005464891 podman[317853]: 2025-10-01 17:14:51.406211996 +0000 UTC m=+0.019413877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:14:51 np0005464891 podman[317853]: 2025-10-01 17:14:51.521899643 +0000 UTC m=+0.135101514 container create ab7487355bb4bb73f6006d71454e6e61a13962d8851e78434b06f26ea70f6ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 13:14:51 np0005464891 systemd[1]: Started libpod-conmon-ab7487355bb4bb73f6006d71454e6e61a13962d8851e78434b06f26ea70f6ced.scope.
Oct  1 13:14:51 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:14:51 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:14:51 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1036239123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:14:51 np0005464891 podman[317853]: 2025-10-01 17:14:51.659860615 +0000 UTC m=+0.273062506 container init ab7487355bb4bb73f6006d71454e6e61a13962d8851e78434b06f26ea70f6ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_herschel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 13:14:51 np0005464891 podman[317853]: 2025-10-01 17:14:51.670881629 +0000 UTC m=+0.284083490 container start ab7487355bb4bb73f6006d71454e6e61a13962d8851e78434b06f26ea70f6ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_herschel, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:14:51 np0005464891 nova_compute[259907]: 2025-10-01 17:14:51.670 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 13:14:51 np0005464891 stupefied_herschel[317869]: 167 167
Oct  1 13:14:51 np0005464891 systemd[1]: libpod-ab7487355bb4bb73f6006d71454e6e61a13962d8851e78434b06f26ea70f6ced.scope: Deactivated successfully.
Oct  1 13:14:51 np0005464891 podman[317853]: 2025-10-01 17:14:51.682576842 +0000 UTC m=+0.295778733 container attach ab7487355bb4bb73f6006d71454e6e61a13962d8851e78434b06f26ea70f6ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_herschel, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 13:14:51 np0005464891 podman[317853]: 2025-10-01 17:14:51.683851827 +0000 UTC m=+0.297053688 container died ab7487355bb4bb73f6006d71454e6e61a13962d8851e78434b06f26ea70f6ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:14:51 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2313: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:51 np0005464891 systemd[1]: var-lib-containers-storage-overlay-b8c3be570cb551aa5f82da5ec8231f6cd73373cb2040fffd6e1d6b836cd41e9e-merged.mount: Deactivated successfully.
Oct  1 13:14:51 np0005464891 nova_compute[259907]: 2025-10-01 17:14:51.846 2 WARNING nova.virt.libvirt.driver [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 13:14:51 np0005464891 nova_compute[259907]: 2025-10-01 17:14:51.847 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4267MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 13:14:51 np0005464891 nova_compute[259907]: 2025-10-01 17:14:51.847 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:14:51 np0005464891 nova_compute[259907]: 2025-10-01 17:14:51.848 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:14:52 np0005464891 podman[317853]: 2025-10-01 17:14:52.006732229 +0000 UTC m=+0.619934110 container remove ab7487355bb4bb73f6006d71454e6e61a13962d8851e78434b06f26ea70f6ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_herschel, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 13:14:52 np0005464891 systemd[1]: libpod-conmon-ab7487355bb4bb73f6006d71454e6e61a13962d8851e78434b06f26ea70f6ced.scope: Deactivated successfully.
Oct  1 13:14:52 np0005464891 nova_compute[259907]: 2025-10-01 17:14:52.095 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  1 13:14:52 np0005464891 nova_compute[259907]: 2025-10-01 17:14:52.096 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  1 13:14:52 np0005464891 nova_compute[259907]: 2025-10-01 17:14:52.119 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 13:14:52 np0005464891 podman[317895]: 2025-10-01 17:14:52.167419048 +0000 UTC m=+0.041475577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:14:52 np0005464891 podman[317895]: 2025-10-01 17:14:52.281977844 +0000 UTC m=+0.156034353 container create 3b7eedb6dc28e4e9fc32f62abd1b72677de4cdac15b33007538f826c32051669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kilby, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  1 13:14:52 np0005464891 systemd[1]: Started libpod-conmon-3b7eedb6dc28e4e9fc32f62abd1b72677de4cdac15b33007538f826c32051669.scope.
Oct  1 13:14:52 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:14:52 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef757c3b5658cdb52db6bec456462c5f72e32cb8953f9f1d0c0e9d933d1f3753/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:14:52 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef757c3b5658cdb52db6bec456462c5f72e32cb8953f9f1d0c0e9d933d1f3753/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:14:52 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef757c3b5658cdb52db6bec456462c5f72e32cb8953f9f1d0c0e9d933d1f3753/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:14:52 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef757c3b5658cdb52db6bec456462c5f72e32cb8953f9f1d0c0e9d933d1f3753/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:14:52 np0005464891 podman[317895]: 2025-10-01 17:14:52.535882548 +0000 UTC m=+0.409939027 container init 3b7eedb6dc28e4e9fc32f62abd1b72677de4cdac15b33007538f826c32051669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kilby, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  1 13:14:52 np0005464891 podman[317895]: 2025-10-01 17:14:52.543316994 +0000 UTC m=+0.417373463 container start 3b7eedb6dc28e4e9fc32f62abd1b72677de4cdac15b33007538f826c32051669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 13:14:52 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 13:14:52 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3341352500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 13:14:52 np0005464891 nova_compute[259907]: 2025-10-01 17:14:52.598 2 DEBUG oslo_concurrency.processutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 13:14:52 np0005464891 nova_compute[259907]: 2025-10-01 17:14:52.605 2 DEBUG nova.compute.provider_tree [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed in ProviderTree for provider: bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  1 13:14:52 np0005464891 nova_compute[259907]: 2025-10-01 17:14:52.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:14:52 np0005464891 podman[317895]: 2025-10-01 17:14:52.658177968 +0000 UTC m=+0.532234457 container attach 3b7eedb6dc28e4e9fc32f62abd1b72677de4cdac15b33007538f826c32051669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 13:14:52 np0005464891 nova_compute[259907]: 2025-10-01 17:14:52.682 2 DEBUG nova.scheduler.client.report [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Inventory has not changed for provider bc459ca1-2cfa-468c-b8b4-be58cf2e5ef8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  1 13:14:52 np0005464891 nova_compute[259907]: 2025-10-01 17:14:52.685 2 DEBUG nova.compute.resource_tracker [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  1 13:14:52 np0005464891 nova_compute[259907]: 2025-10-01 17:14:52.686 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 13:14:53 np0005464891 nova_compute[259907]: 2025-10-01 17:14:53.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  1 13:14:53 np0005464891 zen_kilby[317932]: {
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:    "0": [
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:        {
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "devices": [
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "/dev/loop3"
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            ],
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "lv_name": "ceph_lv0",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "lv_size": "21470642176",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "lv_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "name": "ceph_lv0",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "tags": {
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.block_uuid": "j6Nmfe-Rgej-Gj30-DYsp-R0b7-2sV7-FlRF7B",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.cluster_name": "ceph",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.crush_device_class": "",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.encrypted": "0",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.osd_fsid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.osd_id": "0",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.type": "block",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.vdo": "0"
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            },
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "type": "block",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "vg_name": "ceph_vg0"
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:        }
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:    ],
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:    "1": [
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:        {
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "devices": [
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "/dev/loop4"
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            ],
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "lv_name": "ceph_lv1",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "lv_size": "21470642176",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=de7d462b-eb5f-4e2e-be78-18c7710c6a61,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "lv_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "name": "ceph_lv1",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "tags": {
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.block_uuid": "OZ0KjL-jRjI-QU6Q-6Gxj-1ntG-pI0e-zzoKhX",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.cluster_name": "ceph",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.crush_device_class": "",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.encrypted": "0",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.osd_fsid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.osd_id": "1",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.type": "block",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.vdo": "0"
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            },
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "type": "block",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "vg_name": "ceph_vg1"
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:        }
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:    ],
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:    "2": [
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:        {
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "devices": [
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "/dev/loop5"
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            ],
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "lv_name": "ceph_lv2",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "lv_size": "21470642176",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f882664-54d4-4e41-96ff-3d2c8223e250,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "lv_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "name": "ceph_lv2",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "tags": {
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.block_uuid": "X80F2J-IZ1D-RQ4i-FXnA-gIPF-rcOu-aedJWq",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.cephx_lockbox_secret": "",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.cluster_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.cluster_name": "ceph",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.crush_device_class": "",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.encrypted": "0",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.osd_fsid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.osd_id": "2",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.type": "block",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:                "ceph.vdo": "0"
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            },
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "type": "block",
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:            "vg_name": "ceph_vg2"
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:        }
Oct  1 13:14:53 np0005464891 zen_kilby[317932]:    ]
Oct  1 13:14:53 np0005464891 zen_kilby[317932]: }
Oct  1 13:14:53 np0005464891 systemd[1]: libpod-3b7eedb6dc28e4e9fc32f62abd1b72677de4cdac15b33007538f826c32051669.scope: Deactivated successfully.
Oct  1 13:14:53 np0005464891 podman[317895]: 2025-10-01 17:14:53.390818731 +0000 UTC m=+1.264875240 container died 3b7eedb6dc28e4e9fc32f62abd1b72677de4cdac15b33007538f826c32051669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 13:14:53 np0005464891 nova_compute[259907]: 2025-10-01 17:14:53.686 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 13:14:53 np0005464891 nova_compute[259907]: 2025-10-01 17:14:53.687 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  1 13:14:53 np0005464891 nova_compute[259907]: 2025-10-01 17:14:53.687 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  1 13:14:53 np0005464891 nova_compute[259907]: 2025-10-01 17:14:53.761 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  1 13:14:53 np0005464891 nova_compute[259907]: 2025-10-01 17:14:53.761 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 13:14:53 np0005464891 nova_compute[259907]: 2025-10-01 17:14:53.761 2 DEBUG nova.compute.manager [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  1 13:14:53 np0005464891 systemd[1]: var-lib-containers-storage-overlay-ef757c3b5658cdb52db6bec456462c5f72e32cb8953f9f1d0c0e9d933d1f3753-merged.mount: Deactivated successfully.
Oct  1 13:14:53 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2314: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:53 np0005464891 nova_compute[259907]: 2025-10-01 17:14:53.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 13:14:53 np0005464891 nova_compute[259907]: 2025-10-01 17:14:53.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 13:14:53 np0005464891 nova_compute[259907]: 2025-10-01 17:14:53.805 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 13:14:54 np0005464891 podman[317895]: 2025-10-01 17:14:54.155764455 +0000 UTC m=+2.029820924 container remove 3b7eedb6dc28e4e9fc32f62abd1b72677de4cdac15b33007538f826c32051669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kilby, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  1 13:14:54 np0005464891 systemd[1]: libpod-conmon-3b7eedb6dc28e4e9fc32f62abd1b72677de4cdac15b33007538f826c32051669.scope: Deactivated successfully.
Oct  1 13:14:54 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:14:54 np0005464891 podman[318097]: 2025-10-01 17:14:54.792562459 +0000 UTC m=+0.062406145 container create 8e244692666dec1fe89ee305e9e03d7e2c6ab3558b782ed21f717215b0b62448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 13:14:54 np0005464891 nova_compute[259907]: 2025-10-01 17:14:54.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 13:14:54 np0005464891 podman[318097]: 2025-10-01 17:14:54.751807013 +0000 UTC m=+0.021650729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:14:54 np0005464891 systemd[1]: Started libpod-conmon-8e244692666dec1fe89ee305e9e03d7e2c6ab3558b782ed21f717215b0b62448.scope.
Oct  1 13:14:55 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:14:55 np0005464891 podman[318097]: 2025-10-01 17:14:55.104673802 +0000 UTC m=+0.374517518 container init 8e244692666dec1fe89ee305e9e03d7e2c6ab3558b782ed21f717215b0b62448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 13:14:55 np0005464891 podman[318097]: 2025-10-01 17:14:55.113109985 +0000 UTC m=+0.382953711 container start 8e244692666dec1fe89ee305e9e03d7e2c6ab3558b782ed21f717215b0b62448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_goldstine, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 13:14:55 np0005464891 competent_goldstine[318114]: 167 167
Oct  1 13:14:55 np0005464891 systemd[1]: libpod-8e244692666dec1fe89ee305e9e03d7e2c6ab3558b782ed21f717215b0b62448.scope: Deactivated successfully.
Oct  1 13:14:55 np0005464891 podman[318097]: 2025-10-01 17:14:55.250258164 +0000 UTC m=+0.520101850 container attach 8e244692666dec1fe89ee305e9e03d7e2c6ab3558b782ed21f717215b0b62448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 13:14:55 np0005464891 podman[318097]: 2025-10-01 17:14:55.25080911 +0000 UTC m=+0.520652796 container died 8e244692666dec1fe89ee305e9e03d7e2c6ab3558b782ed21f717215b0b62448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_goldstine, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 13:14:55 np0005464891 systemd[1]: var-lib-containers-storage-overlay-89303078f75b33ba82adf74ce24829302b5fc375e279d2e8d4b7df1b2de0036e-merged.mount: Deactivated successfully.
Oct  1 13:14:55 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2315: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:56 np0005464891 podman[318097]: 2025-10-01 17:14:56.140394979 +0000 UTC m=+1.410238675 container remove 8e244692666dec1fe89ee305e9e03d7e2c6ab3558b782ed21f717215b0b62448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_goldstine, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 13:14:56 np0005464891 systemd[1]: libpod-conmon-8e244692666dec1fe89ee305e9e03d7e2c6ab3558b782ed21f717215b0b62448.scope: Deactivated successfully.
Oct  1 13:14:56 np0005464891 podman[318140]: 2025-10-01 17:14:56.295583147 +0000 UTC m=+0.025074264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 13:14:56 np0005464891 podman[318140]: 2025-10-01 17:14:56.401713218 +0000 UTC m=+0.131204335 container create 16be58a1994906351bd2e969b3a68814256b3cb82416a993b5a928051fa59c8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swartz, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 13:14:56 np0005464891 systemd[1]: Started libpod-conmon-16be58a1994906351bd2e969b3a68814256b3cb82416a993b5a928051fa59c8d.scope.
Oct  1 13:14:56 np0005464891 systemd[1]: Started libcrun container.
Oct  1 13:14:56 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88d08b38b81a5b0f3bd4575bfd0766c924bf5d332603b51306bb97bc363f7008/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 13:14:56 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88d08b38b81a5b0f3bd4575bfd0766c924bf5d332603b51306bb97bc363f7008/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 13:14:56 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88d08b38b81a5b0f3bd4575bfd0766c924bf5d332603b51306bb97bc363f7008/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 13:14:56 np0005464891 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88d08b38b81a5b0f3bd4575bfd0766c924bf5d332603b51306bb97bc363f7008/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 13:14:56 np0005464891 podman[318140]: 2025-10-01 17:14:56.829185139 +0000 UTC m=+0.558676266 container init 16be58a1994906351bd2e969b3a68814256b3cb82416a993b5a928051fa59c8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 13:14:56 np0005464891 podman[318140]: 2025-10-01 17:14:56.835820782 +0000 UTC m=+0.565311879 container start 16be58a1994906351bd2e969b3a68814256b3cb82416a993b5a928051fa59c8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swartz, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 13:14:57 np0005464891 podman[318140]: 2025-10-01 17:14:57.045682971 +0000 UTC m=+0.775174078 container attach 16be58a1994906351bd2e969b3a68814256b3cb82416a993b5a928051fa59c8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  1 13:14:57 np0005464891 systemd-logind[801]: New session 54 of user zuul.
Oct  1 13:14:57 np0005464891 systemd[1]: Started Session 54 of User zuul.
Oct  1 13:14:57 np0005464891 nova_compute[259907]: 2025-10-01 17:14:57.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:14:57 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2316: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:14:57 np0005464891 heuristic_swartz[318156]: {
Oct  1 13:14:57 np0005464891 heuristic_swartz[318156]:    "1f882664-54d4-4e41-96ff-3d2c8223e250": {
Oct  1 13:14:57 np0005464891 heuristic_swartz[318156]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:14:57 np0005464891 heuristic_swartz[318156]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 13:14:57 np0005464891 heuristic_swartz[318156]:        "osd_id": 2,
Oct  1 13:14:57 np0005464891 heuristic_swartz[318156]:        "osd_uuid": "1f882664-54d4-4e41-96ff-3d2c8223e250",
Oct  1 13:14:57 np0005464891 heuristic_swartz[318156]:        "type": "bluestore"
Oct  1 13:14:57 np0005464891 heuristic_swartz[318156]:    },
Oct  1 13:14:57 np0005464891 heuristic_swartz[318156]:    "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c": {
Oct  1 13:14:57 np0005464891 heuristic_swartz[318156]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:14:57 np0005464891 heuristic_swartz[318156]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 13:14:57 np0005464891 heuristic_swartz[318156]:        "osd_id": 0,
Oct  1 13:14:57 np0005464891 heuristic_swartz[318156]:        "osd_uuid": "2f8ff2ab-7a8d-429a-9b0d-b8215a229c6c",
Oct  1 13:14:57 np0005464891 heuristic_swartz[318156]:        "type": "bluestore"
Oct  1 13:14:57 np0005464891 heuristic_swartz[318156]:    },
Oct  1 13:14:57 np0005464891 heuristic_swartz[318156]:    "de7d462b-eb5f-4e2e-be78-18c7710c6a61": {
Oct  1 13:14:57 np0005464891 heuristic_swartz[318156]:        "ceph_fsid": "6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5",
Oct  1 13:14:57 np0005464891 heuristic_swartz[318156]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 13:14:57 np0005464891 heuristic_swartz[318156]:        "osd_id": 1,
Oct  1 13:14:57 np0005464891 heuristic_swartz[318156]:        "osd_uuid": "de7d462b-eb5f-4e2e-be78-18c7710c6a61",
Oct  1 13:14:57 np0005464891 heuristic_swartz[318156]:        "type": "bluestore"
Oct  1 13:14:57 np0005464891 heuristic_swartz[318156]:    }
Oct  1 13:14:57 np0005464891 heuristic_swartz[318156]: }
Oct  1 13:14:57 np0005464891 systemd[1]: libpod-16be58a1994906351bd2e969b3a68814256b3cb82416a993b5a928051fa59c8d.scope: Deactivated successfully.
Oct  1 13:14:57 np0005464891 podman[318140]: 2025-10-01 17:14:57.859821776 +0000 UTC m=+1.589312903 container died 16be58a1994906351bd2e969b3a68814256b3cb82416a993b5a928051fa59c8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swartz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 13:14:58 np0005464891 nova_compute[259907]: 2025-10-01 17:14:58.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:14:58 np0005464891 nova_compute[259907]: 2025-10-01 17:14:58.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:14:59 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:14:59 np0005464891 systemd[1]: var-lib-containers-storage-overlay-88d08b38b81a5b0f3bd4575bfd0766c924bf5d332603b51306bb97bc363f7008-merged.mount: Deactivated successfully.
Oct  1 13:14:59 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2317: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:15:00 np0005464891 podman[318140]: 2025-10-01 17:15:00.136096589 +0000 UTC m=+3.865587696 container remove 16be58a1994906351bd2e969b3a68814256b3cb82416a993b5a928051fa59c8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 13:15:00 np0005464891 systemd[1]: libpod-conmon-16be58a1994906351bd2e969b3a68814256b3cb82416a993b5a928051fa59c8d.scope: Deactivated successfully.
Oct  1 13:15:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 13:15:00 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:15:00 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 13:15:00 np0005464891 ceph-mon[74303]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:15:00 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev e80deecf-1cbb-44af-80ac-14dc88358a70 does not exist
Oct  1 13:15:00 np0005464891 ceph-mgr[74592]: [progress WARNING root] complete: ev e0aaf332-131f-4fda-92aa-0fe3fc1e14e7 does not exist
Oct  1 13:15:01 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:15:01 np0005464891 ceph-mon[74303]: from='mgr.14132 192.168.122.100:0/3907234026' entity='mgr.compute-0.ieawdb' 
Oct  1 13:15:01 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2318: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:15:02 np0005464891 nova_compute[259907]: 2025-10-01 17:15:02.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:15:03 np0005464891 nova_compute[259907]: 2025-10-01 17:15:03.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:15:03 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.19263 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 13:15:03 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2319: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:15:04 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.19265 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 13:15:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:15:04 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct  1 13:15:04 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2746724501' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  1 13:15:05 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2320: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:15:07 np0005464891 nova_compute[259907]: 2025-10-01 17:15:07.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:15:07 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2321: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:15:08 np0005464891 nova_compute[259907]: 2025-10-01 17:15:08.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:15:09 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:15:09 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2322: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:15:10 np0005464891 podman[318553]: 2025-10-01 17:15:10.971726401 +0000 UTC m=+0.077220484 container health_status bedadcfe3ed6b7ca20b15ff8fd407326224df78d00d14ea3c9cfb21550ae5b1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct  1 13:15:11 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2323: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:15:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:15:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:15:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:15:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:15:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 13:15:12 np0005464891 ceph-mgr[74592]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 13:15:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Optimize plan auto_2025-10-01_17:15:12
Oct  1 13:15:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 13:15:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] do_upmap
Oct  1 13:15:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'images', 'backups']
Oct  1 13:15:12 np0005464891 ceph-mgr[74592]: [balancer INFO root] prepared 0/10 changes
Oct  1 13:15:12 np0005464891 ovs-vsctl[318601]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct  1 13:15:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:15:12.475 162546 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:15:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:15:12.477 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:15:12 np0005464891 ovn_metadata_agent[162541]: 2025-10-01 17:15:12.477 162546 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:15:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 13:15:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:15:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 13:15:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 13:15:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:15:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:15:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 13:15:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:15:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 13:15:12 np0005464891 ceph-mgr[74592]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 13:15:12 np0005464891 nova_compute[259907]: 2025-10-01 17:15:12.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:15:13 np0005464891 nova_compute[259907]: 2025-10-01 17:15:13.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:15:13 np0005464891 virtqemud[259614]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct  1 13:15:13 np0005464891 virtqemud[259614]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct  1 13:15:13 np0005464891 virtqemud[259614]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  1 13:15:13 np0005464891 nova_compute[259907]: 2025-10-01 17:15:13.804 2 DEBUG oslo_service.periodic_task [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 13:15:13 np0005464891 nova_compute[259907]: 2025-10-01 17:15:13.805 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:15:13 np0005464891 nova_compute[259907]: 2025-10-01 17:15:13.806 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:15:13 np0005464891 nova_compute[259907]: 2025-10-01 17:15:13.806 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:15:13 np0005464891 nova_compute[259907]: 2025-10-01 17:15:13.806 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 13:15:13 np0005464891 nova_compute[259907]: 2025-10-01 17:15:13.806 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 13:15:13 np0005464891 nova_compute[259907]: 2025-10-01 17:15:13.807 2 DEBUG oslo_concurrency.lockutils [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 13:15:13 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2324: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:15:13 np0005464891 podman[318828]: 2025-10-01 17:15:13.981733447 +0000 UTC m=+0.098657897 container health_status 03a8c15c9f52dfe9042b877b79b4c385ff635cec2d385102bf21660e429f1dbe (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  1 13:15:14 np0005464891 ceph-mds[100500]: mds.cephfs.compute-0.dnoypt asok_command: cache status {prefix=cache status} (starting...)
Oct  1 13:15:14 np0005464891 ceph-mds[100500]: mds.cephfs.compute-0.dnoypt asok_command: client ls {prefix=client ls} (starting...)
Oct  1 13:15:14 np0005464891 lvm[318985]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  1 13:15:14 np0005464891 lvm[318985]: VG ceph_vg0 finished
Oct  1 13:15:14 np0005464891 lvm[319008]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  1 13:15:14 np0005464891 lvm[319008]: VG ceph_vg2 finished
Oct  1 13:15:14 np0005464891 lvm[319009]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  1 13:15:14 np0005464891 lvm[319009]: VG ceph_vg1 finished
Oct  1 13:15:14 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:15:14 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.19269 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 13:15:14 np0005464891 kernel: block sr0: the capability attribute has been deprecated.
Oct  1 13:15:14 np0005464891 ceph-mds[100500]: mds.cephfs.compute-0.dnoypt asok_command: damage ls {prefix=damage ls} (starting...)
Oct  1 13:15:14 np0005464891 ceph-mds[100500]: mds.cephfs.compute-0.dnoypt asok_command: dump loads {prefix=dump loads} (starting...)
Oct  1 13:15:14 np0005464891 podman[319064]: 2025-10-01 17:15:14.966131745 +0000 UTC m=+0.070876779 container health_status 4856fa22f7aa8322fd2a7cc84810b876d812fdc4752d167f224c6269eff6bb69 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  1 13:15:15 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.19271 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 13:15:15 np0005464891 ceph-mds[100500]: mds.cephfs.compute-0.dnoypt asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct  1 13:15:15 np0005464891 ceph-mds[100500]: mds.cephfs.compute-0.dnoypt asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct  1 13:15:15 np0005464891 ceph-mds[100500]: mds.cephfs.compute-0.dnoypt asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct  1 13:15:15 np0005464891 ceph-mds[100500]: mds.cephfs.compute-0.dnoypt asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct  1 13:15:15 np0005464891 ceph-mds[100500]: mds.cephfs.compute-0.dnoypt asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct  1 13:15:15 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct  1 13:15:15 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1294893950' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  1 13:15:15 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2325: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:15:15 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.19277 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 13:15:15 np0005464891 ceph-mgr[74592]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  1 13:15:15 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T17:15:15.864+0000 7f1d20c83640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  1 13:15:15 np0005464891 ceph-mds[100500]: mds.cephfs.compute-0.dnoypt asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct  1 13:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 13:15:16 np0005464891 ceph-osd[87649]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 31K writes, 119K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s#012Cumulative WAL: 31K writes, 11K syncs, 2.75 writes per sync, written: 0.09 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3532 writes, 19K keys, 3532 commit groups, 1.0 writes per commit group, ingest: 13.32 MB, 0.02 MB/s#012Interval WAL: 3532 writes, 1376 syncs, 2.57 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 13:15:16 np0005464891 ceph-mds[100500]: mds.cephfs.compute-0.dnoypt asok_command: ops {prefix=ops} (starting...)
Oct  1 13:15:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 13:15:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/555155427' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 13:15:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct  1 13:15:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1503575095' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct  1 13:15:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct  1 13:15:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/771847622' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct  1 13:15:16 np0005464891 ceph-mds[100500]: mds.cephfs.compute-0.dnoypt asok_command: session ls {prefix=session ls} (starting...)
Oct  1 13:15:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  1 13:15:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2920304310' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  1 13:15:16 np0005464891 ceph-mds[100500]: mds.cephfs.compute-0.dnoypt asok_command: status {prefix=status} (starting...)
Oct  1 13:15:16 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct  1 13:15:16 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/424250352' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct  1 13:15:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  1 13:15:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2030948842' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  1 13:15:17 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.19291 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 13:15:17 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  1 13:15:17 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3669665130' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  1 13:15:17 np0005464891 nova_compute[259907]: 2025-10-01 17:15:17.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:15:17 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.19295 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 13:15:17 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2326: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:15:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  1 13:15:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3219854249' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  1 13:15:18 np0005464891 nova_compute[259907]: 2025-10-01 17:15:18.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:15:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct  1 13:15:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3790989290' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  1 13:15:18 np0005464891 podman[319487]: 2025-10-01 17:15:18.243089776 +0000 UTC m=+0.351439431 container health_status 00cff9fd4e135a1f073c39a0281dbe8234d0305aeeeb32c1156a5c6715f66a67 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  1 13:15:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct  1 13:15:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2732797821' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  1 13:15:18 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct  1 13:15:18 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1907674265' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct  1 13:15:19 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.19307 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 13:15:19 np0005464891 ceph-mgr[74592]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  1 13:15:19 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T17:15:19.318+0000 7f1d20c83640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  1 13:15:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  1 13:15:19 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/257392986' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  1 13:15:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:15:19 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.19311 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 13:15:19 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct  1 13:15:19 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2430323730' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct  1 13:15:19 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2327: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:15:20 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.19313 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 13:15:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct  1 13:15:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3103972080' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 225 heartbeat osd_stat(store_statfs(0x4f7756000/0x0/0x4ffc00000, data 0x39b96d1/0x3b06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 225 ms_handle_reset con 0x56404f5a2c00 session 0x56404f4925a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 130007040 unmapped: 53624832 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1760534 data_alloc: 234881024 data_used: 12132352
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 225 handle_osd_map epochs [227,227], i have 225, src has [1,227]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 225 handle_osd_map epochs [226,227], i have 225, src has [1,227]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 227 ms_handle_reset con 0x5640501a2000 session 0x56404e5da960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 130015232 unmapped: 53616640 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 130015232 unmapped: 53616640 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 227 ms_handle_reset con 0x56404e694800 session 0x56404f5912c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 227 heartbeat osd_stat(store_statfs(0x4f8f17000/0x0/0x4ffc00000, data 0x203dcd6/0x2189000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 130015232 unmapped: 53616640 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.657832146s of 12.565610886s, submitted: 261
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 227 ms_handle_reset con 0x56404f5a2c00 session 0x5640501594a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 227 ms_handle_reset con 0x564050ff7c00 session 0x564052c82960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 51863552 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 227 ms_handle_reset con 0x56405104a800 session 0x564052c821e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 51847168 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1772604 data_alloc: 234881024 data_used: 15360000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 227 handle_osd_map epochs [227,228], i have 227, src has [1,228]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 51847168 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 228 handle_osd_map epochs [228,229], i have 228, src has [1,229]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 229 heartbeat osd_stat(store_statfs(0x4f90cf000/0x0/0x4ffc00000, data 0x203f83f/0x218e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 229 ms_handle_reset con 0x564052ac7000 session 0x56404e8b9a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 51847168 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 229 ms_handle_reset con 0x56404e694800 session 0x56404e2f5c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 229 ms_handle_reset con 0x56404f5a2c00 session 0x564050939e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 229 ms_handle_reset con 0x564050ff7c00 session 0x5640508c9a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 131727360 unmapped: 51904512 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 131727360 unmapped: 51904512 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 131727360 unmapped: 51904512 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 229 ms_handle_reset con 0x56405104a800 session 0x5640508c94a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1840764 data_alloc: 234881024 data_used: 15380480
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 229 handle_osd_map epochs [229,230], i have 229, src has [1,230]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 230 ms_handle_reset con 0x564052bdb000 session 0x5640508c9860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 230 ms_handle_reset con 0x564052bda400 session 0x56404e5d6b40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 130088960 unmapped: 53542912 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 230 heartbeat osd_stat(store_statfs(0x4f7d4e000/0x0/0x4ffc00000, data 0x33bb061/0x350f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 230 handle_osd_map epochs [231,231], i have 230, src has [1,231]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 231 ms_handle_reset con 0x56404e694800 session 0x56404e824000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 231 ms_handle_reset con 0x56405183a400 session 0x56404e3d1a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 231 ms_handle_reset con 0x56405090f400 session 0x5640508c8f00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 130105344 unmapped: 53526528 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 130105344 unmapped: 53526528 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 231 ms_handle_reset con 0x56404f5a2c00 session 0x56404e5db2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 231 ms_handle_reset con 0x56404f5a2c00 session 0x5640506852c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.475681305s of 10.003683090s, submitted: 128
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 231 ms_handle_reset con 0x56404e694800 session 0x56404e8b8960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 130129920 unmapped: 53501952 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 231 heartbeat osd_stat(store_statfs(0x4f7d4c000/0x0/0x4ffc00000, data 0x33bcd74/0x3511000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 231 ms_handle_reset con 0x56405090f400 session 0x564050eba780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 231 ms_handle_reset con 0x56405183a400 session 0x56404f498960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 231 ms_handle_reset con 0x56405104a800 session 0x564050e92b40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 231 ms_handle_reset con 0x564050ff7c00 session 0x564050ebaf00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 130162688 unmapped: 53469184 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1944216 data_alloc: 234881024 data_used: 15384576
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 231 ms_handle_reset con 0x56404e694800 session 0x564050ebbe00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 231 heartbeat osd_stat(store_statfs(0x4f7d4d000/0x0/0x4ffc00000, data 0x33bcd74/0x3511000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 231 handle_osd_map epochs [231,232], i have 231, src has [1,232]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 231 handle_osd_map epochs [232,232], i have 232, src has [1,232]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 232 ms_handle_reset con 0x56404f5a2c00 session 0x56404e7b05a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 232 ms_handle_reset con 0x56405183a400 session 0x56404f499c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 130367488 unmapped: 53264384 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 232 ms_handle_reset con 0x564052bda400 session 0x56404e60a3c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 232 ms_handle_reset con 0x56404f5a2c00 session 0x5640508c90e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 232 handle_osd_map epochs [233,233], i have 232, src has [1,233]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 233 ms_handle_reset con 0x56405183a400 session 0x564050d5b680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 233 ms_handle_reset con 0x564052ac7400 session 0x56404e90b680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 130400256 unmapped: 53231616 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 234 ms_handle_reset con 0x564050ff7c00 session 0x564050d5b2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 234 ms_handle_reset con 0x564052ac7800 session 0x564050d5a780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 234 ms_handle_reset con 0x564050e54400 session 0x564050afa960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 234 ms_handle_reset con 0x56404e694800 session 0x564050efda40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 53166080 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 234 ms_handle_reset con 0x56404f5a2c00 session 0x564052c83e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 234 ms_handle_reset con 0x564050ff7c00 session 0x5640508c9860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 53149696 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 234 ms_handle_reset con 0x564052ac7400 session 0x564050685860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 234 ms_handle_reset con 0x56404e694800 session 0x56404f4f23c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 234 handle_osd_map epochs [234,235], i have 234, src has [1,235]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 235 ms_handle_reset con 0x56404f5a2c00 session 0x56404f4f2000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 51027968 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 235 ms_handle_reset con 0x564050e54400 session 0x56404f4f3c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1883810 data_alloc: 234881024 data_used: 15392768
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 235 ms_handle_reset con 0x56405183a400 session 0x5640508c8f00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 235 handle_osd_map epochs [235,236], i have 235, src has [1,236]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 236 ms_handle_reset con 0x564050ff7c00 session 0x56404f499c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 236 heartbeat osd_stat(store_statfs(0x4f7d41000/0x0/0x4ffc00000, data 0x33c229d/0x351c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 131563520 unmapped: 52068352 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 236 handle_osd_map epochs [236,237], i have 236, src has [1,237]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 237 ms_handle_reset con 0x56404e694800 session 0x564050eba780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 237 ms_handle_reset con 0x56404f5a2c00 session 0x564050f19c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 132612096 unmapped: 51019776 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 237 handle_osd_map epochs [238,238], i have 237, src has [1,238]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 238 ms_handle_reset con 0x564050e54400 session 0x564050ebaf00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 238 heartbeat osd_stat(store_statfs(0x4f899a000/0x0/0x4ffc00000, data 0x27665e8/0x28c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 238 ms_handle_reset con 0x564050ff7c00 session 0x564050e92b40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 238 ms_handle_reset con 0x56405183a400 session 0x56404e5da960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 132669440 unmapped: 50962432 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 238 ms_handle_reset con 0x56404e694800 session 0x56404e3d1a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.601013184s of 10.644791603s, submitted: 189
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 238 ms_handle_reset con 0x56404f5a2c00 session 0x56404e60ad20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 132694016 unmapped: 50937856 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 238 ms_handle_reset con 0x564050e54400 session 0x564050eba3c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 238 ms_handle_reset con 0x564050ff7c00 session 0x56404e90a1e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 238 ms_handle_reset con 0x564052b8d000 session 0x564050f19680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 238 ms_handle_reset con 0x564052b8dc00 session 0x564050efd680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 132734976 unmapped: 50896896 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1838217 data_alloc: 234881024 data_used: 15409152
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 239 ms_handle_reset con 0x56404e694800 session 0x5640508c8000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 239 ms_handle_reset con 0x564050e54400 session 0x564050f183c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 132751360 unmapped: 50880512 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 239 handle_osd_map epochs [239,240], i have 239, src has [1,240]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 240 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x2052bcb/0x21af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 240 ms_handle_reset con 0x564050ff7c00 session 0x564050f185a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 50855936 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 241 ms_handle_reset con 0x564052b8cc00 session 0x564050d5bc20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 241 ms_handle_reset con 0x56404e694800 session 0x5640508be960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 241 ms_handle_reset con 0x56404f5a2c00 session 0x56404e824960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 241 ms_handle_reset con 0x564050e54400 session 0x564050eba000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 50855936 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 241 ms_handle_reset con 0x564050ff7c00 session 0x564050aeba40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 132382720 unmapped: 51249152 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 241 ms_handle_reset con 0x564052b8dc00 session 0x564050e93c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 132390912 unmapped: 51240960 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1850117 data_alloc: 234881024 data_used: 15433728
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 241 handle_osd_map epochs [241,242], i have 241, src has [1,242]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 241 handle_osd_map epochs [242,242], i have 242, src has [1,242]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 242 ms_handle_reset con 0x56404e694800 session 0x5640508c92c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 242 ms_handle_reset con 0x56404f5a2c00 session 0x564050efd0e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 242 ms_handle_reset con 0x564050ff7c00 session 0x5640508bf2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 242 ms_handle_reset con 0x564050e54400 session 0x56404e3d0960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 242 ms_handle_reset con 0x564052b8d800 session 0x5640508c9680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 132440064 unmapped: 51191808 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 242 heartbeat osd_stat(store_statfs(0x4f90a3000/0x0/0x4ffc00000, data 0x2058564/0x21ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 243 ms_handle_reset con 0x56404e694800 session 0x564050efc000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 243 ms_handle_reset con 0x56404f5a2c00 session 0x56404e8b8d20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 243 ms_handle_reset con 0x564052b8d400 session 0x5640508be3c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 132456448 unmapped: 51175424 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 132456448 unmapped: 51175424 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 244 ms_handle_reset con 0x564050e54400 session 0x564050b7f2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 244 heartbeat osd_stat(store_statfs(0x4f909f000/0x0/0x4ffc00000, data 0x205a0fd/0x21bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 132464640 unmapped: 51167232 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.758306503s of 10.212130547s, submitted: 129
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 245 ms_handle_reset con 0x564050ff7c00 session 0x564050b7e1e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 245 ms_handle_reset con 0x56404e694800 session 0x564050f185a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 245 ms_handle_reset con 0x56404f5a2c00 session 0x5640508c9860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 132481024 unmapped: 51150848 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 245 heartbeat osd_stat(store_statfs(0x4f909a000/0x0/0x4ffc00000, data 0x205d4e1/0x21c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1867227 data_alloc: 234881024 data_used: 15446016
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 132481024 unmapped: 51150848 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 246 ms_handle_reset con 0x564050e54400 session 0x5640508c8000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 246 ms_handle_reset con 0x564052b8d800 session 0x56404e90b680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 246 heartbeat osd_stat(store_statfs(0x4f9098000/0x0/0x4ffc00000, data 0x205f0ef/0x21c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 131383296 unmapped: 52248576 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 246 handle_osd_map epochs [246,247], i have 246, src has [1,247]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 247 ms_handle_reset con 0x564052b8d400 session 0x56404e90a5a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 131383296 unmapped: 52248576 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 247 ms_handle_reset con 0x56404e694800 session 0x56404e3d1a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 247 ms_handle_reset con 0x56404f5a2c00 session 0x564050e92b40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 131383296 unmapped: 52248576 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 247 heartbeat osd_stat(store_statfs(0x4f9093000/0x0/0x4ffc00000, data 0x2060d26/0x21c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 247 ms_handle_reset con 0x564050e54400 session 0x56404f499c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 247 ms_handle_reset con 0x564052b8d800 session 0x56404f4f2000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 247 ms_handle_reset con 0x564052b8c400 session 0x564052c83e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 131407872 unmapped: 52224000 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1880781 data_alloc: 234881024 data_used: 15462400
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 247 handle_osd_map epochs [247,248], i have 247, src has [1,248]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 248 ms_handle_reset con 0x56404e694800 session 0x564050afa960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 131424256 unmapped: 52207616 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 248 handle_osd_map epochs [248,249], i have 248, src has [1,249]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 249 ms_handle_reset con 0x56404f5a2c00 session 0x5640511621e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 131432448 unmapped: 52199424 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 249 ms_handle_reset con 0x564050e54400 session 0x5640511632c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 131432448 unmapped: 52199424 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 250 ms_handle_reset con 0x564052b8d800 session 0x564051162b40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 250 heartbeat osd_stat(store_statfs(0x4f908b000/0x0/0x4ffc00000, data 0x20645fa/0x21d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 250 ms_handle_reset con 0x564052b8c000 session 0x564051162d20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 250 ms_handle_reset con 0x56404e694800 session 0x56404e5d4b40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 131432448 unmapped: 52199424 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 131432448 unmapped: 52199424 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1888900 data_alloc: 234881024 data_used: 15474688
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 250 handle_osd_map epochs [250,251], i have 250, src has [1,251]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.615159988s of 11.102748871s, submitted: 132
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 251 ms_handle_reset con 0x56404f5a2c00 session 0x56404e5d5a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 251 ms_handle_reset con 0x564050e54400 session 0x56404e5d54a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 251 ms_handle_reset con 0x564052b8d800 session 0x56404e5d5860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 251 ms_handle_reset con 0x5640512a2800 session 0x56404e5d5e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 251 ms_handle_reset con 0x564052ac6c00 session 0x56404f499680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 51027968 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 251 ms_handle_reset con 0x56404e694800 session 0x56404e5d50e0
Oct  1 13:15:20 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.19317 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 251 ms_handle_reset con 0x56404f5a2c00 session 0x564050f2d680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 132628480 unmapped: 51003392 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 251 ms_handle_reset con 0x564050e54400 session 0x564050f2d2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 251 heartbeat osd_stat(store_statfs(0x4f87ae000/0x0/0x4ffc00000, data 0x2944b88/0x2ab0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 132636672 unmapped: 50995200 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 252 ms_handle_reset con 0x564052b8d800 session 0x564050f2d0e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 252 ms_handle_reset con 0x56404e694800 session 0x564050f2c780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 252 heartbeat osd_stat(store_statfs(0x4f87aa000/0x0/0x4ffc00000, data 0x2946779/0x2ab3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 132636672 unmapped: 50995200 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 252 ms_handle_reset con 0x56404f5a2c00 session 0x564050f2da40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 253 ms_handle_reset con 0x564050e54400 session 0x56404e5da960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 253 ms_handle_reset con 0x564052ac6c00 session 0x56404e5db2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 132644864 unmapped: 50987008 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 253 ms_handle_reset con 0x56405090f400 session 0x564050402780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1965897 data_alloc: 234881024 data_used: 15482880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 253 handle_osd_map epochs [253,254], i have 253, src has [1,254]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 254 ms_handle_reset con 0x564052ac6800 session 0x564050685c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 254 ms_handle_reset con 0x56405090f400 session 0x5640511632c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 132997120 unmapped: 50634752 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 133005312 unmapped: 50626560 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 255 ms_handle_reset con 0x564050e54400 session 0x564050d5a1e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 46071808 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 255 heartbeat osd_stat(store_statfs(0x4f8778000/0x0/0x4ffc00000, data 0x2975940/0x2ae4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 137592832 unmapped: 46039040 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 255 ms_handle_reset con 0x564052ac6c00 session 0x56404e2f41e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 137609216 unmapped: 46022656 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2040547 data_alloc: 234881024 data_used: 24776704
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.246212006s of 10.735179901s, submitted: 87
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 256 heartbeat osd_stat(store_statfs(0x4f8777000/0x0/0x4ffc00000, data 0x29759a2/0x2ae5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 46260224 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 256 handle_osd_map epochs [256,257], i have 256, src has [1,257]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 257 ms_handle_reset con 0x56405104a800 session 0x56404e2f5e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 46260224 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 46260224 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 46260224 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 257 heartbeat osd_stat(store_statfs(0x4f8772000/0x0/0x4ffc00000, data 0x2978fba/0x2aeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 257 handle_osd_map epochs [258,258], i have 258, src has [1,258]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 46252032 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2049468 data_alloc: 234881024 data_used: 24780800
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 259 ms_handle_reset con 0x56405090f400 session 0x56404f71ef00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 46252032 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 259 heartbeat osd_stat(store_statfs(0x4f876b000/0x0/0x4ffc00000, data 0x297c5b6/0x2af1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 46243840 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 260 ms_handle_reset con 0x564050e54400 session 0x56404e8b8d20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 42647552 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 260 ms_handle_reset con 0x564052ac6800 session 0x56404e5d63c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 260 ms_handle_reset con 0x564052ac6c00 session 0x564050685860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 141697024 unmapped: 41934848 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 260 ms_handle_reset con 0x5640501f3800 session 0x56404e60ad20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 141697024 unmapped: 41934848 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2102485 data_alloc: 234881024 data_used: 25866240
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 260 ms_handle_reset con 0x56405090f400 session 0x56404e90a1e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 260 ms_handle_reset con 0x564050e54400 session 0x56404e824960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 43147264 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.833752632s of 11.245484352s, submitted: 105
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 43147264 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 260 ms_handle_reset con 0x564052ac6c00 session 0x564050684b40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 261 heartbeat osd_stat(store_statfs(0x4f8080000/0x0/0x4ffc00000, data 0x2c58133/0x2dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [0,0,0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 261 ms_handle_reset con 0x564052ac6000 session 0x56404f4925a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 261 ms_handle_reset con 0x564052ac6800 session 0x56404e90a960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 43139072 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 261 ms_handle_reset con 0x56405090f400 session 0x564050f18780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 262 ms_handle_reset con 0x564050e54400 session 0x564050f18f00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 262 ms_handle_reset con 0x564052ac6000 session 0x56404f590780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 43114496 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 262 ms_handle_reset con 0x564052ac6c00 session 0x56404e3d1a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 262 ms_handle_reset con 0x564052e23000 session 0x5640506683c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 43098112 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2100848 data_alloc: 234881024 data_used: 25882624
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 262 heartbeat osd_stat(store_statfs(0x4f807a000/0x0/0x4ffc00000, data 0x2c5b873/0x2dd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 262 ms_handle_reset con 0x56405090f400 session 0x564050afad20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 43098112 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 262 ms_handle_reset con 0x564052ac6000 session 0x56404e7b05a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 262 ms_handle_reset con 0x564052ac6c00 session 0x56404f590780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 141344768 unmapped: 42287104 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 263 ms_handle_reset con 0x564052e22c00 session 0x56404f4925a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 141352960 unmapped: 42278912 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 263 ms_handle_reset con 0x564052f5d400 session 0x56404f4a45a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 263 heartbeat osd_stat(store_statfs(0x4f807b000/0x0/0x4ffc00000, data 0x2c5b873/0x2dd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 263 handle_osd_map epochs [263,264], i have 263, src has [1,264]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 263 handle_osd_map epochs [264,264], i have 264, src has [1,264]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 264 ms_handle_reset con 0x56405090f400 session 0x564050afa3c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 264 ms_handle_reset con 0x564052f5d400 session 0x56404e7b1a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 264 ms_handle_reset con 0x564052ac6000 session 0x5640501594a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 141385728 unmapped: 42246144 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 264 ms_handle_reset con 0x564052ac6c00 session 0x5640508bed20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 265 ms_handle_reset con 0x564052f5c000 session 0x56404e8b8d20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 141393920 unmapped: 42237952 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 265 handle_osd_map epochs [265,266], i have 265, src has [1,266]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2116577 data_alloc: 234881024 data_used: 25894912
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 266 ms_handle_reset con 0x56405090f400 session 0x56404e5db2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 266 ms_handle_reset con 0x564052ac6000 session 0x564050f2d0e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 141418496 unmapped: 42213376 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 266 ms_handle_reset con 0x564052e22c00 session 0x564050f18000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 266 ms_handle_reset con 0x564052ac6c00 session 0x564050f2d680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 266 ms_handle_reset con 0x5640512a8c00 session 0x564050aebc20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 267 ms_handle_reset con 0x56405090f400 session 0x564051162d20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 141434880 unmapped: 42196992 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.874319077s of 10.528992653s, submitted: 117
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 267 ms_handle_reset con 0x564052ac6000 session 0x56404f71e000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 268 ms_handle_reset con 0x564052ac6c00 session 0x5640508bf2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 268 ms_handle_reset con 0x5640512a8c00 session 0x564052c82960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 268 ms_handle_reset con 0x564052f5d400 session 0x564050efd860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 268 ms_handle_reset con 0x564052f5d400 session 0x564050f185a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 142499840 unmapped: 41132032 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 269 ms_handle_reset con 0x56405090f400 session 0x5640508c9860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 269 ms_handle_reset con 0x564052ac6c00 session 0x564050e92b40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 269 ms_handle_reset con 0x5640512a8c00 session 0x56404f590f00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 142393344 unmapped: 41238528 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 269 heartbeat osd_stat(store_statfs(0x4f7665000/0x0/0x4ffc00000, data 0x3665f13/0x37e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 269 handle_osd_map epochs [269,270], i have 269, src has [1,270]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 270 ms_handle_reset con 0x564052e22c00 session 0x564050426780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 270 ms_handle_reset con 0x564052ac6000 session 0x564050668f00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 270 ms_handle_reset con 0x56405090f400 session 0x564050d5ab40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 142393344 unmapped: 41238528 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 270 ms_handle_reset con 0x5640512a8c00 session 0x564050efcb40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2228166 data_alloc: 234881024 data_used: 25903104
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 270 ms_handle_reset con 0x564052f5d400 session 0x564050f19860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 270 handle_osd_map epochs [270,271], i have 270, src has [1,271]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 270 handle_osd_map epochs [271,271], i have 271, src has [1,271]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 271 ms_handle_reset con 0x5640512a9000 session 0x56404e60ad20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 271 ms_handle_reset con 0x5640512a9400 session 0x564050efd2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 141967360 unmapped: 41664512 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 271 ms_handle_reset con 0x564052ac6c00 session 0x564050aeb4a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 271 ms_handle_reset con 0x5640512a8c00 session 0x564050158d20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 271 heartbeat osd_stat(store_statfs(0x4f7383000/0x0/0x4ffc00000, data 0x3943893/0x3aca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 272 ms_handle_reset con 0x564052ac6000 session 0x56404f591a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 141991936 unmapped: 41639936 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 272 ms_handle_reset con 0x564052b8dc00 session 0x5640504032c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 272 ms_handle_reset con 0x564052b8d000 session 0x56404f590f00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 272 heartbeat osd_stat(store_statfs(0x4f737e000/0x0/0x4ffc00000, data 0x39454aa/0x3ace000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 272 ms_handle_reset con 0x564052ac6000 session 0x564050f185a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 273 ms_handle_reset con 0x564052f5d400 session 0x564050afa000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 273 ms_handle_reset con 0x564052ac6c00 session 0x564050afb0e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 273 ms_handle_reset con 0x56405090f400 session 0x564052c823c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 141615104 unmapped: 42016768 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 273 handle_osd_map epochs [273,274], i have 273, src has [1,274]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 274 ms_handle_reset con 0x564050ff7c00 session 0x5640508c9a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 274 ms_handle_reset con 0x564052b8d400 session 0x564052c82960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 274 ms_handle_reset con 0x564052ac6000 session 0x56404f58f2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 141631488 unmapped: 42000384 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 274 ms_handle_reset con 0x564052b8d000 session 0x56404e5da960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 274 ms_handle_reset con 0x5640512a9400 session 0x5640508c9860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 274 ms_handle_reset con 0x5640512a8c00 session 0x564050e923c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 274 handle_osd_map epochs [274,275], i have 274, src has [1,275]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 275 ms_handle_reset con 0x564052ac6c00 session 0x56404f58f4a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 275 ms_handle_reset con 0x5640512a9400 session 0x56404e5d41e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 141631488 unmapped: 42000384 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2318283 data_alloc: 234881024 data_used: 25915392
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 275 ms_handle_reset con 0x564050ff7c00 session 0x564050f2d860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 142696448 unmapped: 40935424 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 276 ms_handle_reset con 0x564052ac6000 session 0x56404e5db2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 276 heartbeat osd_stat(store_statfs(0x4f7ae3000/0x0/0x4ffc00000, data 0x41fd807/0x4389000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 276 ms_handle_reset con 0x564052ac6000 session 0x564050e92780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 142712832 unmapped: 40919040 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 276 ms_handle_reset con 0x564050e54400 session 0x5640508bf4a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.312987328s of 10.079778671s, submitted: 193
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 277 ms_handle_reset con 0x564050ff7c00 session 0x56404f58eb40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 142737408 unmapped: 40894464 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 277 ms_handle_reset con 0x56404e694800 session 0x564051162b40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 277 ms_handle_reset con 0x56404f5a2c00 session 0x56404e2f43c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 277 ms_handle_reset con 0x56404f5a2c00 session 0x564050f19a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 45768704 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 277 ms_handle_reset con 0x564050e54400 session 0x564050afa3c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 277 ms_handle_reset con 0x56404e694800 session 0x564050ebab40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 277 handle_osd_map epochs [277,278], i have 277, src has [1,278]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 45768704 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2166500 data_alloc: 234881024 data_used: 15581184
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 278 heartbeat osd_stat(store_statfs(0x4f86bd000/0x0/0x4ffc00000, data 0x36200bc/0x37b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 278 ms_handle_reset con 0x564050ff7c00 session 0x56405017dc20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 278 ms_handle_reset con 0x564052ac6000 session 0x56404e825c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 45768704 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 278 ms_handle_reset con 0x56404e694800 session 0x56404f71eb40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 278 ms_handle_reset con 0x564050e54400 session 0x56404f4a45a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 137936896 unmapped: 45694976 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 279 ms_handle_reset con 0x564050ff7c00 session 0x56404e90a1e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 279 ms_handle_reset con 0x56404f5a2c00 session 0x564050668960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 137953280 unmapped: 45678592 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 279 ms_handle_reset con 0x5640512a9400 session 0x564050426f00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 280 heartbeat osd_stat(store_statfs(0x4f86b8000/0x0/0x4ffc00000, data 0x362373e/0x37b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 137961472 unmapped: 45670400 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 280 handle_osd_map epochs [280,281], i have 280, src has [1,281]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 280 handle_osd_map epochs [281,281], i have 281, src has [1,281]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 281 handle_osd_map epochs [281,282], i have 281, src has [1,282]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 281 handle_osd_map epochs [282,282], i have 282, src has [1,282]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 137961472 unmapped: 45670400 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 282 ms_handle_reset con 0x5640512a8c00 session 0x56404efb0f00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2174374 data_alloc: 234881024 data_used: 15593472
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 282 ms_handle_reset con 0x56404e694800 session 0x56404e5d6000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 282 ms_handle_reset con 0x56404f5a2c00 session 0x56404f71ef00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 282 ms_handle_reset con 0x564050e54400 session 0x564050efcf00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 282 ms_handle_reset con 0x564050ff7c00 session 0x56404f4930e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 138018816 unmapped: 45613056 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 282 ms_handle_reset con 0x56404e694800 session 0x56404e8b8960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 138035200 unmapped: 45596672 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 282 ms_handle_reset con 0x56404f5a2c00 session 0x564050f2c3c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 282 ms_handle_reset con 0x564050e54400 session 0x56404e90a780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 282 ms_handle_reset con 0x5640512a8c00 session 0x56404e2f5860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 138051584 unmapped: 45580288 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 282 heartbeat osd_stat(store_statfs(0x4f750f000/0x0/0x4ffc00000, data 0x3628abb/0x37be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 282 ms_handle_reset con 0x564052b8d400 session 0x564050aeb860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.426215172s of 11.044559479s, submitted: 183
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 282 ms_handle_reset con 0x56404e694800 session 0x56404e824b40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 138051584 unmapped: 45580288 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140582912 unmapped: 43048960 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2234741 data_alloc: 234881024 data_used: 23584768
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140591104 unmapped: 43040768 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140591104 unmapped: 43040768 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 283 heartbeat osd_stat(store_statfs(0x4f750d000/0x0/0x4ffc00000, data 0x362a55b/0x37c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140591104 unmapped: 43040768 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 283 ms_handle_reset con 0x56404f5a2c00 session 0x564050426f00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140591104 unmapped: 43040768 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140591104 unmapped: 43040768 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2240062 data_alloc: 234881024 data_used: 23597056
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 284 ms_handle_reset con 0x564050e54400 session 0x564050668960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 284 heartbeat osd_stat(store_statfs(0x4f750d000/0x0/0x4ffc00000, data 0x362a5bd/0x37c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 284 ms_handle_reset con 0x564052f5d400 session 0x5640511632c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140623872 unmapped: 43008000 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 284 handle_osd_map epochs [284,285], i have 284, src has [1,285]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 284 handle_osd_map epochs [285,285], i have 285, src has [1,285]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 285 ms_handle_reset con 0x564050e5b800 session 0x564050ebb680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 285 ms_handle_reset con 0x56404e857400 session 0x564050aebc20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140640256 unmapped: 42991616 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 286 ms_handle_reset con 0x56404e616800 session 0x564050d5ba40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 286 ms_handle_reset con 0x5640512a8c00 session 0x56404e90a1e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140664832 unmapped: 42967040 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 287 ms_handle_reset con 0x564050e5a000 session 0x564050afa3c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140656640 unmapped: 42975232 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 287 handle_osd_map epochs [287,288], i have 287, src has [1,288]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.501262665s of 10.727403641s, submitted: 49
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 288 ms_handle_reset con 0x56404e695000 session 0x5640508bfc20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 288 ms_handle_reset con 0x56404e695400 session 0x564050f2d860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 38010880 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2381232 data_alloc: 234881024 data_used: 24403968
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 289 ms_handle_reset con 0x56404e616800 session 0x564050ebb860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 289 heartbeat osd_stat(store_statfs(0x4f6ea2000/0x0/0x4ffc00000, data 0x442aa85/0x3e2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 145137664 unmapped: 38494208 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 290 ms_handle_reset con 0x56404e857400 session 0x56404f58f4a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 290 ms_handle_reset con 0x564050e5a000 session 0x564052c823c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 290 ms_handle_reset con 0x5640512a8c00 session 0x56404f4a34a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146309120 unmapped: 37322752 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 290 ms_handle_reset con 0x56404e616800 session 0x56404e90ab40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 290 heartbeat osd_stat(store_statfs(0x4f6e8a000/0x0/0x4ffc00000, data 0x4553610/0x3e41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146333696 unmapped: 37298176 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 290 ms_handle_reset con 0x56404e695400 session 0x564050afbe00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 290 ms_handle_reset con 0x56404e857400 session 0x5640508bfe00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146333696 unmapped: 37298176 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146333696 unmapped: 37298176 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2403883 data_alloc: 234881024 data_used: 24715264
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146333696 unmapped: 37298176 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 290 heartbeat osd_stat(store_statfs(0x4f6e8a000/0x0/0x4ffc00000, data 0x4553600/0x3e40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 290 heartbeat osd_stat(store_statfs(0x4f6e8a000/0x0/0x4ffc00000, data 0x4553600/0x3e40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146333696 unmapped: 37298176 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 290 ms_handle_reset con 0x564050e5a000 session 0x56404e2f4000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 290 ms_handle_reset con 0x564052ac6c00 session 0x564050d5a1e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 290 ms_handle_reset con 0x564052b8d000 session 0x56404e90a960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146366464 unmapped: 37265408 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 290 ms_handle_reset con 0x56404e616800 session 0x56404e8b92c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 290 heartbeat osd_stat(store_statfs(0x4f6e8e000/0x0/0x4ffc00000, data 0x4553652/0x3e40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146366464 unmapped: 37265408 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.819549561s of 10.590759277s, submitted: 178
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 290 ms_handle_reset con 0x56404e695400 session 0x56404e5d5a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146391040 unmapped: 37240832 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2406353 data_alloc: 234881024 data_used: 24735744
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 291 ms_handle_reset con 0x56404e694800 session 0x5640508be000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 292 ms_handle_reset con 0x564050e5a000 session 0x564050d5a960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146391040 unmapped: 37240832 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 292 ms_handle_reset con 0x56404e857400 session 0x5640504032c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 292 heartbeat osd_stat(store_statfs(0x4f6e86000/0x0/0x4ffc00000, data 0x4556dbc/0x3e46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146391040 unmapped: 37240832 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 292 ms_handle_reset con 0x56404e694800 session 0x56404e60a3c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 293 ms_handle_reset con 0x564050e5a000 session 0x56404f590f00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 293 ms_handle_reset con 0x564052b8d000 session 0x564050e92000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 40222720 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 293 ms_handle_reset con 0x56404e616800 session 0x564050402780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 293 ms_handle_reset con 0x564050e54400 session 0x56404e8b9860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 293 ms_handle_reset con 0x56404eef6c00 session 0x564050f2cf00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 293 ms_handle_reset con 0x56404e616800 session 0x56404f499c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 294 ms_handle_reset con 0x564050e55c00 session 0x56404f5912c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 294 ms_handle_reset con 0x56404e695400 session 0x56404e90a960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 294 heartbeat osd_stat(store_statfs(0x4f8a7b000/0x0/0x4ffc00000, data 0x20b051e/0x2252000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 42778624 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 295 ms_handle_reset con 0x56404e616800 session 0x564050efc780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140861440 unmapped: 42770432 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2081318 data_alloc: 234881024 data_used: 15650816
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140861440 unmapped: 42770432 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 295 heartbeat osd_stat(store_statfs(0x4f8a73000/0x0/0x4ffc00000, data 0x20b3c8e/0x2257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 295 ms_handle_reset con 0x56404eef6c00 session 0x564050f2c000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140861440 unmapped: 42770432 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 295 heartbeat osd_stat(store_statfs(0x4f8a73000/0x0/0x4ffc00000, data 0x20b3c8e/0x2257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140861440 unmapped: 42770432 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 296 ms_handle_reset con 0x564050e55c00 session 0x56404e8245a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 297 ms_handle_reset con 0x564050e54400 session 0x564050668f00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140902400 unmapped: 42729472 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 297 heartbeat osd_stat(store_statfs(0x4f8a6e000/0x0/0x4ffc00000, data 0x20b74fc/0x225f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.534094810s of 10.292316437s, submitted: 119
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140902400 unmapped: 42729472 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2092864 data_alloc: 234881024 data_used: 15667200
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 298 heartbeat osd_stat(store_statfs(0x4f8a6a000/0x0/0x4ffc00000, data 0x20b8f71/0x2261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140902400 unmapped: 42729472 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 141967360 unmapped: 41664512 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140918784 unmapped: 42713088 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140918784 unmapped: 42713088 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 299 ms_handle_reset con 0x56404e694800 session 0x56405017c000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140918784 unmapped: 42713088 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2092638 data_alloc: 234881024 data_used: 15667200
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 299 heartbeat osd_stat(store_statfs(0x4f8a6a000/0x0/0x4ffc00000, data 0x20bac96/0x2263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140918784 unmapped: 42713088 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140918784 unmapped: 42713088 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 300 ms_handle_reset con 0x56404e616800 session 0x56404f591c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 300 heartbeat osd_stat(store_statfs(0x4f8a66000/0x0/0x4ffc00000, data 0x20bc741/0x2267000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140918784 unmapped: 42713088 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 300 ms_handle_reset con 0x56404eef6c00 session 0x56404e2f5c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 300 ms_handle_reset con 0x564050e54400 session 0x564050159860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140918784 unmapped: 42713088 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 300 ms_handle_reset con 0x56404e857400 session 0x564052c830e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140926976 unmapped: 42704896 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2101804 data_alloc: 234881024 data_used: 15671296
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.756897926s of 10.095726967s, submitted: 65
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 301 heartbeat osd_stat(store_statfs(0x4f8a61000/0x0/0x4ffc00000, data 0x20be384/0x226c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 301 ms_handle_reset con 0x564052b8d000 session 0x56404e2f4000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 301 ms_handle_reset con 0x564050ff7400 session 0x564050f194a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140918784 unmapped: 42713088 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 302 ms_handle_reset con 0x56404e857400 session 0x56404e825c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 302 ms_handle_reset con 0x564050ff6400 session 0x5640506683c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 302 ms_handle_reset con 0x564050e55c00 session 0x564050669a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140935168 unmapped: 42696704 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 303 ms_handle_reset con 0x56404e616800 session 0x5640511632c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 303 ms_handle_reset con 0x564050e5a000 session 0x564052c83860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140926976 unmapped: 42704896 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 303 heartbeat osd_stat(store_statfs(0x4f8a57000/0x0/0x4ffc00000, data 0x20c1c7a/0x2274000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 303 ms_handle_reset con 0x56404e857400 session 0x564050b7f4a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140926976 unmapped: 42704896 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 304 ms_handle_reset con 0x564050e55c00 session 0x564050d5a5a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140951552 unmapped: 42680320 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2121823 data_alloc: 234881024 data_used: 15704064
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 305 ms_handle_reset con 0x564050ff7400 session 0x56404e8a7a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 42672128 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 306 ms_handle_reset con 0x564050ff6400 session 0x56404f591680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 306 ms_handle_reset con 0x56404e857400 session 0x56404f591a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 306 ms_handle_reset con 0x564050e55c00 session 0x5640508c8000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 306 ms_handle_reset con 0x564050e5a000 session 0x56404e5db860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 306 ms_handle_reset con 0x564050ff7400 session 0x564050ebb4a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 306 ms_handle_reset con 0x56404eef6c00 session 0x564050f2d0e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 306 ms_handle_reset con 0x56404e857400 session 0x56405017da40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 42614784 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 306 ms_handle_reset con 0x564050e55c00 session 0x564050e92960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 306 ms_handle_reset con 0x564050e5a000 session 0x56404f493c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 306 heartbeat osd_stat(store_statfs(0x4f7eb1000/0x0/0x4ffc00000, data 0x2c68e45/0x2e1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 42606592 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 307 ms_handle_reset con 0x564050ff7400 session 0x564050d5af00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 42606592 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 308 ms_handle_reset con 0x564050e54400 session 0x56404e8a70e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 308 ms_handle_reset con 0x56404e857400 session 0x564050f2c1e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 42598400 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2223423 data_alloc: 234881024 data_used: 15704064
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.873045921s of 10.135804176s, submitted: 111
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 309 ms_handle_reset con 0x564050e5a000 session 0x564050f2c3c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 309 ms_handle_reset con 0x564050e55c00 session 0x564050f2da40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 42590208 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 309 handle_osd_map epochs [309,310], i have 309, src has [1,310]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 310 ms_handle_reset con 0x564050ff7400 session 0x56405017c000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 310 ms_handle_reset con 0x564050e54400 session 0x56404e2f4000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 42573824 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 310 heartbeat osd_stat(store_statfs(0x4f7ea6000/0x0/0x4ffc00000, data 0x2c6e2fe/0x2e25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 310 ms_handle_reset con 0x56404e857400 session 0x564050efc780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 42573824 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 311 ms_handle_reset con 0x564050e54400 session 0x56404f5912c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 311 ms_handle_reset con 0x564050ff7000 session 0x56404e5d5a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 311 ms_handle_reset con 0x564050ff7400 session 0x56404e60a3c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 141271040 unmapped: 42360832 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 312 ms_handle_reset con 0x56404f476c00 session 0x564050afbe00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146497536 unmapped: 37134336 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2321040 data_alloc: 251658240 data_used: 27361280
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 313 ms_handle_reset con 0x564050ff6c00 session 0x5640508c8780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 313 ms_handle_reset con 0x564050e54400 session 0x564050afa000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 313 ms_handle_reset con 0x56404e857400 session 0x564050685e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146513920 unmapped: 37117952 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 313 ms_handle_reset con 0x564050e62c00 session 0x56404e90a000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 313 ms_handle_reset con 0x564050ff7400 session 0x564050ebb0e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146513920 unmapped: 37117952 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 313 ms_handle_reset con 0x564050e54400 session 0x564050f2dc20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 313 ms_handle_reset con 0x564050e62c00 session 0x56404f4f3860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 314 ms_handle_reset con 0x56404e857400 session 0x5640508bed20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 314 ms_handle_reset con 0x564050e63c00 session 0x56404e824b40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 314 ms_handle_reset con 0x56404f006800 session 0x56404f498960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 314 ms_handle_reset con 0x564050ff7000 session 0x564050afb680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 314 heartbeat osd_stat(store_statfs(0x4f7e99000/0x0/0x4ffc00000, data 0x2c75831/0x2e34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146522112 unmapped: 37109760 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 314 ms_handle_reset con 0x56404e857400 session 0x564050427c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 314 ms_handle_reset con 0x564050e63800 session 0x56404e3d1860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 315 ms_handle_reset con 0x564050ff6c00 session 0x5640508c90e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 315 ms_handle_reset con 0x56404f006800 session 0x56404f58fa40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146563072 unmapped: 37068800 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146563072 unmapped: 37068800 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2334378 data_alloc: 251658240 data_used: 27365376
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.863903046s of 10.400607109s, submitted: 108
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 316 ms_handle_reset con 0x56404e857400 session 0x56404f4934a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146579456 unmapped: 37052416 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 316 ms_handle_reset con 0x564050e63800 session 0x5640508bed20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 317 ms_handle_reset con 0x564050ff6c00 session 0x56404f4f3860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146620416 unmapped: 37011456 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 318 ms_handle_reset con 0x564050ff7000 session 0x564050afa000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 318 ms_handle_reset con 0x564050e54400 session 0x564050f2c1e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146628608 unmapped: 37003264 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 318 heartbeat osd_stat(store_statfs(0x4f7e8a000/0x0/0x4ffc00000, data 0x2c7e4bc/0x2e42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146628608 unmapped: 37003264 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 318 heartbeat osd_stat(store_statfs(0x4f7e8a000/0x0/0x4ffc00000, data 0x2c7e4bc/0x2e42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 318 ms_handle_reset con 0x564050e54400 session 0x564050ebb4a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 318 handle_osd_map epochs [318,319], i have 318, src has [1,319]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 319 ms_handle_reset con 0x56404f5a0800 session 0x564050668d20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 319 handle_osd_map epochs [319,320], i have 319, src has [1,320]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 320 ms_handle_reset con 0x56404e857400 session 0x564050f2d0e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152936448 unmapped: 30695424 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2466586 data_alloc: 251658240 data_used: 28262400
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 320 heartbeat osd_stat(store_statfs(0x4f6f77000/0x0/0x4ffc00000, data 0x3b88d12/0x3d4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 321 ms_handle_reset con 0x56404f5a1800 session 0x56404e5db860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153821184 unmapped: 29810688 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 322 ms_handle_reset con 0x56405093e800 session 0x5640508c8000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154370048 unmapped: 29261824 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 322 ms_handle_reset con 0x56404e857400 session 0x56404f591680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 322 heartbeat osd_stat(store_statfs(0x4f6f6b000/0x0/0x4ffc00000, data 0x3b9b60c/0x3d63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 323 ms_handle_reset con 0x56404f5a0800 session 0x5640501585a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154099712 unmapped: 29532160 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 323 heartbeat osd_stat(store_statfs(0x4f6f67000/0x0/0x4ffc00000, data 0x3b9d229/0x3d66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 324 ms_handle_reset con 0x56404f5a1800 session 0x56404f4f32c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154124288 unmapped: 29507584 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 324 handle_osd_map epochs [324,325], i have 324, src has [1,325]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154132480 unmapped: 29499392 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2487692 data_alloc: 251658240 data_used: 28880896
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 325 handle_osd_map epochs [325,326], i have 325, src has [1,326]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.688247681s of 10.444023132s, submitted: 239
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154132480 unmapped: 29499392 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 326 ms_handle_reset con 0x564050e54400 session 0x56404e3d03c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154148864 unmapped: 29483008 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 327 ms_handle_reset con 0x56405093ec00 session 0x564050afbc20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 327 ms_handle_reset con 0x56404e857400 session 0x564050aea780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 327 heartbeat osd_stat(store_statfs(0x4f6f5b000/0x0/0x4ffc00000, data 0x3ba4119/0x3d71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154165248 unmapped: 29466624 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 327 ms_handle_reset con 0x56404f5a0800 session 0x5640508c8780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154165248 unmapped: 29466624 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 327 handle_osd_map epochs [327,328], i have 327, src has [1,328]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154165248 unmapped: 29466624 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2495502 data_alloc: 251658240 data_used: 28889088
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154173440 unmapped: 29458432 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 328 handle_osd_map epochs [329,329], i have 329, src has [1,329]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 329 heartbeat osd_stat(store_statfs(0x4f6f5a000/0x0/0x4ffc00000, data 0x3ba575f/0x3d73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [0,0,0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154198016 unmapped: 29433856 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 329 ms_handle_reset con 0x56405093ec00 session 0x56404f498960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154198016 unmapped: 29433856 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 329 handle_osd_map epochs [331,331], i have 329, src has [1,331]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 329 handle_osd_map epochs [330,331], i have 329, src has [1,331]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 331 ms_handle_reset con 0x56405093f000 session 0x564050685c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154255360 unmapped: 29376512 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 331 heartbeat osd_stat(store_statfs(0x4f6f58000/0x0/0x4ffc00000, data 0x3ba72ea/0x3d75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 331 ms_handle_reset con 0x56404f5a1800 session 0x56404e90a000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 331 ms_handle_reset con 0x564050e54400 session 0x564051163a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 332 ms_handle_reset con 0x56404e857400 session 0x564050e932c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 332 ms_handle_reset con 0x56404f5a0800 session 0x56404f4f2000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154279936 unmapped: 29351936 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2507164 data_alloc: 251658240 data_used: 28889088
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154296320 unmapped: 29335552 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154312704 unmapped: 29319168 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 333 handle_osd_map epochs [334,334], i have 333, src has [1,334]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.467901230s of 11.562768936s, submitted: 126
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 334 ms_handle_reset con 0x56405093f000 session 0x564050efd0e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154353664 unmapped: 29278208 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 334 handle_osd_map epochs [334,335], i have 334, src has [1,335]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 335 handle_osd_map epochs [335,335], i have 335, src has [1,335]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 335 ms_handle_reset con 0x56405093ec00 session 0x564050efdc20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 335 ms_handle_reset con 0x56405093f000 session 0x564050ebb680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 335 ms_handle_reset con 0x56404e857400 session 0x564050938960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154361856 unmapped: 29270016 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 335 heartbeat osd_stat(store_statfs(0x4f6b33000/0x0/0x4ffc00000, data 0x3bb205b/0x3d89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 335 handle_osd_map epochs [336,336], i have 335, src has [1,336]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 336 ms_handle_reset con 0x56404f5a0800 session 0x564052c82b40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 336 ms_handle_reset con 0x56405183b800 session 0x56404e3d0960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154370048 unmapped: 29261824 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2521772 data_alloc: 251658240 data_used: 28897280
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154386432 unmapped: 29245440 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 337 handle_osd_map epochs [337,338], i have 337, src has [1,338]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 338 ms_handle_reset con 0x564050e54400 session 0x5640506843c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154411008 unmapped: 29220864 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 338 ms_handle_reset con 0x56404e857400 session 0x56405017dc20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 338 heartbeat osd_stat(store_statfs(0x4f6b2b000/0x0/0x4ffc00000, data 0x3bb7292/0x3d91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154427392 unmapped: 29204480 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154427392 unmapped: 29204480 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 338 heartbeat osd_stat(store_statfs(0x4f6b2a000/0x0/0x4ffc00000, data 0x3bb72a2/0x3d92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154427392 unmapped: 29204480 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2528626 data_alloc: 251658240 data_used: 28913664
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 338 handle_osd_map epochs [339,339], i have 338, src has [1,339]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154435584 unmapped: 29196288 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 339 heartbeat osd_stat(store_statfs(0x4f6b26000/0x0/0x4ffc00000, data 0x3bb8d21/0x3d95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154468352 unmapped: 29163520 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 339 ms_handle_reset con 0x56405183b800 session 0x56404f591860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 339 handle_osd_map epochs [340,340], i have 339, src has [1,340]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.895841599s of 10.137880325s, submitted: 80
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 340 ms_handle_reset con 0x56405183b000 session 0x564050f2c3c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154509312 unmapped: 29122560 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 340 heartbeat osd_stat(store_statfs(0x4f6b27000/0x0/0x4ffc00000, data 0x3bb8d31/0x3d96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 340 handle_osd_map epochs [340,341], i have 340, src has [1,341]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 340 handle_osd_map epochs [341,341], i have 341, src has [1,341]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 341 ms_handle_reset con 0x56405183a800 session 0x564050eba000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 341 ms_handle_reset con 0x56405183ac00 session 0x564052c832c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 341 ms_handle_reset con 0x56405183bc00 session 0x56404e5d63c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 341 ms_handle_reset con 0x56405093f000 session 0x564050e93e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154533888 unmapped: 29097984 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 341 heartbeat osd_stat(store_statfs(0x4f6b1f000/0x0/0x4ffc00000, data 0x3bbc5b5/0x3d9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 341 handle_osd_map epochs [341,342], i have 341, src has [1,342]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 342 ms_handle_reset con 0x56404e857400 session 0x56404e2f5860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 342 handle_osd_map epochs [342,343], i have 342, src has [1,343]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 343 ms_handle_reset con 0x56405183a800 session 0x564050d5af00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154558464 unmapped: 29073408 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 rsyslogd[1011]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 rsyslogd[1011]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2547462 data_alloc: 251658240 data_used: 28913664
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154566656 unmapped: 29065216 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 343 ms_handle_reset con 0x56404f5a0800 session 0x564050aeb2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 343 ms_handle_reset con 0x56404e857400 session 0x564050f2dc20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 343 ms_handle_reset con 0x56405183a800 session 0x56404f4f2960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 343 ms_handle_reset con 0x56405093f000 session 0x564050f19860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 343 ms_handle_reset con 0x56405183bc00 session 0x564052c82b40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 344 ms_handle_reset con 0x56405183ac00 session 0x564050aeab40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154697728 unmapped: 28934144 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 344 ms_handle_reset con 0x564050e55c00 session 0x56404f4f23c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 344 ms_handle_reset con 0x564050e5a000 session 0x56404f58f860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 344 ms_handle_reset con 0x56404e857400 session 0x5640508c8000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 344 ms_handle_reset con 0x56405093f000 session 0x564050ebb4a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146399232 unmapped: 37232640 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 344 ms_handle_reset con 0x56405183a800 session 0x56404f4f3860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 344 handle_osd_map epochs [344,345], i have 344, src has [1,345]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 345 ms_handle_reset con 0x56404e857400 session 0x5640508bed20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146399232 unmapped: 37232640 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 345 handle_osd_map epochs [345,346], i have 345, src has [1,346]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 345 handle_osd_map epochs [346,346], i have 346, src has [1,346]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146440192 unmapped: 37191680 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 346 heartbeat osd_stat(store_statfs(0x4f85cf000/0x0/0x4ffc00000, data 0x210b3cf/0x22ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2266160 data_alloc: 234881024 data_used: 15781888
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 346 ms_handle_reset con 0x564050e55c00 session 0x56404e2f5e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 346 ms_handle_reset con 0x564050e5a000 session 0x564050b7f4a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 346 ms_handle_reset con 0x56405093f000 session 0x56404e3d1860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 37257216 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 346 heartbeat osd_stat(store_statfs(0x4f85ce000/0x0/0x4ffc00000, data 0x210cc6d/0x22f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 37257216 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 347 ms_handle_reset con 0x56405183bc00 session 0x564050eba780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.162285805s of 10.083068848s, submitted: 159
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 347 ms_handle_reset con 0x56404e857400 session 0x56404f4a2780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 37257216 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 37257216 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 37257216 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2266780 data_alloc: 234881024 data_used: 15794176
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 37257216 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 348 ms_handle_reset con 0x56405093f000 session 0x564052c821e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 348 ms_handle_reset con 0x564050e55c00 session 0x56404efb1c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 348 ms_handle_reset con 0x564050e5a000 session 0x56404e90a1e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 348 heartbeat osd_stat(store_statfs(0x4f85c6000/0x0/0x4ffc00000, data 0x21103d7/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146407424 unmapped: 37224448 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 348 ms_handle_reset con 0x56405183b000 session 0x56404e5d72c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146415616 unmapped: 37216256 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 348 ms_handle_reset con 0x56404e857400 session 0x564050b7e1e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146432000 unmapped: 37199872 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 348 ms_handle_reset con 0x564050e55c00 session 0x56404e5d5a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 348 ms_handle_reset con 0x56405093f000 session 0x5640511630e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146481152 unmapped: 37150720 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2281746 data_alloc: 234881024 data_used: 15794176
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 349 ms_handle_reset con 0x56405183b800 session 0x564050afb2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146481152 unmapped: 37150720 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 350 ms_handle_reset con 0x56405183a400 session 0x564050efd4a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 350 ms_handle_reset con 0x56405183a400 session 0x564050b7ef00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146505728 unmapped: 37126144 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 350 ms_handle_reset con 0x56404e857400 session 0x564050afab40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 350 ms_handle_reset con 0x56404e73cc00 session 0x56404e8b9a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 350 handle_osd_map epochs [351,351], i have 350, src has [1,351]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 351 ms_handle_reset con 0x56405093f000 session 0x56404f4a5e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 351 ms_handle_reset con 0x564050e5a000 session 0x56404e2f5e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 351 heartbeat osd_stat(store_statfs(0x4f85bc000/0x0/0x4ffc00000, data 0x2113a8b/0x2300000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146513920 unmapped: 37117952 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 351 ms_handle_reset con 0x56404e73cc00 session 0x564050ebb4a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.431301117s of 10.783758163s, submitted: 92
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 351 heartbeat osd_stat(store_statfs(0x4f85ba000/0x0/0x4ffc00000, data 0x2115a9b/0x2303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 351 ms_handle_reset con 0x56404e857400 session 0x5640508c8000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146513920 unmapped: 37117952 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 351 handle_osd_map epochs [351,352], i have 351, src has [1,352]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 352 ms_handle_reset con 0x56405093f000 session 0x564050aeab40
Oct  1 13:15:20 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  1 13:15:20 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2815210254' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 352 ms_handle_reset con 0x564050e5a000 session 0x564050f2dc20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146513920 unmapped: 37117952 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2294113 data_alloc: 234881024 data_used: 15818752
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 352 handle_osd_map epochs [352,353], i have 352, src has [1,353]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 353 ms_handle_reset con 0x56405183a400 session 0x564050aeb2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 353 ms_handle_reset con 0x56404e73cc00 session 0x56404e2f5860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 145915904 unmapped: 37715968 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 353 heartbeat osd_stat(store_statfs(0x4f85b9000/0x0/0x4ffc00000, data 0x2118c80/0x2304000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 145915904 unmapped: 37715968 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 353 ms_handle_reset con 0x56404e857400 session 0x564052c832c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 353 ms_handle_reset con 0x56405093f000 session 0x56404f591860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 353 heartbeat osd_stat(store_statfs(0x4f85b9000/0x0/0x4ffc00000, data 0x2118c80/0x2304000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 145989632 unmapped: 37642240 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 145989632 unmapped: 37642240 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 353 ms_handle_reset con 0x564050e5a000 session 0x56404e824960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 353 ms_handle_reset con 0x564050e55c00 session 0x5640508c94a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 145989632 unmapped: 37642240 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2300307 data_alloc: 234881024 data_used: 15814656
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 353 handle_osd_map epochs [354,354], i have 353, src has [1,354]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 145989632 unmapped: 37642240 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 354 ms_handle_reset con 0x56404e73cc00 session 0x564050aeb860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 354 ms_handle_reset con 0x56404e857400 session 0x5640508bfe00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 354 ms_handle_reset con 0x56405093f000 session 0x56404e8b94a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 354 ms_handle_reset con 0x56405183b800 session 0x564050b7e000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 354 ms_handle_reset con 0x56404e73d400 session 0x56404e3d1a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146022400 unmapped: 37609472 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 354 ms_handle_reset con 0x564050e5a000 session 0x564050159860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 354 ms_handle_reset con 0x56404e73d400 session 0x56404f5914a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 354 ms_handle_reset con 0x56404e73cc00 session 0x564050b7f2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 354 heartbeat osd_stat(store_statfs(0x4f85b4000/0x0/0x4ffc00000, data 0x211a7ee/0x230a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 354 ms_handle_reset con 0x56404e857400 session 0x56404f4f30e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146030592 unmapped: 37601280 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.616359711s of 10.001767159s, submitted: 112
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 354 ms_handle_reset con 0x56405093f000 session 0x56404e8a63c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 354 ms_handle_reset con 0x56404e73cc00 session 0x564050d5a3c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 354 ms_handle_reset con 0x56404e857400 session 0x56404e60af00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146046976 unmapped: 37584896 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 354 ms_handle_reset con 0x56404e73d400 session 0x564050f2c780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 354 ms_handle_reset con 0x564050e5a000 session 0x564050afb860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 354 heartbeat osd_stat(store_statfs(0x4f85b4000/0x0/0x4ffc00000, data 0x211a7ee/0x230a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146046976 unmapped: 37584896 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2310986 data_alloc: 234881024 data_used: 15826944
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 354 ms_handle_reset con 0x56405183b800 session 0x564050f2d860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 354 heartbeat osd_stat(store_statfs(0x4f85b5000/0x0/0x4ffc00000, data 0x211a78c/0x2309000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146055168 unmapped: 37576704 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 354 ms_handle_reset con 0x56404e73cc00 session 0x56404f489680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 354 ms_handle_reset con 0x56404e73d400 session 0x564051163680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146071552 unmapped: 37560320 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146071552 unmapped: 37560320 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 355 ms_handle_reset con 0x56404e857400 session 0x5640501594a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 355 heartbeat osd_stat(store_statfs(0x4f85b1000/0x0/0x4ffc00000, data 0x211c309/0x230c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 355 ms_handle_reset con 0x56405090ec00 session 0x564050efde00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 355 heartbeat osd_stat(store_statfs(0x4f85b1000/0x0/0x4ffc00000, data 0x211c309/0x230c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146112512 unmapped: 37519360 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 355 ms_handle_reset con 0x56405090f000 session 0x564050f18960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 355 ms_handle_reset con 0x5640501a3400 session 0x5640508bf680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146128896 unmapped: 37502976 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2316874 data_alloc: 234881024 data_used: 15843328
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 355 ms_handle_reset con 0x564050e5a000 session 0x564050e923c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146153472 unmapped: 37478400 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 355 handle_osd_map epochs [355,356], i have 355, src has [1,356]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 356 ms_handle_reset con 0x56404e73cc00 session 0x564050685680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 356 ms_handle_reset con 0x56404e73d400 session 0x56404f58e000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146161664 unmapped: 37470208 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 356 ms_handle_reset con 0x56404e857400 session 0x564050d5a5a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 356 ms_handle_reset con 0x56405090ec00 session 0x564050159860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146161664 unmapped: 37470208 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.199916363s of 10.417717934s, submitted: 137
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 356 ms_handle_reset con 0x56404e73cc00 session 0x56404f58f4a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146178048 unmapped: 37453824 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 356 ms_handle_reset con 0x56404e73d400 session 0x56404e5d6000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 356 heartbeat osd_stat(store_statfs(0x4f85b0000/0x0/0x4ffc00000, data 0x211decb/0x230e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146178048 unmapped: 37453824 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2317542 data_alloc: 234881024 data_used: 15847424
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146178048 unmapped: 37453824 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146202624 unmapped: 37429248 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 356 heartbeat osd_stat(store_statfs(0x4f85b0000/0x0/0x4ffc00000, data 0x211de79/0x230e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 356 handle_osd_map epochs [357,357], i have 356, src has [1,357]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 357 ms_handle_reset con 0x56404e857400 session 0x56404e3d01e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146219008 unmapped: 37412864 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 357 ms_handle_reset con 0x5640501a3400 session 0x564052c825a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146219008 unmapped: 37412864 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 357 ms_handle_reset con 0x56404e73cc00 session 0x564050ebb680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 357 ms_handle_reset con 0x56404e857400 session 0x564050b7e000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 358 ms_handle_reset con 0x56404e73d400 session 0x56404e8b8b40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 358 ms_handle_reset con 0x564050e5a000 session 0x56404e90a000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 358 ms_handle_reset con 0x56405090ec00 session 0x564050aeb860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146259968 unmapped: 37371904 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 358 ms_handle_reset con 0x56405090ec00 session 0x564050d5ad20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2333267 data_alloc: 234881024 data_used: 15867904
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 358 ms_handle_reset con 0x56404e73cc00 session 0x564050684000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 358 ms_handle_reset con 0x56404e73d400 session 0x564051162000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146292736 unmapped: 37339136 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 358 heartbeat osd_stat(store_statfs(0x4f85a8000/0x0/0x4ffc00000, data 0x2121875/0x2315000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146292736 unmapped: 37339136 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 358 heartbeat osd_stat(store_statfs(0x4f85a8000/0x0/0x4ffc00000, data 0x2121875/0x2315000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 358 handle_osd_map epochs [358,359], i have 358, src has [1,359]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 358 handle_osd_map epochs [359,359], i have 359, src has [1,359]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146309120 unmapped: 37322752 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 359 ms_handle_reset con 0x56404e857400 session 0x564050afba40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.055034637s of 10.044010162s, submitted: 137
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 359 handle_osd_map epochs [359,360], i have 359, src has [1,360]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 360 handle_osd_map epochs [360,360], i have 360, src has [1,360]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 360 ms_handle_reset con 0x564050e5a000 session 0x564050668d20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146317312 unmapped: 37314560 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 360 heartbeat osd_stat(store_statfs(0x4f85a2000/0x0/0x4ffc00000, data 0x2124bf2/0x231a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146317312 unmapped: 37314560 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2340656 data_alloc: 234881024 data_used: 15867904
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 360 ms_handle_reset con 0x56404e73cc00 session 0x564050d5be00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 360 ms_handle_reset con 0x56404e857400 session 0x564050aeb2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 360 ms_handle_reset con 0x56404e73d400 session 0x56404e8a7a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146317312 unmapped: 37314560 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 360 ms_handle_reset con 0x56405090ec00 session 0x5640511625a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 360 heartbeat osd_stat(store_statfs(0x4f85a2000/0x0/0x4ffc00000, data 0x2124bf2/0x231a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146317312 unmapped: 37314560 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 361 ms_handle_reset con 0x5640501a3c00 session 0x564050f2d2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146309120 unmapped: 37322752 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146309120 unmapped: 37322752 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 361 ms_handle_reset con 0x56404e73cc00 session 0x56404e5d4d20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 361 ms_handle_reset con 0x56404e73d400 session 0x564050d5b0e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 361 heartbeat osd_stat(store_statfs(0x4f859f000/0x0/0x4ffc00000, data 0x21267cd/0x231e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146325504 unmapped: 37306368 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2346328 data_alloc: 234881024 data_used: 15867904
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 362 ms_handle_reset con 0x56404e857400 session 0x564050f19860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146341888 unmapped: 37289984 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 362 handle_osd_map epochs [362,363], i have 362, src has [1,363]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 37257216 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 363 ms_handle_reset con 0x56405090ec00 session 0x564050d5ad20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 37257216 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 363 heartbeat osd_stat(store_statfs(0x4f8597000/0x0/0x4ffc00000, data 0x2129de0/0x2326000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 37257216 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 363 ms_handle_reset con 0x5640501a2400 session 0x564050aeb860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.729123116s of 10.968172073s, submitted: 65
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 363 ms_handle_reset con 0x56404e73d400 session 0x5640508c8000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 363 ms_handle_reset con 0x56404e73cc00 session 0x56404e90a000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 363 ms_handle_reset con 0x56404e857400 session 0x564050b7e000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146391040 unmapped: 37240832 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2354817 data_alloc: 234881024 data_used: 15876096
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 363 handle_osd_map epochs [364,364], i have 363, src has [1,364]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146399232 unmapped: 37232640 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 364 ms_handle_reset con 0x56405090ec00 session 0x564052c825a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 364 ms_handle_reset con 0x5640501a3000 session 0x564050ebb4a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 364 ms_handle_reset con 0x5640501a2800 session 0x56404e5d6000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146399232 unmapped: 37232640 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146399232 unmapped: 37232640 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 364 ms_handle_reset con 0x56404e73cc00 session 0x564050159860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 364 ms_handle_reset con 0x56404e73d400 session 0x56404f58e000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 364 heartbeat osd_stat(store_statfs(0x4f8594000/0x0/0x4ffc00000, data 0x212b99f/0x232a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146407424 unmapped: 37224448 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 364 ms_handle_reset con 0x56404e857400 session 0x564050e923c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146415616 unmapped: 37216256 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 365 ms_handle_reset con 0x56405090ec00 session 0x564050f18960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361693 data_alloc: 234881024 data_used: 15904768
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 365 ms_handle_reset con 0x56404e73d400 session 0x56404e2f5e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146415616 unmapped: 37216256 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 365 ms_handle_reset con 0x56404e857400 session 0x564050afab40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 365 handle_osd_map epochs [365,366], i have 365, src has [1,366]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146423808 unmapped: 37208064 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 366 ms_handle_reset con 0x5640501a2800 session 0x564050b7ef00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 366 ms_handle_reset con 0x56404e73cc00 session 0x564050efde00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 366 ms_handle_reset con 0x56405090ec00 session 0x564050afb860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 366 ms_handle_reset con 0x56404e73cc00 session 0x564050f2c780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 366 ms_handle_reset con 0x56404e73d400 session 0x564050f2d4a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 366 ms_handle_reset con 0x56404e857400 session 0x564050f2cb40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 366 heartbeat osd_stat(store_statfs(0x4f832f000/0x0/0x4ffc00000, data 0x238d131/0x258e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [0,0,0,0,0,1,0,4])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 155762688 unmapped: 27869184 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 366 ms_handle_reset con 0x5640501a2800 session 0x56404e5d45a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 366 ms_handle_reset con 0x564052f96400 session 0x564050aeb680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 366 ms_handle_reset con 0x56404e73cc00 session 0x564050aea780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 366 ms_handle_reset con 0x56404e73d400 session 0x564050684b40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 366 ms_handle_reset con 0x56404e857400 session 0x56404e3d1a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146964480 unmapped: 36667392 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 366 ms_handle_reset con 0x5640501a2800 session 0x56404f58eb40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.584374428s of 10.038191795s, submitted: 87
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 366 handle_osd_map epochs [366,367], i have 366, src has [1,367]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 367 ms_handle_reset con 0x564052f96000 session 0x56404e824960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146972672 unmapped: 36659200 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2433410 data_alloc: 234881024 data_used: 15925248
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146972672 unmapped: 36659200 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146972672 unmapped: 36659200 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 367 heartbeat osd_stat(store_statfs(0x4f7da3000/0x0/0x4ffc00000, data 0x2918ba7/0x2b1a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146972672 unmapped: 36659200 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 367 ms_handle_reset con 0x56404e73cc00 session 0x564050eba960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146972672 unmapped: 36659200 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 367 handle_osd_map epochs [367,368], i have 367, src has [1,368]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 368 ms_handle_reset con 0x5640501a2800 session 0x564050eba960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 146972672 unmapped: 36659200 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2437926 data_alloc: 234881024 data_used: 15925248
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 148602880 unmapped: 35028992 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 368 heartbeat osd_stat(store_statfs(0x4f7d9f000/0x0/0x4ffc00000, data 0x291a66c/0x2b1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 368 handle_osd_map epochs [368,369], i have 368, src has [1,369]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 369 ms_handle_reset con 0x564052f96000 session 0x56404f58eb40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 369 ms_handle_reset con 0x564054011800 session 0x564050afa780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 369 ms_handle_reset con 0x56404f007800 session 0x56404e3d1a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 148619264 unmapped: 35012608 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 369 ms_handle_reset con 0x56404e73cc00 session 0x564050684b40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 369 ms_handle_reset con 0x56404f007800 session 0x564050aea780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 369 ms_handle_reset con 0x564052f96000 session 0x564050aeb680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 369 ms_handle_reset con 0x5640501a2800 session 0x564050d5b860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 148627456 unmapped: 35004416 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 369 ms_handle_reset con 0x564054011800 session 0x56404e5d45a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 148627456 unmapped: 35004416 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 148627456 unmapped: 35004416 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2527991 data_alloc: 234881024 data_used: 24023040
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 369 handle_osd_map epochs [369,370], i have 369, src has [1,370]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.920071602s of 11.178400040s, submitted: 41
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 370 ms_handle_reset con 0x56404e73cc00 session 0x564050f2cb40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 148627456 unmapped: 35004416 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 370 ms_handle_reset con 0x56404f007800 session 0x564050f2c780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 370 ms_handle_reset con 0x5640501a2800 session 0x56404e8b94a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 148627456 unmapped: 35004416 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 370 heartbeat osd_stat(store_statfs(0x4f7998000/0x0/0x4ffc00000, data 0x2d1dd76/0x2f25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 370 ms_handle_reset con 0x56404f007c00 session 0x564050f181e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 148635648 unmapped: 34996224 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 370 ms_handle_reset con 0x564050e63800 session 0x56404e8b92c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 148635648 unmapped: 34996224 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 150347776 unmapped: 33284096 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2564547 data_alloc: 251658240 data_used: 28225536
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 371 ms_handle_reset con 0x564050e63800 session 0x564050aeb4a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 371 ms_handle_reset con 0x56404e73cc00 session 0x56404db88960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 371 heartbeat osd_stat(store_statfs(0x4f7995000/0x0/0x4ffc00000, data 0x2d1f947/0x2f28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 24748032 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 371 ms_handle_reset con 0x56404f007800 session 0x564050aea5a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 371 handle_osd_map epochs [371,372], i have 371, src has [1,372]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 372 ms_handle_reset con 0x56404f007c00 session 0x56404e5d65a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 158212096 unmapped: 25419776 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 372 ms_handle_reset con 0x5640501a2800 session 0x564050e92f00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 159055872 unmapped: 24576000 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 159055872 unmapped: 24576000 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 372 handle_osd_map epochs [372,373], i have 372, src has [1,373]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 373 heartbeat osd_stat(store_statfs(0x4f5a26000/0x0/0x4ffc00000, data 0x3ae74a6/0x3cef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 159055872 unmapped: 24576000 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2685993 data_alloc: 251658240 data_used: 29200384
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 159055872 unmapped: 24576000 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 159055872 unmapped: 24576000 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.506753922s of 12.138844490s, submitted: 198
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 24444928 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 373 heartbeat osd_stat(store_statfs(0x4f5a0a000/0x0/0x4ffc00000, data 0x3b0af25/0x3d14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 24444928 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 374 ms_handle_reset con 0x5640501a2800 session 0x5640508bf680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 374 heartbeat osd_stat(store_statfs(0x4f55f9000/0x0/0x4ffc00000, data 0x3f189ea/0x4124000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 24461312 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2726961 data_alloc: 251658240 data_used: 29216768
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 375 ms_handle_reset con 0x56404e73cc00 session 0x56404e824000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 24453120 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 376 ms_handle_reset con 0x56404f007800 session 0x564050938b40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 24453120 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 24444928 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 377 ms_handle_reset con 0x564050e63800 session 0x564050938960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 377 heartbeat osd_stat(store_statfs(0x4f55e4000/0x0/0x4ffc00000, data 0x3f28c53/0x4137000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 24395776 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 378 ms_handle_reset con 0x564050e62c00 session 0x564050938d20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 159440896 unmapped: 24190976 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2742094 data_alloc: 251658240 data_used: 29241344
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 378 ms_handle_reset con 0x56404f007c00 session 0x5640506854a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 159440896 unmapped: 24190976 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 159440896 unmapped: 24190976 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 378 ms_handle_reset con 0x56404e73d400 session 0x564050402000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 378 ms_handle_reset con 0x56404e857400 session 0x56404e5d5a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.384851456s of 10.000457764s, submitted: 82
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 378 ms_handle_reset con 0x56404f007800 session 0x5640511621e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 378 heartbeat osd_stat(store_statfs(0x4f55d8000/0x0/0x4ffc00000, data 0x3f37830/0x4146000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 378 ms_handle_reset con 0x56404e73cc00 session 0x56405017c1e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154165248 unmapped: 29466624 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154165248 unmapped: 29466624 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 378 handle_osd_map epochs [378,379], i have 378, src has [1,379]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154165248 unmapped: 29466624 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2517770 data_alloc: 234881024 data_used: 20185088
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154165248 unmapped: 29466624 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 379 ms_handle_reset con 0x56404e73cc00 session 0x564050e93e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 379 ms_handle_reset con 0x56404e73d400 session 0x56404e2f5e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154173440 unmapped: 29458432 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 379 ms_handle_reset con 0x56404e857400 session 0x564050ebba40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154181632 unmapped: 29450240 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 379 heartbeat osd_stat(store_statfs(0x4f6bb2000/0x0/0x4ffc00000, data 0x295c25d/0x2b6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 379 ms_handle_reset con 0x5640501a2800 session 0x564050f19680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 379 handle_osd_map epochs [380,380], i have 379, src has [1,380]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154238976 unmapped: 29392896 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 380 ms_handle_reset con 0x564050e63800 session 0x56404f5914a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 380 handle_osd_map epochs [380,381], i have 380, src has [1,381]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 381 ms_handle_reset con 0x564050e63c00 session 0x564050939a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 381 ms_handle_reset con 0x56404e73cc00 session 0x564050938b40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 381 ms_handle_reset con 0x56404f007c00 session 0x56404e5db2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154263552 unmapped: 29368320 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2546805 data_alloc: 234881024 data_used: 20201472
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 381 handle_osd_map epochs [381,382], i have 381, src has [1,382]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 381 handle_osd_map epochs [382,382], i have 382, src has [1,382]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154206208 unmapped: 29425664 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 382 ms_handle_reset con 0x56404e73d400 session 0x564050e92f00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 382 ms_handle_reset con 0x56404e857400 session 0x564050aeb4a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 382 ms_handle_reset con 0x56404e73cc00 session 0x564050aeb680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154214400 unmapped: 29417472 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.758539200s of 10.025648117s, submitted: 120
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154230784 unmapped: 29401088 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 383 ms_handle_reset con 0x56404e73d400 session 0x564050aea780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 383 heartbeat osd_stat(store_statfs(0x4f6b00000/0x0/0x4ffc00000, data 0x2a8c6ce/0x2c1a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [0,0,0,0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 383 handle_osd_map epochs [383,384], i have 383, src has [1,384]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 383 handle_osd_map epochs [384,384], i have 384, src has [1,384]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153313280 unmapped: 30318592 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 384 ms_handle_reset con 0x564050e63c00 session 0x5640508bfe00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 384 ms_handle_reset con 0x56404f007c00 session 0x564050eba960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153321472 unmapped: 30310400 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2562525 data_alloc: 234881024 data_used: 20246528
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153321472 unmapped: 30310400 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153542656 unmapped: 30089216 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153542656 unmapped: 30089216 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f6ad5000/0x0/0x4ffc00000, data 0x2b3fe60/0x2c46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153575424 unmapped: 30056448 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 385 ms_handle_reset con 0x5640501a2800 session 0x56404e5d72c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153583616 unmapped: 30048256 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2581681 data_alloc: 234881024 data_used: 20279296
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153583616 unmapped: 30048256 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f6ad1000/0x0/0x4ffc00000, data 0x2b41a15/0x2c49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 386 heartbeat osd_stat(store_statfs(0x4f6ad1000/0x0/0x4ffc00000, data 0x2b434b0/0x2c4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 386 ms_handle_reset con 0x56404f007800 session 0x5640506843c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153583616 unmapped: 30048256 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.330673695s of 10.250423431s, submitted: 77
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 386 ms_handle_reset con 0x56404e73cc00 session 0x5640509385a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153583616 unmapped: 30048256 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 386 ms_handle_reset con 0x56404f007c00 session 0x564050eba000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 386 ms_handle_reset con 0x56404e73d400 session 0x56404f492960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153583616 unmapped: 30048256 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 387 heartbeat osd_stat(store_statfs(0x4f66c0000/0x0/0x4ffc00000, data 0x2b43522/0x2c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 387 ms_handle_reset con 0x564050e63c00 session 0x564050b7f2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153608192 unmapped: 30023680 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2593320 data_alloc: 234881024 data_used: 20287488
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 387 ms_handle_reset con 0x564050e63c00 session 0x56404e90a1e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 387 heartbeat osd_stat(store_statfs(0x4f66bc000/0x0/0x4ffc00000, data 0x2b45101/0x2c52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153624576 unmapped: 30007296 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 388 ms_handle_reset con 0x56404e73cc00 session 0x56405017dc20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153632768 unmapped: 29999104 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 388 ms_handle_reset con 0x56404f007800 session 0x56404f4f32c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153632768 unmapped: 29999104 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153632768 unmapped: 29999104 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 388 heartbeat osd_stat(store_statfs(0x4f66b9000/0x0/0x4ffc00000, data 0x2b46c70/0x2c54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,3])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153632768 unmapped: 29999104 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2596163 data_alloc: 234881024 data_used: 20865024
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 388 heartbeat osd_stat(store_statfs(0x4f66ba000/0x0/0x4ffc00000, data 0x2b46c70/0x2c54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 388 ms_handle_reset con 0x56404f007c00 session 0x56404f493c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153632768 unmapped: 29999104 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 388 ms_handle_reset con 0x56404e73d400 session 0x56404f493860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 388 ms_handle_reset con 0x56404e73cc00 session 0x564050afa960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153632768 unmapped: 29999104 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.328996658s of 10.135367393s, submitted: 56
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153632768 unmapped: 29999104 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 388 heartbeat osd_stat(store_statfs(0x4f66bb000/0x0/0x4ffc00000, data 0x2b46c60/0x2c53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 388 ms_handle_reset con 0x56404f007c00 session 0x564050e93680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 388 ms_handle_reset con 0x56404f007800 session 0x56404f4f3a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153632768 unmapped: 29999104 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 389 ms_handle_reset con 0x564050e63c00 session 0x56404e3d0960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153640960 unmapped: 29990912 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2601083 data_alloc: 234881024 data_used: 20877312
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153657344 unmapped: 29974528 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153657344 unmapped: 29974528 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 390 ms_handle_reset con 0x564050ff7000 session 0x564050e93c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153657344 unmapped: 29974528 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f66b4000/0x0/0x4ffc00000, data 0x2b4a412/0x2c5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153657344 unmapped: 29974528 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 390 ms_handle_reset con 0x564050ff6c00 session 0x56404e3d1a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 390 ms_handle_reset con 0x56404e73cc00 session 0x56404e90a780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 390 handle_osd_map epochs [391,392], i have 390, src has [1,392]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153657344 unmapped: 29974528 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2613024 data_alloc: 234881024 data_used: 20922368
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 392 ms_handle_reset con 0x564050e63c00 session 0x564051e950e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 392 ms_handle_reset con 0x56404f007c00 session 0x564052c83a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153657344 unmapped: 29974528 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f66ad000/0x0/0x4ffc00000, data 0x2b4da0e/0x2c60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 393 ms_handle_reset con 0x564050ff7000 session 0x564050b7fc20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 393 ms_handle_reset con 0x56404e73cc00 session 0x564050ebb4a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153665536 unmapped: 29966336 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 393 ms_handle_reset con 0x56404f007c00 session 0x56405017c1e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153665536 unmapped: 29966336 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.271565437s of 10.499337196s, submitted: 56
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 393 ms_handle_reset con 0x564050ff6c00 session 0x564051163c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153673728 unmapped: 29958144 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153673728 unmapped: 29958144 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2616363 data_alloc: 234881024 data_used: 20942848
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 393 heartbeat osd_stat(store_statfs(0x4f66aa000/0x0/0x4ffc00000, data 0x2b4f631/0x2c63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 394 ms_handle_reset con 0x56404f5a0000 session 0x564050685680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153690112 unmapped: 29941760 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 394 ms_handle_reset con 0x564050ff7000 session 0x56404f591680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 394 ms_handle_reset con 0x56404e73cc00 session 0x56404f71f860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153690112 unmapped: 29941760 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 394 ms_handle_reset con 0x564050e63c00 session 0x564050ebba40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 395 ms_handle_reset con 0x56404f007c00 session 0x56405017cf00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153698304 unmapped: 29933568 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 395 heartbeat osd_stat(store_statfs(0x4f66a8000/0x0/0x4ffc00000, data 0x2b511ca/0x2c66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 395 ms_handle_reset con 0x564050ff6c00 session 0x56404e90a780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 395 ms_handle_reset con 0x564054011c00 session 0x564050b7f680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 395 ms_handle_reset con 0x564054011400 session 0x56404e3d0960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153706496 unmapped: 29925376 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 396 ms_handle_reset con 0x56404f5a3c00 session 0x564050afa960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153706496 unmapped: 29925376 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2627285 data_alloc: 234881024 data_used: 21110784
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 396 ms_handle_reset con 0x56404f007c00 session 0x56404e90ab40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.3 total, 600.0 interval#012Cumulative writes: 18K writes, 68K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 18K writes, 5951 syncs, 3.04 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 25.91 MB, 0.04 MB/s#012Interval WAL: 10K writes, 4344 syncs, 2.45 writes per sync, written: 0.03 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 396 ms_handle_reset con 0x56404e73cc00 session 0x56405017dc20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153731072 unmapped: 29900800 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 396 heartbeat osd_stat(store_statfs(0x4f66a0000/0x0/0x4ffc00000, data 0x2b5483a/0x2c6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 396 ms_handle_reset con 0x564050e63c00 session 0x564050aeb4a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 397 ms_handle_reset con 0x564050e63c00 session 0x564050938b40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 397 ms_handle_reset con 0x564050ff6c00 session 0x56404e90b680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153739264 unmapped: 29892608 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 397 ms_handle_reset con 0x56404e73cc00 session 0x56404e8b94a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 397 ms_handle_reset con 0x56404f007c00 session 0x56404e8b8b40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153747456 unmapped: 29884416 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.936318398s of 10.307854652s, submitted: 89
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 398 ms_handle_reset con 0x56404f5a3c00 session 0x56404f499c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 398 ms_handle_reset con 0x564054011400 session 0x56404f498000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153763840 unmapped: 29868032 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 399 ms_handle_reset con 0x56404e73cc00 session 0x56404f4a2780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153780224 unmapped: 29851648 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2639951 data_alloc: 234881024 data_used: 21127168
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 399 ms_handle_reset con 0x564050e63c00 session 0x56404e5d7c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 399 ms_handle_reset con 0x564050ff6c00 session 0x56404e5d70e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153796608 unmapped: 29835264 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 399 heartbeat osd_stat(store_statfs(0x4f669a000/0x0/0x4ffc00000, data 0x2b599c2/0x2c74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,1,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153845760 unmapped: 29786112 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154910720 unmapped: 28721152 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 400 ms_handle_reset con 0x564054010c00 session 0x56404e90a000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 400 ms_handle_reset con 0x56404f007c00 session 0x564050eba780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 400 ms_handle_reset con 0x564054011c00 session 0x564050aebe00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154910720 unmapped: 28721152 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154927104 unmapped: 28704768 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2640706 data_alloc: 234881024 data_used: 21098496
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 401 heartbeat osd_stat(store_statfs(0x4f669f000/0x0/0x4ffc00000, data 0x2ac90a4/0x2c6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154927104 unmapped: 28704768 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 401 heartbeat osd_stat(store_statfs(0x4f669f000/0x0/0x4ffc00000, data 0x2ac90a4/0x2c6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 401 ms_handle_reset con 0x56404e73cc00 session 0x56404e5da960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 401 heartbeat osd_stat(store_statfs(0x4f669f000/0x0/0x4ffc00000, data 0x2ac90a4/0x2c6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154927104 unmapped: 28704768 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154935296 unmapped: 28696576 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154935296 unmapped: 28696576 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 4.760511398s of 10.699627876s, submitted: 95
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 401 heartbeat osd_stat(store_statfs(0x4f66a0000/0x0/0x4ffc00000, data 0x2ac90a4/0x2c6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 401 handle_osd_map epochs [402,402], i have 402, src has [1,402]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2644162 data_alloc: 234881024 data_used: 21110784
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154943488 unmapped: 28688384 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 401 handle_osd_map epochs [402,402], i have 402, src has [1,402]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f669c000/0x0/0x4ffc00000, data 0x2acac3d/0x2c71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154951680 unmapped: 28680192 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154976256 unmapped: 28655616 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f6698000/0x0/0x4ffc00000, data 0x2a3781e/0x2c67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 403 ms_handle_reset con 0x564050ff6c00 session 0x56404f4f3860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 403 ms_handle_reset con 0x564054011400 session 0x56404e825e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154976256 unmapped: 28655616 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 403 ms_handle_reset con 0x56404e73cc00 session 0x56404f4f30e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154976256 unmapped: 28655616 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 404 ms_handle_reset con 0x564050ff6c00 session 0x56404e60a000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2643579 data_alloc: 234881024 data_used: 21127168
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 404 ms_handle_reset con 0x564050e63c00 session 0x564050669a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 154992640 unmapped: 28639232 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f66a3000/0x0/0x4ffc00000, data 0x2a3929d/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 155000832 unmapped: 28631040 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 405 ms_handle_reset con 0x56404f007c00 session 0x564050e925a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153821184 unmapped: 29810688 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 405 ms_handle_reset con 0x564054011c00 session 0x564050427a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 405 ms_handle_reset con 0x564054010800 session 0x56404f58f2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 405 heartbeat osd_stat(store_statfs(0x4f66a0000/0x0/0x4ffc00000, data 0x2a3ae7e/0x2c6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 405 ms_handle_reset con 0x564052f96000 session 0x564050afb860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 405 ms_handle_reset con 0x564054011800 session 0x564050f2cf00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 405 ms_handle_reset con 0x56404e73cc00 session 0x56404e5db2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153862144 unmapped: 29769728 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 405 ms_handle_reset con 0x56404f007c00 session 0x564050f19860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 405 ms_handle_reset con 0x56404e73cc00 session 0x564050e92960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 405 heartbeat osd_stat(store_statfs(0x4f6753000/0x0/0x4ffc00000, data 0x2988df9/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 151420928 unmapped: 32210944 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 4.774857521s of 10.260143280s, submitted: 104
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 406 ms_handle_reset con 0x56404f007c00 session 0x564050f2d860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2540763 data_alloc: 234881024 data_used: 16171008
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 406 heartbeat osd_stat(store_statfs(0x4f6f6c000/0x0/0x4ffc00000, data 0x2171df9/0x23a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 151420928 unmapped: 32210944 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 407 ms_handle_reset con 0x564052f96000 session 0x5640508c85a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 151420928 unmapped: 32210944 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 407 ms_handle_reset con 0x564054010800 session 0x564050426960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 151429120 unmapped: 32202752 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 408 ms_handle_reset con 0x564054011800 session 0x56404f4f3c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 408 ms_handle_reset con 0x56404e73cc00 session 0x564050668f00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 408 ms_handle_reset con 0x56404f007c00 session 0x564050aeb2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152035328 unmapped: 31596544 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 408 heartbeat osd_stat(store_statfs(0x4f6f62000/0x0/0x4ffc00000, data 0x2176ff2/0x23ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 408 ms_handle_reset con 0x564052f96000 session 0x564050d5a1e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 408 ms_handle_reset con 0x564054010800 session 0x56404e60ad20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 408 ms_handle_reset con 0x564050e63c00 session 0x564050668960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152035328 unmapped: 31596544 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2549022 data_alloc: 234881024 data_used: 16248832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152043520 unmapped: 31588352 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152043520 unmapped: 31588352 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 409 ms_handle_reset con 0x56404e73cc00 session 0x56404e3d0000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 409 ms_handle_reset con 0x56404f007c00 session 0x56404f492960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 409 ms_handle_reset con 0x564050e63c00 session 0x56404f5910e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152084480 unmapped: 31547392 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152084480 unmapped: 31547392 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 409 handle_osd_map epochs [411,411], i have 409, src has [1,411]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 409 handle_osd_map epochs [410,411], i have 409, src has [1,411]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 411 ms_handle_reset con 0x564052f96000 session 0x564050afb2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 411 ms_handle_reset con 0x564054010800 session 0x564050938960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152084480 unmapped: 31547392 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 411 heartbeat osd_stat(store_statfs(0x4f6cc0000/0x0/0x4ffc00000, data 0x2414205/0x264d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 411 handle_osd_map epochs [412,412], i have 412, src has [1,412]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.673187256s of 10.024498940s, submitted: 126
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2587029 data_alloc: 234881024 data_used: 16248832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152100864 unmapped: 31531008 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 412 heartbeat osd_stat(store_statfs(0x4f6cbc000/0x0/0x4ffc00000, data 0x2415e0e/0x2650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 412 heartbeat osd_stat(store_statfs(0x4f6cbc000/0x0/0x4ffc00000, data 0x2415e0e/0x2650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152100864 unmapped: 31531008 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 412 ms_handle_reset con 0x56404e73cc00 session 0x56404f591c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152100864 unmapped: 31531008 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152100864 unmapped: 31531008 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 413 ms_handle_reset con 0x56404f007c00 session 0x56404f4f30e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 413 ms_handle_reset con 0x564052f96000 session 0x56404e5d72c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 413 ms_handle_reset con 0x564050e63c00 session 0x56404e5d7c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152141824 unmapped: 31490048 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 414 ms_handle_reset con 0x564050ff6c00 session 0x56404f4a2780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2597001 data_alloc: 234881024 data_used: 16261120
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 414 heartbeat osd_stat(store_statfs(0x4f6cb5000/0x0/0x4ffc00000, data 0x241946c/0x2657000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152166400 unmapped: 31465472 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 415 ms_handle_reset con 0x56404e73cc00 session 0x56404e8b94a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152182784 unmapped: 31449088 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152182784 unmapped: 31449088 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 415 ms_handle_reset con 0x56404f007c00 session 0x56404e90ab40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152182784 unmapped: 31449088 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152199168 unmapped: 31432704 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f6cb5000/0x0/0x4ffc00000, data 0x241afdb/0x2659000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 415 ms_handle_reset con 0x564050e63c00 session 0x564050b7f680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2597034 data_alloc: 234881024 data_used: 16273408
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152199168 unmapped: 31432704 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 415 ms_handle_reset con 0x564052f96000 session 0x56404f71f680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152199168 unmapped: 31432704 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.221773148s of 11.727247238s, submitted: 93
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f6cb5000/0x0/0x4ffc00000, data 0x241afdb/0x2659000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152190976 unmapped: 31440896 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152190976 unmapped: 31440896 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152190976 unmapped: 31440896 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f6cb5000/0x0/0x4ffc00000, data 0x241afdb/0x2659000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2601340 data_alloc: 234881024 data_used: 16281600
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152207360 unmapped: 31424512 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152207360 unmapped: 31424512 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 416 heartbeat osd_stat(store_statfs(0x4f6cb1000/0x0/0x4ffc00000, data 0x241ca3e/0x265c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 416 ms_handle_reset con 0x564054011000 session 0x56404e2f4000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152231936 unmapped: 31399936 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152231936 unmapped: 31399936 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 417 ms_handle_reset con 0x564054010400 session 0x56404f71e000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152240128 unmapped: 31391744 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 417 ms_handle_reset con 0x564054010000 session 0x5640525dfe00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 417 ms_handle_reset con 0x56404e73cc00 session 0x56404f71fa40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 418 ms_handle_reset con 0x56404f007c00 session 0x56404f71e780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2627668 data_alloc: 234881024 data_used: 19001344
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 418 heartbeat osd_stat(store_statfs(0x4f6caa000/0x0/0x4ffc00000, data 0x242018c/0x2662000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152264704 unmapped: 31367168 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 418 ms_handle_reset con 0x564050e63c00 session 0x564050e92b40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152281088 unmapped: 31350784 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.746252060s of 10.005026817s, submitted: 29
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 418 heartbeat osd_stat(store_statfs(0x4f6caa000/0x0/0x4ffc00000, data 0x242018c/0x2662000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152281088 unmapped: 31350784 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 418 ms_handle_reset con 0x56404e73cc00 session 0x564050aeba40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152281088 unmapped: 31350784 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152281088 unmapped: 31350784 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 418 heartbeat osd_stat(store_statfs(0x4f6cac000/0x0/0x4ffc00000, data 0x242018c/0x2662000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2625707 data_alloc: 234881024 data_used: 19005440
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152281088 unmapped: 31350784 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 418 ms_handle_reset con 0x56404f007c00 session 0x56404e824960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152289280 unmapped: 31342592 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 418 ms_handle_reset con 0x564054010000 session 0x56404e7b05a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152289280 unmapped: 31342592 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 419 heartbeat osd_stat(store_statfs(0x4f6cac000/0x0/0x4ffc00000, data 0x242018c/0x2662000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152297472 unmapped: 31334400 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152297472 unmapped: 31334400 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2632309 data_alloc: 234881024 data_used: 19009536
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152297472 unmapped: 31334400 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 419 heartbeat osd_stat(store_statfs(0x4f6ca8000/0x0/0x4ffc00000, data 0x2421b9d/0x2665000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152313856 unmapped: 31318016 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.738299847s of 10.214635849s, submitted: 27
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152313856 unmapped: 31318016 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152576000 unmapped: 31055872 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 420 ms_handle_reset con 0x564054010400 session 0x564050669e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152584192 unmapped: 31047680 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 420 heartbeat osd_stat(store_statfs(0x4f6c65000/0x0/0x4ffc00000, data 0x246376e/0x26a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 420 ms_handle_reset con 0x564052f96000 session 0x56404e5d41e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2637959 data_alloc: 234881024 data_used: 19009536
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152584192 unmapped: 31047680 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152584192 unmapped: 31047680 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152584192 unmapped: 31047680 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152616960 unmapped: 31014912 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152633344 unmapped: 30998528 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 423 heartbeat osd_stat(store_statfs(0x4f6c5e000/0x0/0x4ffc00000, data 0x2466ea0/0x26ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 423 ms_handle_reset con 0x564052f97000 session 0x56404e2f45a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2648018 data_alloc: 234881024 data_used: 19021824
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152690688 unmapped: 30941184 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 423 ms_handle_reset con 0x56404e73cc00 session 0x56404f4a2000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152707072 unmapped: 30924800 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 423 ms_handle_reset con 0x564052f96000 session 0x564050158d20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.415133476s of 10.109007835s, submitted: 34
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 151748608 unmapped: 31883264 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 424 ms_handle_reset con 0x56404f007c00 session 0x564050684000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 424 ms_handle_reset con 0x564054010000 session 0x564050b7f4a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 151748608 unmapped: 31883264 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 151748608 unmapped: 31883264 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 424 ms_handle_reset con 0x56404e73cc00 session 0x56404f4a45a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2610664 data_alloc: 234881024 data_used: 16310272
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 424 heartbeat osd_stat(store_statfs(0x4f6ef1000/0x0/0x4ffc00000, data 0x21d24d4/0x241c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1,2])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 151748608 unmapped: 31883264 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 424 ms_handle_reset con 0x564054010400 session 0x564051e95860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 151773184 unmapped: 31858688 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 151781376 unmapped: 31850496 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 424 ms_handle_reset con 0x56404f007c00 session 0x5640508c8d20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 424 ms_handle_reset con 0x564052f96000 session 0x5640508c92c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 424 ms_handle_reset con 0x564052f97000 session 0x564050938b40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 151150592 unmapped: 32481280 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 424 ms_handle_reset con 0x56404e73cc00 session 0x56404e2f41e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 151158784 unmapped: 32473088 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 424 heartbeat osd_stat(store_statfs(0x4f6ef2000/0x0/0x4ffc00000, data 0x21d24d4/0x241c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2613686 data_alloc: 234881024 data_used: 16318464
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152215552 unmapped: 31416320 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152215552 unmapped: 31416320 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 4.363270283s of 10.081651688s, submitted: 114
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 426 ms_handle_reset con 0x56404f007c00 session 0x56405017da40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 426 ms_handle_reset con 0x564052f96000 session 0x564050e925a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 427 ms_handle_reset con 0x564054010400 session 0x56404f4f30e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 427 ms_handle_reset con 0x564052f96800 session 0x564050f2cf00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 427 ms_handle_reset con 0x56404e73cc00 session 0x564050e92960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 427 heartbeat osd_stat(store_statfs(0x4f6eec000/0x0/0x4ffc00000, data 0x21d5ab4/0x2422000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152264704 unmapped: 31367168 heap: 183631872 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 427 ms_handle_reset con 0x56404f007c00 session 0x564050f19860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 427 ms_handle_reset con 0x564052f96000 session 0x564050eba5a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 427 ms_handle_reset con 0x564052f96800 session 0x564050938960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152920064 unmapped: 32333824 heap: 185253888 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 427 ms_handle_reset con 0x564054010400 session 0x56404f591c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 427 ms_handle_reset con 0x56404e73cc00 session 0x56404e8b92c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 152920064 unmapped: 32333824 heap: 185253888 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2733986 data_alloc: 234881024 data_used: 16326656
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153116672 unmapped: 34242560 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 427 ms_handle_reset con 0x56404f007c00 session 0x56404e8b94a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 427 heartbeat osd_stat(store_statfs(0x4f616d000/0x0/0x4ffc00000, data 0x2f53685/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153116672 unmapped: 34242560 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153116672 unmapped: 34242560 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 427 heartbeat osd_stat(store_statfs(0x4f616d000/0x0/0x4ffc00000, data 0x2f53685/0x31a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153116672 unmapped: 34242560 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 427 ms_handle_reset con 0x564052f96000 session 0x564050938960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153468928 unmapped: 33890304 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 428 ms_handle_reset con 0x564050e50800 session 0x564050d5b680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2739958 data_alloc: 234881024 data_used: 16404480
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 153468928 unmapped: 33890304 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 155140096 unmapped: 32219136 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f613b000/0x0/0x4ffc00000, data 0x2f80c81/0x31d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 155140096 unmapped: 32219136 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 155140096 unmapped: 32219136 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 430 ms_handle_reset con 0x564050e51800 session 0x564050d5b860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 155140096 unmapped: 32219136 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2788340 data_alloc: 234881024 data_used: 22933504
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 155140096 unmapped: 32219136 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 155140096 unmapped: 32219136 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 155140096 unmapped: 32219136 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f6139000/0x0/0x4ffc00000, data 0x2f82852/0x31d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 155140096 unmapped: 32219136 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f6139000/0x0/0x4ffc00000, data 0x2f82852/0x31d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f6139000/0x0/0x4ffc00000, data 0x2f82852/0x31d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 155140096 unmapped: 32219136 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.084865570s of 18.484209061s, submitted: 94
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 430 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2797608 data_alloc: 234881024 data_used: 22937600
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 155140096 unmapped: 32219136 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 432 ms_handle_reset con 0x56404e73cc00 session 0x564050d5a780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 155156480 unmapped: 32202752 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 162963456 unmapped: 24395776 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 158089216 unmapped: 29270016 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 432 ms_handle_reset con 0x564050e50800 session 0x56404e5db860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 161161216 unmapped: 26198016 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 432 ms_handle_reset con 0x56404f007c00 session 0x564050d5ad20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 432 heartbeat osd_stat(store_statfs(0x4f5795000/0x0/0x4ffc00000, data 0x3924e4e/0x3b79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,11])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2871416 data_alloc: 234881024 data_used: 22966272
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 161595392 unmapped: 25763840 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 161595392 unmapped: 25763840 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 433 ms_handle_reset con 0x564052f96000 session 0x56404e8b85a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 161595392 unmapped: 25763840 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 165863424 unmapped: 21495808 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 165011456 unmapped: 22347776 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 433 heartbeat osd_stat(store_statfs(0x4f576f000/0x0/0x4ffc00000, data 0x3948a1f/0x3b9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 3.335645914s of 10.109526634s, submitted: 123
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2943744 data_alloc: 251658240 data_used: 31055872
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 166215680 unmapped: 21143552 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 166215680 unmapped: 21143552 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 166215680 unmapped: 21143552 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 433 heartbeat osd_stat(store_statfs(0x4f5760000/0x0/0x4ffc00000, data 0x3958a1f/0x3bae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 166215680 unmapped: 21143552 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 166215680 unmapped: 21143552 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2946784 data_alloc: 251658240 data_used: 31293440
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 166215680 unmapped: 21143552 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 166215680 unmapped: 21143552 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 434 heartbeat osd_stat(store_statfs(0x4f5759000/0x0/0x4ffc00000, data 0x395d482/0x3bb4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 166223872 unmapped: 21135360 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 166223872 unmapped: 21135360 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 168902656 unmapped: 18456576 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.313314438s of 10.020269394s, submitted: 67
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2997514 data_alloc: 251658240 data_used: 31461376
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169811968 unmapped: 17547264 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 434 heartbeat osd_stat(store_statfs(0x4f53c8000/0x0/0x4ffc00000, data 0x3ce8482/0x3f3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169811968 unmapped: 17547264 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169811968 unmapped: 17547264 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169811968 unmapped: 17547264 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169811968 unmapped: 17547264 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 434 heartbeat osd_stat(store_statfs(0x4f537b000/0x0/0x4ffc00000, data 0x3d1c482/0x3f73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 434 heartbeat osd_stat(store_statfs(0x4f537b000/0x0/0x4ffc00000, data 0x3d1c482/0x3f73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2998344 data_alloc: 251658240 data_used: 31469568
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169811968 unmapped: 17547264 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 168706048 unmapped: 18653184 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 168706048 unmapped: 18653184 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 435 ms_handle_reset con 0x564050e5f800 session 0x564050aeaf00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 168714240 unmapped: 18644992 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 168714240 unmapped: 18644992 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2996467 data_alloc: 251658240 data_used: 31735808
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 168714240 unmapped: 18644992 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 435 ms_handle_reset con 0x564050e5dc00 session 0x564051163a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 435 heartbeat osd_stat(store_statfs(0x4f5393000/0x0/0x4ffc00000, data 0x3d2116f/0x3f7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 168714240 unmapped: 18644992 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 435 ms_handle_reset con 0x564052f96800 session 0x564050f19860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 435 ms_handle_reset con 0x564050e51400 session 0x56405017dc20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.905604362s of 12.068167686s, submitted: 30
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 435 ms_handle_reset con 0x56404e73cc00 session 0x564050f2c3c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 163749888 unmapped: 23609344 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 435 heartbeat osd_stat(store_statfs(0x4f5393000/0x0/0x4ffc00000, data 0x3d2116f/0x3f7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 163749888 unmapped: 23609344 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 163749888 unmapped: 23609344 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2802044 data_alloc: 234881024 data_used: 23101440
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 163749888 unmapped: 23609344 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 435 ms_handle_reset con 0x56404f007c00 session 0x5640508c8000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 163749888 unmapped: 23609344 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 435 ms_handle_reset con 0x56404e73cc00 session 0x56404e7b0d20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 163749888 unmapped: 23609344 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 163749888 unmapped: 23609344 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 435 ms_handle_reset con 0x56404f007c00 session 0x56404e5d65a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 435 ms_handle_reset con 0x564050e51400 session 0x564050426960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 435 heartbeat osd_stat(store_statfs(0x4f63c9000/0x0/0x4ffc00000, data 0x2ceb1d1/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 163749888 unmapped: 23609344 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2804614 data_alloc: 234881024 data_used: 23097344
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 435 heartbeat osd_stat(store_statfs(0x4f63c9000/0x0/0x4ffc00000, data 0x2ceb1d1/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 163749888 unmapped: 23609344 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 435 heartbeat osd_stat(store_statfs(0x4f63c9000/0x0/0x4ffc00000, data 0x2ceb1d1/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 163749888 unmapped: 23609344 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 163749888 unmapped: 23609344 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.402039528s of 10.453696251s, submitted: 14
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 435 ms_handle_reset con 0x564050e5dc00 session 0x564050e925a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 163938304 unmapped: 23420928 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 163938304 unmapped: 23420928 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 435 heartbeat osd_stat(store_statfs(0x4f639f000/0x0/0x4ffc00000, data 0x2d151d1/0x2f6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2810701 data_alloc: 234881024 data_used: 23228416
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 163938304 unmapped: 23420928 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 163938304 unmapped: 23420928 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 163930112 unmapped: 23429120 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 435 ms_handle_reset con 0x564052f96000 session 0x56404e60be00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 435 ms_handle_reset con 0x56404e73cc00 session 0x56404e825860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 435 heartbeat osd_stat(store_statfs(0x4f639f000/0x0/0x4ffc00000, data 0x2d151d1/0x2f6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 165355520 unmapped: 22003712 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 165355520 unmapped: 22003712 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2810381 data_alloc: 234881024 data_used: 23228416
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 165355520 unmapped: 22003712 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 435 heartbeat osd_stat(store_statfs(0x4f639c000/0x0/0x4ffc00000, data 0x2f451d1/0x2f72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 165355520 unmapped: 22003712 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 165355520 unmapped: 22003712 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.352780342s of 10.407973289s, submitted: 14
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 436 ms_handle_reset con 0x56404f5a2c00 session 0x56404e5da1e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 165363712 unmapped: 21995520 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 164282368 unmapped: 23076864 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 436 ms_handle_reset con 0x564052f96400 session 0x56404e2f41e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 436 ms_handle_reset con 0x56404e73b400 session 0x56404e8b9860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f6396000/0x0/0x4ffc00000, data 0x2f46dc0/0x2f77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2843340 data_alloc: 234881024 data_used: 23416832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 162529280 unmapped: 24829952 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 436 ms_handle_reset con 0x564050e53c00 session 0x56404e90a1e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 436 ms_handle_reset con 0x56404e73b400 session 0x5640501585a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 162586624 unmapped: 24772608 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 162586624 unmapped: 24772608 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 436 ms_handle_reset con 0x56404e73cc00 session 0x56404db88960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 437 ms_handle_reset con 0x56404f5a2c00 session 0x56404f591860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 162594816 unmapped: 24764416 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 162594816 unmapped: 24764416 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 437 heartbeat osd_stat(store_statfs(0x4f6393000/0x0/0x4ffc00000, data 0x2f48991/0x2f7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 437 ms_handle_reset con 0x564052f96400 session 0x5640508bf680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2854983 data_alloc: 234881024 data_used: 24510464
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 162594816 unmapped: 24764416 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 162996224 unmapped: 24363008 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 162996224 unmapped: 24363008 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 162996224 unmapped: 24363008 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 437 heartbeat osd_stat(store_statfs(0x4f6327000/0x0/0x4ffc00000, data 0x2fb692f/0x2fe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 162996224 unmapped: 24363008 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 437 handle_osd_map epochs [437,438], i have 437, src has [1,438]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.615792274s of 11.851361275s, submitted: 39
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2866541 data_alloc: 234881024 data_used: 24518656
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 162578432 unmapped: 24780800 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 162627584 unmapped: 24731648 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 162627584 unmapped: 24731648 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 438 ms_handle_reset con 0x5640501d3c00 session 0x5640508c85a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 438 ms_handle_reset con 0x56404e73b400 session 0x564050f2d860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 162627584 unmapped: 24731648 heap: 187359232 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 438 ms_handle_reset con 0x56404e73cc00 session 0x564050d5a1e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 162922496 unmapped: 28639232 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 438 heartbeat osd_stat(store_statfs(0x4f54c3000/0x0/0x4ffc00000, data 0x3e46392/0x3e4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 438 ms_handle_reset con 0x56404f5a2c00 session 0x564051e95e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3179749 data_alloc: 234881024 data_used: 24518656
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 162922496 unmapped: 28639232 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 166797312 unmapped: 24764416 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 438 heartbeat osd_stat(store_statfs(0x4f3696000/0x0/0x4ffc00000, data 0x5c72392/0x5c77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 166797312 unmapped: 24764416 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 166805504 unmapped: 24756224 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 166805504 unmapped: 24756224 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3210393 data_alloc: 234881024 data_used: 26247168
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 166813696 unmapped: 24748032 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 166813696 unmapped: 24748032 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 438 heartbeat osd_stat(store_statfs(0x4f3696000/0x0/0x4ffc00000, data 0x5c72392/0x5c77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 166813696 unmapped: 24748032 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 166813696 unmapped: 24748032 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 166813696 unmapped: 24748032 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 438 heartbeat osd_stat(store_statfs(0x4f3696000/0x0/0x4ffc00000, data 0x5c72392/0x5c77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3210393 data_alloc: 234881024 data_used: 26247168
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 166813696 unmapped: 24748032 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 166813696 unmapped: 24748032 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 166813696 unmapped: 24748032 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.018585205s of 17.873167038s, submitted: 41
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 438 ms_handle_reset con 0x564052f96800 session 0x564050e92f00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 438 heartbeat osd_stat(store_statfs(0x4f3696000/0x0/0x4ffc00000, data 0x5c72392/0x5c77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 438 ms_handle_reset con 0x564050e50800 session 0x56405017da40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 438 ms_handle_reset con 0x56404e73b400 session 0x56404e5d6780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 164560896 unmapped: 27000832 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 438 heartbeat osd_stat(store_statfs(0x4f36c1000/0x0/0x4ffc00000, data 0x5c48392/0x5c4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 164560896 unmapped: 27000832 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 438 ms_handle_reset con 0x56404e73cc00 session 0x56404e8b8000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3197705 data_alloc: 234881024 data_used: 26116096
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 164585472 unmapped: 26976256 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 438 ms_handle_reset con 0x564052f96400 session 0x5640508c9860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 164585472 unmapped: 26976256 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 438 ms_handle_reset con 0x56404e856c00 session 0x5640506852c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 438 handle_osd_map epochs [438,439], i have 438, src has [1,439]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 164675584 unmapped: 26886144 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 439 ms_handle_reset con 0x564050e5c000 session 0x564050d5a960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 164675584 unmapped: 26886144 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 439 ms_handle_reset con 0x564050e5c400 session 0x564050f19680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 164675584 unmapped: 26886144 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 439 heartbeat osd_stat(store_statfs(0x4f399d000/0x0/0x4ffc00000, data 0x570ff24/0x5970000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 439 ms_handle_reset con 0x56404e73b400 session 0x5640511623c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3218965 data_alloc: 251658240 data_used: 35082240
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 439 heartbeat osd_stat(store_statfs(0x4f399d000/0x0/0x4ffc00000, data 0x570ff24/0x5970000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 167886848 unmapped: 23674880 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 167886848 unmapped: 23674880 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 439 ms_handle_reset con 0x56404e73cc00 session 0x56404e3d01e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 165822464 unmapped: 25739264 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 165822464 unmapped: 25739264 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 165822464 unmapped: 25739264 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057181 data_alloc: 234881024 data_used: 25690112
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 165822464 unmapped: 25739264 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 439 heartbeat osd_stat(store_statfs(0x4f447b000/0x0/0x4ffc00000, data 0x4c32f24/0x4e93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 439 handle_osd_map epochs [440,440], i have 439, src has [1,440]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.658506393s of 13.120302200s, submitted: 62
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 439 handle_osd_map epochs [440,440], i have 440, src has [1,440]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 165822464 unmapped: 25739264 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 165822464 unmapped: 25739264 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 441 ms_handle_reset con 0x56404e856c00 session 0x56404e8b9e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 164446208 unmapped: 27115520 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 441 heartbeat osd_stat(store_statfs(0x4f4474000/0x0/0x4ffc00000, data 0x4c3653a/0x4e98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 164446208 unmapped: 27115520 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 441 heartbeat osd_stat(store_statfs(0x4f4474000/0x0/0x4ffc00000, data 0x4c3653a/0x4e98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,14,10])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3077313 data_alloc: 234881024 data_used: 25767936
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170164224 unmapped: 21397504 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172720128 unmapped: 18841600 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 441 heartbeat osd_stat(store_statfs(0x4f4476000/0x0/0x4ffc00000, data 0x4c3653a/0x4e98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169271296 unmapped: 22290432 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169394176 unmapped: 22167552 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169402368 unmapped: 22159360 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3231985 data_alloc: 234881024 data_used: 26259456
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 441 heartbeat osd_stat(store_statfs(0x4f2e73000/0x0/0x4ffc00000, data 0x623953a/0x649b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169459712 unmapped: 22102016 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 441 handle_osd_map epochs [442,442], i have 442, src has [1,442]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.335985661s of 10.115025520s, submitted: 116
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169263104 unmapped: 22298624 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169263104 unmapped: 22298624 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 442 ms_handle_reset con 0x564052f96400 session 0x56404e7b05a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 442 ms_handle_reset con 0x564052f96400 session 0x56404f4f2000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169861120 unmapped: 21700608 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169861120 unmapped: 21700608 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3229983 data_alloc: 234881024 data_used: 26333184
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169861120 unmapped: 21700608 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 442 heartbeat osd_stat(store_statfs(0x4f2e70000/0x0/0x4ffc00000, data 0x623af9d/0x649e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 442 heartbeat osd_stat(store_statfs(0x4f2e70000/0x0/0x4ffc00000, data 0x623af9d/0x649e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169861120 unmapped: 21700608 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169861120 unmapped: 21700608 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 442 heartbeat osd_stat(store_statfs(0x4f2e70000/0x0/0x4ffc00000, data 0x623af9d/0x649e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169861120 unmapped: 21700608 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 442 heartbeat osd_stat(store_statfs(0x4f2e70000/0x0/0x4ffc00000, data 0x623af9d/0x649e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169861120 unmapped: 21700608 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3230159 data_alloc: 234881024 data_used: 26333184
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169861120 unmapped: 21700608 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169861120 unmapped: 21700608 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169861120 unmapped: 21700608 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169861120 unmapped: 21700608 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169861120 unmapped: 21700608 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.861826897s of 14.537788391s, submitted: 19
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3272627 data_alloc: 234881024 data_used: 26333184
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 442 heartbeat osd_stat(store_statfs(0x4f2e70000/0x0/0x4ffc00000, data 0x623af9d/0x649e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,7,2])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 173940736 unmapped: 17620992 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 442 heartbeat osd_stat(store_statfs(0x4f2aa6000/0x0/0x4ffc00000, data 0x6604f9d/0x6868000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,8])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170262528 unmapped: 21299200 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 442 ms_handle_reset con 0x5640501a2400 session 0x564050aeb860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170262528 unmapped: 21299200 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 442 heartbeat osd_stat(store_statfs(0x4f2aa6000/0x0/0x4ffc00000, data 0x6604f9d/0x6868000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170262528 unmapped: 21299200 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170262528 unmapped: 21299200 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 442 heartbeat osd_stat(store_statfs(0x4f2aa6000/0x0/0x4ffc00000, data 0x6604f9d/0x6868000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262499 data_alloc: 234881024 data_used: 26333184
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170262528 unmapped: 21299200 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 442 ms_handle_reset con 0x56404f5a2c00 session 0x56404e2f4000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 442 ms_handle_reset con 0x564052f96800 session 0x56404f4a2780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170270720 unmapped: 21291008 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169091072 unmapped: 22470656 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169107456 unmapped: 22454272 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 442 ms_handle_reset con 0x564050aa7800 session 0x564052c83e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 442 heartbeat osd_stat(store_statfs(0x4f2aa6000/0x0/0x4ffc00000, data 0x6604f9d/0x6868000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169115648 unmapped: 22446080 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3260551 data_alloc: 234881024 data_used: 26329088
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169115648 unmapped: 22446080 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 442 heartbeat osd_stat(store_statfs(0x4f2aa7000/0x0/0x4ffc00000, data 0x6604f7a/0x6867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169115648 unmapped: 22446080 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169115648 unmapped: 22446080 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169115648 unmapped: 22446080 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169115648 unmapped: 22446080 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 442 heartbeat osd_stat(store_statfs(0x4f2aa7000/0x0/0x4ffc00000, data 0x6604f7a/0x6867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3260551 data_alloc: 234881024 data_used: 26329088
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169115648 unmapped: 22446080 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 442 heartbeat osd_stat(store_statfs(0x4f2aa7000/0x0/0x4ffc00000, data 0x6604f7a/0x6867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169115648 unmapped: 22446080 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169115648 unmapped: 22446080 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 442 handle_osd_map epochs [442,443], i have 442, src has [1,443]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.095629692s of 17.805044174s, submitted: 35
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f2aa7000/0x0/0x4ffc00000, data 0x6604f7a/0x6867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 ms_handle_reset con 0x56404f5a2c00 session 0x564050b7e000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169115648 unmapped: 22446080 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 ms_handle_reset con 0x564052f96400 session 0x564050f2d860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 ms_handle_reset con 0x5640501a2400 session 0x56404e8b8d20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169115648 unmapped: 22446080 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266522 data_alloc: 234881024 data_used: 26337280
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169115648 unmapped: 22446080 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 22257664 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f2aa2000/0x0/0x4ffc00000, data 0x6606b59/0x686b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 22257664 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 22257664 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 22257664 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3293882 data_alloc: 234881024 data_used: 26963968
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 22257664 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 22257664 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 22257664 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f2aa2000/0x0/0x4ffc00000, data 0x6606b59/0x686b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169336832 unmapped: 22224896 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f2aa2000/0x0/0x4ffc00000, data 0x6606b59/0x686b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169336832 unmapped: 22224896 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3293882 data_alloc: 234881024 data_used: 26963968
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169336832 unmapped: 22224896 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f2aa2000/0x0/0x4ffc00000, data 0x6606b59/0x686b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.059380531s of 13.110032082s, submitted: 5
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 ms_handle_reset con 0x56404f541800 session 0x564050668f00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170631168 unmapped: 20930560 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170819584 unmapped: 20742144 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170909696 unmapped: 20652032 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170909696 unmapped: 20652032 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f26d9000/0x0/0x4ffc00000, data 0x69d0b59/0x6c35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3326943 data_alloc: 234881024 data_used: 27185152
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170909696 unmapped: 20652032 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171352064 unmapped: 20209664 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171393024 unmapped: 20168704 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171393024 unmapped: 20168704 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171393024 unmapped: 20168704 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3336443 data_alloc: 234881024 data_used: 27189248
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171393024 unmapped: 20168704 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f2657000/0x0/0x4ffc00000, data 0x6a52b59/0x6cb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171393024 unmapped: 20168704 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.329282761s of 10.975771904s, submitted: 60
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 ms_handle_reset con 0x564052f96800 session 0x56404e3d0000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 ms_handle_reset con 0x56404e73a000 session 0x5640508bf680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170672128 unmapped: 20889600 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 ms_handle_reset con 0x56404f541800 session 0x56404f591680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170672128 unmapped: 20889600 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170672128 unmapped: 20889600 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3337255 data_alloc: 234881024 data_used: 27197440
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170672128 unmapped: 20889600 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f2623000/0x0/0x4ffc00000, data 0x6a85b59/0x6cea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170672128 unmapped: 20889600 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169910272 unmapped: 21651456 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f2224000/0x0/0x4ffc00000, data 0x6e85b59/0x70ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169910272 unmapped: 21651456 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169910272 unmapped: 21651456 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3371007 data_alloc: 251658240 data_used: 27824128
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169910272 unmapped: 21651456 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f2224000/0x0/0x4ffc00000, data 0x6e85b59/0x70ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169910272 unmapped: 21651456 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 169910272 unmapped: 21651456 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.099229813s of 11.176727295s, submitted: 13
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171073536 unmapped: 20488192 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f2224000/0x0/0x4ffc00000, data 0x6e85b59/0x70ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171073536 unmapped: 20488192 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f2224000/0x0/0x4ffc00000, data 0x6e85b59/0x70ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3371823 data_alloc: 251658240 data_used: 27824128
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171073536 unmapped: 20488192 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 ms_handle_reset con 0x56404f5a2c00 session 0x56404f4a21e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171081728 unmapped: 20480000 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f1e14000/0x0/0x4ffc00000, data 0x6e85b59/0x70ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f1e14000/0x0/0x4ffc00000, data 0x6e85b59/0x70ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171081728 unmapped: 20480000 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171098112 unmapped: 20463616 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f1e14000/0x0/0x4ffc00000, data 0x6e85b59/0x70ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171220992 unmapped: 20340736 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f1e14000/0x0/0x4ffc00000, data 0x6e85b59/0x70ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3373196 data_alloc: 251658240 data_used: 27852800
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171220992 unmapped: 20340736 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171220992 unmapped: 20340736 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171220992 unmapped: 20340736 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171220992 unmapped: 20340736 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f1e14000/0x0/0x4ffc00000, data 0x6e85b59/0x70ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171220992 unmapped: 20340736 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.957584381s of 11.777174950s, submitted: 13
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3373196 data_alloc: 251658240 data_used: 27852800
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171302912 unmapped: 20258816 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171302912 unmapped: 20258816 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171302912 unmapped: 20258816 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171335680 unmapped: 20226048 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f1e12000/0x0/0x4ffc00000, data 0x6e85b59/0x70ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171352064 unmapped: 20209664 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3378748 data_alloc: 251658240 data_used: 28090368
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171352064 unmapped: 20209664 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171352064 unmapped: 20209664 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f1e12000/0x0/0x4ffc00000, data 0x6e85b59/0x70ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171384832 unmapped: 20176896 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171384832 unmapped: 20176896 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171384832 unmapped: 20176896 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3381668 data_alloc: 251658240 data_used: 28090368
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171384832 unmapped: 20176896 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.223710060s of 11.283980370s, submitted: 11
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 ms_handle_reset con 0x56404f540c00 session 0x564050afa1e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 ms_handle_reset con 0x564050ff6c00 session 0x564050aeb2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 ms_handle_reset con 0x56404e73a000 session 0x564050afb0e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171581440 unmapped: 19980288 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171589632 unmapped: 19972096 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f1e08000/0x0/0x4ffc00000, data 0x6e91b59/0x70f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171589632 unmapped: 19972096 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171589632 unmapped: 19972096 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: mgrc ms_handle_reset ms_handle_reset con 0x56404e749800
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2433011577
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2433011577,v1:192.168.122.100:6801/2433011577]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: mgrc handle_mgr_configure stats_period=5
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f1e08000/0x0/0x4ffc00000, data 0x6e91b59/0x70f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3386316 data_alloc: 251658240 data_used: 29257728
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 ms_handle_reset con 0x56404f540c00 session 0x5640508bfa40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171728896 unmapped: 19832832 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171728896 unmapped: 19832832 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171728896 unmapped: 19832832 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f2207000/0x0/0x4ffc00000, data 0x6a92af7/0x6cf6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171761664 unmapped: 19800064 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 ms_handle_reset con 0x56404f541800 session 0x564050f2de00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 ms_handle_reset con 0x56404f5a2c00 session 0x564050f2c780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171761664 unmapped: 19800064 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3357628 data_alloc: 251658240 data_used: 29245440
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171761664 unmapped: 19800064 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f2208000/0x0/0x4ffc00000, data 0x6a92ae7/0x6cf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f2208000/0x0/0x4ffc00000, data 0x6a92ae7/0x6cf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171794432 unmapped: 19767296 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f2208000/0x0/0x4ffc00000, data 0x6a92ae7/0x6cf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f2208000/0x0/0x4ffc00000, data 0x6a92ae7/0x6cf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171794432 unmapped: 19767296 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171794432 unmapped: 19767296 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171794432 unmapped: 19767296 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3357628 data_alloc: 251658240 data_used: 29245440
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171794432 unmapped: 19767296 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171794432 unmapped: 19767296 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f2208000/0x0/0x4ffc00000, data 0x6a92ae7/0x6cf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171794432 unmapped: 19767296 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171794432 unmapped: 19767296 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f2208000/0x0/0x4ffc00000, data 0x6a92ae7/0x6cf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171794432 unmapped: 19767296 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3357628 data_alloc: 251658240 data_used: 29245440
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f2208000/0x0/0x4ffc00000, data 0x6a92ae7/0x6cf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171794432 unmapped: 19767296 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171794432 unmapped: 19767296 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171794432 unmapped: 19767296 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 443 handle_osd_map epochs [444,444], i have 443, src has [1,444]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.164690018s of 22.239620209s, submitted: 28
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171859968 unmapped: 19701760 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171950080 unmapped: 19611648 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3375232 data_alloc: 251658240 data_used: 29511680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171950080 unmapped: 19611648 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f21fd000/0x0/0x4ffc00000, data 0x6b0d6d6/0x6d01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171958272 unmapped: 19603456 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171958272 unmapped: 19603456 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 ms_handle_reset con 0x564050e50400 session 0x564050efda40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 ms_handle_reset con 0x56404f5a3c00 session 0x56404f4f2f00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171958272 unmapped: 19603456 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171958272 unmapped: 19603456 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3377460 data_alloc: 251658240 data_used: 29515776
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171958272 unmapped: 19603456 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171966464 unmapped: 19595264 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f21fd000/0x0/0x4ffc00000, data 0x6b0d6d6/0x6d01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171966464 unmapped: 19595264 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171966464 unmapped: 19595264 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171966464 unmapped: 19595264 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3377460 data_alloc: 251658240 data_used: 29515776
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171966464 unmapped: 19595264 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f21fd000/0x0/0x4ffc00000, data 0x6b0d6d6/0x6d01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171966464 unmapped: 19595264 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171966464 unmapped: 19595264 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 ms_handle_reset con 0x56404e73a000 session 0x56404f4f2960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f21fd000/0x0/0x4ffc00000, data 0x6b0d6d6/0x6d01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171966464 unmapped: 19595264 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 ms_handle_reset con 0x56404f540c00 session 0x56404f4a5e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171966464 unmapped: 19595264 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 ms_handle_reset con 0x56404f541800 session 0x56404e7b1680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f21fd000/0x0/0x4ffc00000, data 0x6b0d6d6/0x6d01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3377460 data_alloc: 251658240 data_used: 29515776
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 ms_handle_reset con 0x56404f5a2c00 session 0x564050158d20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171966464 unmapped: 19595264 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171966464 unmapped: 19595264 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172007424 unmapped: 19554304 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f21fd000/0x0/0x4ffc00000, data 0x6b0d6d6/0x6d01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172023808 unmapped: 19537920 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172023808 unmapped: 19537920 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3377620 data_alloc: 251658240 data_used: 29519872
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172023808 unmapped: 19537920 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172023808 unmapped: 19537920 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172023808 unmapped: 19537920 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f21fd000/0x0/0x4ffc00000, data 0x6b0d6d6/0x6d01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172023808 unmapped: 19537920 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172023808 unmapped: 19537920 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3377620 data_alloc: 251658240 data_used: 29519872
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172023808 unmapped: 19537920 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172023808 unmapped: 19537920 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f21fd000/0x0/0x4ffc00000, data 0x6b0d6d6/0x6d01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172261376 unmapped: 19300352 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172261376 unmapped: 19300352 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f21fd000/0x0/0x4ffc00000, data 0x6b0d6d6/0x6d01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 30.128686905s of 31.448440552s, submitted: 5
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172294144 unmapped: 19267584 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3397900 data_alloc: 251658240 data_used: 31023104
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172343296 unmapped: 19218432 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172343296 unmapped: 19218432 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172343296 unmapped: 19218432 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172343296 unmapped: 19218432 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f21f9000/0x0/0x4ffc00000, data 0x6cd16d6/0x6d05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172376064 unmapped: 19185664 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3398180 data_alloc: 251658240 data_used: 31023104
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172376064 unmapped: 19185664 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172376064 unmapped: 19185664 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172376064 unmapped: 19185664 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f21f9000/0x0/0x4ffc00000, data 0x6cd16d6/0x6d05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172376064 unmapped: 19185664 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f21f9000/0x0/0x4ffc00000, data 0x6cd16d6/0x6d05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172376064 unmapped: 19185664 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3398180 data_alloc: 251658240 data_used: 31023104
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f21f9000/0x0/0x4ffc00000, data 0x6cd16d6/0x6d05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172376064 unmapped: 19185664 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172376064 unmapped: 19185664 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172376064 unmapped: 19185664 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.227652550s of 13.932570457s, submitted: 3
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172851200 unmapped: 18710528 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 ms_handle_reset con 0x5640512a3000 session 0x56404f590960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172851200 unmapped: 18710528 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f218a000/0x0/0x4ffc00000, data 0x6d406d6/0x6d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3408096 data_alloc: 251658240 data_used: 31199232
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172859392 unmapped: 18702336 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172859392 unmapped: 18702336 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172916736 unmapped: 18644992 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172916736 unmapped: 18644992 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172916736 unmapped: 18644992 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3408096 data_alloc: 251658240 data_used: 31199232
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f218a000/0x0/0x4ffc00000, data 0x6d406d6/0x6d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172916736 unmapped: 18644992 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172916736 unmapped: 18644992 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172916736 unmapped: 18644992 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 ms_handle_reset con 0x56404e73a000 session 0x564050efda40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 ms_handle_reset con 0x56404f540c00 session 0x564050e93e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172916736 unmapped: 18644992 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172916736 unmapped: 18644992 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3408096 data_alloc: 251658240 data_used: 31199232
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f218a000/0x0/0x4ffc00000, data 0x6d406d6/0x6d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 ms_handle_reset con 0x56404f5a3c00 session 0x564050f2c780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172916736 unmapped: 18644992 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f218a000/0x0/0x4ffc00000, data 0x6d406d6/0x6d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172916736 unmapped: 18644992 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172916736 unmapped: 18644992 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172916736 unmapped: 18644992 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f218a000/0x0/0x4ffc00000, data 0x6d406d6/0x6d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 ms_handle_reset con 0x564050e63c00 session 0x5640508bfa40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172916736 unmapped: 18644992 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.971099854s of 16.912891388s, submitted: 1
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3407376 data_alloc: 251658240 data_used: 31199232
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172457984 unmapped: 19103744 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 ms_handle_reset con 0x564054010c00 session 0x564050afb0e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f21fa000/0x0/0x4ffc00000, data 0x6cd16c6/0x6d04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172457984 unmapped: 19103744 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 ms_handle_reset con 0x56404e73a000 session 0x564050afa1e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172457984 unmapped: 19103744 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 444 handle_osd_map epochs [445,445], i have 444, src has [1,445]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172531712 unmapped: 19030016 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 445 heartbeat osd_stat(store_statfs(0x4f21fa000/0x0/0x4ffc00000, data 0x6cce235/0x6d03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172539904 unmapped: 19021824 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3387512 data_alloc: 251658240 data_used: 31203328
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172556288 unmapped: 19005440 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 445 ms_handle_reset con 0x5640501a2400 session 0x564050d5af00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172556288 unmapped: 19005440 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 445 ms_handle_reset con 0x564052f96400 session 0x56404db88960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172556288 unmapped: 19005440 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172556288 unmapped: 19005440 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172556288 unmapped: 19005440 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 445 heartbeat osd_stat(store_statfs(0x4f21fe000/0x0/0x4ffc00000, data 0x6a9b235/0x6d00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.301329613s of 10.104992867s, submitted: 27
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3386724 data_alloc: 251658240 data_used: 31203328
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 445 handle_osd_map epochs [445,446], i have 445, src has [1,446]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172564480 unmapped: 18997248 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 446 ms_handle_reset con 0x56404f540c00 session 0x56404f58ef00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172564480 unmapped: 18997248 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172564480 unmapped: 18997248 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172564480 unmapped: 18997248 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172564480 unmapped: 18997248 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3390082 data_alloc: 251658240 data_used: 31211520
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f21fa000/0x0/0x4ffc00000, data 0x6a9cc98/0x6d03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172564480 unmapped: 18997248 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f21fa000/0x0/0x4ffc00000, data 0x6a9cc98/0x6d03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172564480 unmapped: 18997248 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172564480 unmapped: 18997248 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172564480 unmapped: 18997248 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 446 ms_handle_reset con 0x56404f5a3c00 session 0x56404f71e000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 21209088 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3269204 data_alloc: 234881024 data_used: 24633344
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 21209088 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f2a56000/0x0/0x4ffc00000, data 0x6241c98/0x64a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 21209088 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 21209088 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 21209088 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 21209088 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3269204 data_alloc: 234881024 data_used: 24633344
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 21209088 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f2a56000/0x0/0x4ffc00000, data 0x6241c98/0x64a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 21209088 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f2a56000/0x0/0x4ffc00000, data 0x6241c98/0x64a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 21209088 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 21209088 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f2a56000/0x0/0x4ffc00000, data 0x6241c98/0x64a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 21209088 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f2a56000/0x0/0x4ffc00000, data 0x6241c98/0x64a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3269204 data_alloc: 234881024 data_used: 24633344
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 21209088 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f2a56000/0x0/0x4ffc00000, data 0x6241c98/0x64a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 21209088 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 21209088 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 21209088 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 21209088 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3269204 data_alloc: 234881024 data_used: 24633344
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 21209088 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f2a56000/0x0/0x4ffc00000, data 0x6241c98/0x64a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 21209088 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 21209088 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 21209088 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 21209088 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3269204 data_alloc: 234881024 data_used: 24633344
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f2a56000/0x0/0x4ffc00000, data 0x6241c98/0x64a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170360832 unmapped: 21200896 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170360832 unmapped: 21200896 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170369024 unmapped: 21192704 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170369024 unmapped: 21192704 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 446 ms_handle_reset con 0x56404e73a000 session 0x56404f4930e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170369024 unmapped: 21192704 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3269204 data_alloc: 234881024 data_used: 24633344
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170369024 unmapped: 21192704 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f2a56000/0x0/0x4ffc00000, data 0x6241c98/0x64a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.997535706s of 36.544464111s, submitted: 24
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 446 ms_handle_reset con 0x56404f540c00 session 0x564050afbc20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170369024 unmapped: 21192704 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170369024 unmapped: 21192704 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 446 handle_osd_map epochs [447,447], i have 446, src has [1,447]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 447 ms_handle_reset con 0x564052f96400 session 0x5640509385a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 448 ms_handle_reset con 0x5640501a2400 session 0x56404f4f23c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170393600 unmapped: 21168128 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170393600 unmapped: 21168128 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3278805 data_alloc: 234881024 data_used: 24645632
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170393600 unmapped: 21168128 heap: 191561728 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 448 ms_handle_reset con 0x56405093f000 session 0x56405017dc20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 448 ms_handle_reset con 0x564050e63c00 session 0x5640506843c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f2a4d000/0x0/0x4ffc00000, data 0x62453f4/0x64af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 187203584 unmapped: 21159936 heap: 208363520 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 170426368 unmapped: 37937152 heap: 208363520 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171540480 unmapped: 41025536 heap: 212566016 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 175742976 unmapped: 36823040 heap: 212566016 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3641877 data_alloc: 234881024 data_used: 24645632
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171548672 unmapped: 41017344 heap: 212566016 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.336182117s of 10.097176552s, submitted: 27
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171556864 unmapped: 41009152 heap: 212566016 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 448 heartbeat osd_stat(store_statfs(0x4eea4f000/0x0/0x4ffc00000, data 0xa2453f4/0xa4af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,1,0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171556864 unmapped: 41009152 heap: 212566016 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171556864 unmapped: 41009152 heap: 212566016 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171556864 unmapped: 41009152 heap: 212566016 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3810405 data_alloc: 234881024 data_used: 24645632
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171556864 unmapped: 41009152 heap: 212566016 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 448 heartbeat osd_stat(store_statfs(0x4ede4f000/0x0/0x4ffc00000, data 0xae453f4/0xb0af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171556864 unmapped: 41009152 heap: 212566016 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 171556864 unmapped: 41009152 heap: 212566016 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184164352 unmapped: 28401664 heap: 212566016 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 175800320 unmapped: 36765696 heap: 212566016 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 448 heartbeat osd_stat(store_statfs(0x4ea24f000/0x0/0x4ffc00000, data 0xea453f4/0xecaf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4297701 data_alloc: 234881024 data_used: 24645632
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172662784 unmapped: 39903232 heap: 212566016 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.209618092s of 10.015792847s, submitted: 30
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172728320 unmapped: 39837696 heap: 212566016 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 448 heartbeat osd_stat(store_statfs(0x4e624f000/0x0/0x4ffc00000, data 0x12a453f4/0x12caf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 172916736 unmapped: 43851776 heap: 216768512 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 448 ms_handle_reset con 0x56404e73a000 session 0x56404f58f860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 448 ms_handle_reset con 0x56404f540c00 session 0x5640511630e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 173178880 unmapped: 43589632 heap: 216768512 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 173178880 unmapped: 43589632 heap: 216768512 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 448 heartbeat osd_stat(store_statfs(0x4e1a4f000/0x0/0x4ffc00000, data 0x172453f4/0x174af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 448 handle_osd_map epochs [449,449], i have 449, src has [1,449]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 448 handle_osd_map epochs [449,449], i have 449, src has [1,449]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5189171 data_alloc: 234881024 data_used: 24653824
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 173203456 unmapped: 43565056 heap: 216768512 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 173211648 unmapped: 43556864 heap: 216768512 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 449 ms_handle_reset con 0x5640501a2400 session 0x564051163c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 173219840 unmapped: 43548672 heap: 216768512 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 173219840 unmapped: 43548672 heap: 216768512 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 173219840 unmapped: 43548672 heap: 216768512 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 449 heartbeat osd_stat(store_statfs(0x4e1a4d000/0x0/0x4ffc00000, data 0x17246f63/0x174b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5187419 data_alloc: 234881024 data_used: 24649728
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 449 ms_handle_reset con 0x564052f96400 session 0x564050f2d4a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 173219840 unmapped: 43548672 heap: 216768512 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 449 handle_osd_map epochs [450,450], i have 449, src has [1,450]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 450 ms_handle_reset con 0x56404e73a000 session 0x56404efb0f00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 450 handle_osd_map epochs [450,451], i have 450, src has [1,451]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.445109367s of 10.205060005s, submitted: 48
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 173326336 unmapped: 43442176 heap: 216768512 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 451 ms_handle_reset con 0x56404f540c00 session 0x564050f2cd20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 173326336 unmapped: 43442176 heap: 216768512 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 173326336 unmapped: 43442176 heap: 216768512 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 451 ms_handle_reset con 0x5640501a2400 session 0x564050aeaf00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 451 heartbeat osd_stat(store_statfs(0x4e1a46000/0x0/0x4ffc00000, data 0x1724a6f7/0x174b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 173326336 unmapped: 43442176 heap: 216768512 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 451 handle_osd_map epochs [453,453], i have 451, src has [1,453]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 451 handle_osd_map epochs [452,453], i have 451, src has [1,453]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 453 ms_handle_reset con 0x56404eef4800 session 0x564050f2c000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5203856 data_alloc: 234881024 data_used: 24686592
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 173383680 unmapped: 43384832 heap: 216768512 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 453 handle_osd_map epochs [453,454], i have 453, src has [1,454]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 454 ms_handle_reset con 0x564050e63c00 session 0x564050efc3c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 454 ms_handle_reset con 0x56404e73a000 session 0x564050402780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 173400064 unmapped: 43368448 heap: 216768512 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 454 ms_handle_reset con 0x56404f540c00 session 0x564050d5a000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 454 ms_handle_reset con 0x56404eef4800 session 0x564050f2c000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 207224832 unmapped: 17948672 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 454 heartbeat osd_stat(store_statfs(0x4df63c000/0x0/0x4ffc00000, data 0x1964fd60/0x198c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1,0,0,2])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 173727744 unmapped: 51445760 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 173842432 unmapped: 51331072 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5627718 data_alloc: 234881024 data_used: 24686592
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 174088192 unmapped: 51085312 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 454 heartbeat osd_stat(store_statfs(0x4dc23c000/0x0/0x4ffc00000, data 0x1ca4fd60/0x1ccc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.456221581s of 10.012191772s, submitted: 73
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 178495488 unmapped: 46678016 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 182878208 unmapped: 42295296 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 454 heartbeat osd_stat(store_statfs(0x4d923c000/0x0/0x4ffc00000, data 0x1fa4fd60/0x1fcc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 174563328 unmapped: 50610176 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 187424768 unmapped: 37748736 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 454 heartbeat osd_stat(store_statfs(0x4d6e3c000/0x0/0x4ffc00000, data 0x21e4fd60/0x220c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6469158 data_alloc: 234881024 data_used: 24686592
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 174956544 unmapped: 50216960 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179216384 unmapped: 45957120 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 454 heartbeat osd_stat(store_statfs(0x4d5a3c000/0x0/0x4ffc00000, data 0x2324fd60/0x234c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,1,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179355648 unmapped: 45817856 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 183762944 unmapped: 41410560 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 175513600 unmapped: 49659904 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6956214 data_alloc: 234881024 data_used: 24686592
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 175546368 unmapped: 49627136 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 4.334342957s of 10.044053078s, submitted: 36
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179814400 unmapped: 45359104 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 454 heartbeat osd_stat(store_statfs(0x4d223c000/0x0/0x4ffc00000, data 0x26a4fd60/0x26cc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184213504 unmapped: 40960000 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 180043776 unmapped: 45129728 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184434688 unmapped: 40738816 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 454 ms_handle_reset con 0x5640501a2400 session 0x564050aeaf00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7310758 data_alloc: 234881024 data_used: 24690688
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 454 ms_handle_reset con 0x5640512a3800 session 0x564050f2cd20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 454 heartbeat osd_stat(store_statfs(0x4cee3c000/0x0/0x4ffc00000, data 0x29e4fd60/0x2a0c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 454 ms_handle_reset con 0x56404e856000 session 0x56404e5db860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 176046080 unmapped: 49127424 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 176046080 unmapped: 49127424 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 176046080 unmapped: 49127424 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 454 heartbeat osd_stat(store_statfs(0x4cee3c000/0x0/0x4ffc00000, data 0x29e4fd60/0x2a0c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 176046080 unmapped: 49127424 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 454 handle_osd_map epochs [454,455], i have 454, src has [1,455]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 176054272 unmapped: 49119232 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7314932 data_alloc: 234881024 data_used: 24698880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 455 ms_handle_reset con 0x56404e73a000 session 0x56404efb0f00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 176062464 unmapped: 49111040 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 455 ms_handle_reset con 0x56404eef4800 session 0x564051163c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 455 heartbeat osd_stat(store_statfs(0x4cee39000/0x0/0x4ffc00000, data 0x29e51441/0x2a0c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 49102848 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.724014282s of 10.804419518s, submitted: 26
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 176087040 unmapped: 49086464 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 455 heartbeat osd_stat(store_statfs(0x4cee3a000/0x0/0x4ffc00000, data 0x29e51441/0x2a0c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 176087040 unmapped: 49086464 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 176087040 unmapped: 49086464 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 455 handle_osd_map epochs [456,456], i have 455, src has [1,456]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7317348 data_alloc: 234881024 data_used: 24702976
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 456 heartbeat osd_stat(store_statfs(0x4cee3a000/0x0/0x4ffc00000, data 0x29e51441/0x2a0c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 176111616 unmapped: 49061888 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 176128000 unmapped: 49045504 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 176128000 unmapped: 49045504 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 456 heartbeat osd_stat(store_statfs(0x4cee36000/0x0/0x4ffc00000, data 0x29e53012/0x2a0c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 176136192 unmapped: 49037312 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 456 heartbeat osd_stat(store_statfs(0x4cee36000/0x0/0x4ffc00000, data 0x29e53012/0x2a0c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 176136192 unmapped: 49037312 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 456 handle_osd_map epochs [456,457], i have 456, src has [1,457]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 457 heartbeat osd_stat(store_statfs(0x4cee37000/0x0/0x4ffc00000, data 0x29e52fb0/0x2a0c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7319770 data_alloc: 234881024 data_used: 24707072
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 457 ms_handle_reset con 0x56404f540c00 session 0x5640511630e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 176152576 unmapped: 49020928 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 176152576 unmapped: 49020928 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 457 heartbeat osd_stat(store_statfs(0x4cee34000/0x0/0x4ffc00000, data 0x29e54a2f/0x2a0c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 176152576 unmapped: 49020928 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 457 heartbeat osd_stat(store_statfs(0x4cee34000/0x0/0x4ffc00000, data 0x29e54a2f/0x2a0c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 176152576 unmapped: 49020928 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.839026451s of 12.229551315s, submitted: 32
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 176168960 unmapped: 49004544 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 457 ms_handle_reset con 0x5640501a2400 session 0x564050afbc20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 457 handle_osd_map epochs [457,458], i have 457, src has [1,458]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 457 handle_osd_map epochs [458,458], i have 458, src has [1,458]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7323064 data_alloc: 234881024 data_used: 24715264
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179691520 unmapped: 45481984 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 458 heartbeat osd_stat(store_statfs(0x4cee35000/0x0/0x4ffc00000, data 0x29e54a2f/0x2a0c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 458 ms_handle_reset con 0x56404e73a000 session 0x56404f58ef00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179412992 unmapped: 45760512 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 458 ms_handle_reset con 0x56404e856000 session 0x56404f4a5e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179412992 unmapped: 45760512 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179412992 unmapped: 45760512 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179412992 unmapped: 45760512 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 458 heartbeat osd_stat(store_statfs(0x4cee30000/0x0/0x4ffc00000, data 0x29e564a2/0x2a0cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 458 ms_handle_reset con 0x56404eef4800 session 0x56404f4f2960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7322332 data_alloc: 234881024 data_used: 28254208
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179412992 unmapped: 45760512 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179412992 unmapped: 45760512 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 458 ms_handle_reset con 0x56404f540c00 session 0x564050efc3c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179421184 unmapped: 45752320 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 458 ms_handle_reset con 0x5640512a3800 session 0x56404e2f5860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179527680 unmapped: 45645824 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 458 ms_handle_reset con 0x56404e73a000 session 0x56404f4a3c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 458 heartbeat osd_stat(store_statfs(0x4ce5d9000/0x0/0x4ffc00000, data 0x2a6ae4a2/0x2a925000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179527680 unmapped: 45645824 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.967474937s of 10.694049835s, submitted: 32
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7388698 data_alloc: 234881024 data_used: 28254208
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 458 ms_handle_reset con 0x56404eef4800 session 0x5640509381e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 458 handle_osd_map epochs [459,459], i have 458, src has [1,459]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179699712 unmapped: 45473792 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 459 ms_handle_reset con 0x56404f540c00 session 0x564050939680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 459 handle_osd_map epochs [459,460], i have 459, src has [1,460]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 459 handle_osd_map epochs [460,460], i have 460, src has [1,460]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179314688 unmapped: 45858816 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 ms_handle_reset con 0x564052bda000 session 0x56404f591680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 ms_handle_reset con 0x564050e5e000 session 0x56404e5d4d20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179347456 unmapped: 45826048 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 ms_handle_reset con 0x56404e856000 session 0x56404f591a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179347456 unmapped: 45826048 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4ce1bb000/0x0/0x4ffc00000, data 0x2aac62a0/0x2ad42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179347456 unmapped: 45826048 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7433808 data_alloc: 234881024 data_used: 28270592
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179347456 unmapped: 45826048 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 45809664 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4ce1bb000/0x0/0x4ffc00000, data 0x2aac62a0/0x2ad42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 45809664 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 45809664 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4ce1bb000/0x0/0x4ffc00000, data 0x2aac62a0/0x2ad42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 45809664 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 rsyslogd[1011]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7433808 data_alloc: 234881024 data_used: 28270592
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 45809664 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 45809664 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 45809664 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 179372032 unmapped: 45801472 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.311925888s of 13.663395882s, submitted: 35
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 ms_handle_reset con 0x56404e73a000 session 0x5640504021e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 ms_handle_reset con 0x56404eef4800 session 0x564050eba5a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 ms_handle_reset con 0x56404f540c00 session 0x56404e8b85a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 ms_handle_reset con 0x564052bda000 session 0x564050afb680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 ms_handle_reset con 0x56404e73a000 session 0x56404e825e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 181043200 unmapped: 44130304 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4ce0cc000/0x0/0x4ffc00000, data 0x2abb62a0/0x2ae32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7505910 data_alloc: 234881024 data_used: 28270592
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4cd8c9000/0x0/0x4ffc00000, data 0x2b3b92a0/0x2b635000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 181043200 unmapped: 44130304 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 181051392 unmapped: 44122112 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 181051392 unmapped: 44122112 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 181051392 unmapped: 44122112 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 181051392 unmapped: 44122112 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4cd8c9000/0x0/0x4ffc00000, data 0x2b3b92a0/0x2b635000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7508206 data_alloc: 234881024 data_used: 28270592
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 ms_handle_reset con 0x56404e856000 session 0x564050b7fa40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 181411840 unmapped: 43761664 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 181420032 unmapped: 43753472 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 181682176 unmapped: 43491328 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185638912 unmapped: 39534592 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4cd89f000/0x0/0x4ffc00000, data 0x2b3e32a0/0x2b65f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184860672 unmapped: 40312832 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7574766 data_alloc: 251658240 data_used: 37560320
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184860672 unmapped: 40312832 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184860672 unmapped: 40312832 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4cd89f000/0x0/0x4ffc00000, data 0x2b3e32a0/0x2b65f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184868864 unmapped: 40304640 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184868864 unmapped: 40304640 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184868864 unmapped: 40304640 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7574766 data_alloc: 251658240 data_used: 37560320
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184868864 unmapped: 40304640 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184877056 unmapped: 40296448 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4cd89f000/0x0/0x4ffc00000, data 0x2b3e32a0/0x2b65f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184877056 unmapped: 40296448 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184885248 unmapped: 40288256 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.680301666s of 20.170330048s, submitted: 13
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 187826176 unmapped: 37347328 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7682314 data_alloc: 251658240 data_used: 37617664
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186449920 unmapped: 38723584 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 187580416 unmapped: 37593088 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4cd015000/0x0/0x4ffc00000, data 0x2c4be2a0/0x2bee8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4cd015000/0x0/0x4ffc00000, data 0x2c4be2a0/0x2bee8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 187613184 unmapped: 37560320 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4cd015000/0x0/0x4ffc00000, data 0x2c4be2a0/0x2bee8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 187891712 unmapped: 37281792 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4ccfd4000/0x0/0x4ffc00000, data 0x2c4ff2a0/0x2bf29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,2])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 187629568 unmapped: 37543936 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4ccfd2000/0x0/0x4ffc00000, data 0x2c5002a0/0x2bf2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,2])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7714774 data_alloc: 251658240 data_used: 38543360
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 187809792 unmapped: 37363712 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 188293120 unmapped: 36880384 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 36773888 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 189661184 unmapped: 35512320 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4ccf86000/0x0/0x4ffc00000, data 0x2c5482a0/0x2bf72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 189816832 unmapped: 35356672 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7721178 data_alloc: 251658240 data_used: 38879232
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 189816832 unmapped: 35356672 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 189816832 unmapped: 35356672 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4ccf86000/0x0/0x4ffc00000, data 0x2c5482a0/0x2bf72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 189816832 unmapped: 35356672 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4ccf86000/0x0/0x4ffc00000, data 0x2c5482a0/0x2bf72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 189849600 unmapped: 35323904 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.471385002s of 15.919590950s, submitted: 85
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 189849600 unmapped: 35323904 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7720258 data_alloc: 251658240 data_used: 38879232
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 189849600 unmapped: 35323904 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 189849600 unmapped: 35323904 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4ccf6e000/0x0/0x4ffc00000, data 0x2c5662a0/0x2bf90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 189882368 unmapped: 35291136 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 189882368 unmapped: 35291136 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 189898752 unmapped: 35274752 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7720258 data_alloc: 251658240 data_used: 38879232
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 189898752 unmapped: 35274752 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4ccf6e000/0x0/0x4ffc00000, data 0x2c5662a0/0x2bf90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 189898752 unmapped: 35274752 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4ccf6e000/0x0/0x4ffc00000, data 0x2c5662a0/0x2bf90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 189898752 unmapped: 35274752 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4ccf6e000/0x0/0x4ffc00000, data 0x2c5662a0/0x2bf90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 189915136 unmapped: 35258368 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4ccf6e000/0x0/0x4ffc00000, data 0x2c5662a0/0x2bf90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 ms_handle_reset con 0x56404eef4800 session 0x56404e90b680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 ms_handle_reset con 0x56404f540c00 session 0x56404f591e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 189915136 unmapped: 35258368 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 ms_handle_reset con 0x56404f5a2800 session 0x564050aeb860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7720258 data_alloc: 251658240 data_used: 38879232
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 189915136 unmapped: 35258368 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 189915136 unmapped: 35258368 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4ccf6e000/0x0/0x4ffc00000, data 0x2c5662a0/0x2bf90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 189915136 unmapped: 35258368 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 189931520 unmapped: 35241984 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 189931520 unmapped: 35241984 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7720258 data_alloc: 251658240 data_used: 38879232
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 189931520 unmapped: 35241984 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.307693481s of 16.327440262s, submitted: 2
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4ccf6e000/0x0/0x4ffc00000, data 0x2c0c02a0/0x2baea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,4])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 40984576 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4ccff0000/0x0/0x4ffc00000, data 0x2bc732a0/0x2b69d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 40984576 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 40984576 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 40984576 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7585980 data_alloc: 251658240 data_used: 29585408
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 40984576 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 40984576 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 40984576 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4cd861000/0x0/0x4ffc00000, data 0x2bc732a0/0x2b69d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 40984576 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 40984576 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7585980 data_alloc: 251658240 data_used: 29585408
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 40984576 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 40984576 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4cd861000/0x0/0x4ffc00000, data 0x2bc732a0/0x2b69d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 40984576 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.226688385s of 11.930856705s, submitted: 6
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4cd861000/0x0/0x4ffc00000, data 0x2bc492a0/0x2b673000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 40984576 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4cd88b000/0x0/0x4ffc00000, data 0x2bc492a0/0x2b673000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 40984576 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7578267 data_alloc: 251658240 data_used: 29450240
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 ms_handle_reset con 0x56404e73a000 session 0x56404e8a72c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 40984576 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 40984576 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 40984576 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.3 total, 600.0 interval#012Cumulative writes: 21K writes, 82K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 21K writes, 7225 syncs, 2.95 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3209 writes, 14K keys, 3209 commit groups, 1.0 writes per commit group, ingest: 11.63 MB, 0.02 MB/s#012Interval WAL: 3210 writes, 1274 syncs, 2.52 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 40984576 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 heartbeat osd_stat(store_statfs(0x4cd88b000/0x0/0x4ffc00000, data 0x2bc492a0/0x2b673000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 ms_handle_reset con 0x56404eef4800 session 0x5640508bfe00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 40984576 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7578267 data_alloc: 251658240 data_used: 29450240
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 40984576 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 460 handle_osd_map epochs [460,461], i have 460, src has [1,461]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 461 ms_handle_reset con 0x56404f540c00 session 0x564050938780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184737792 unmapped: 40435712 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 461 handle_osd_map epochs [462,462], i have 461, src has [1,462]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 462 ms_handle_reset con 0x5640512a8c00 session 0x56404e8b9e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 462 ms_handle_reset con 0x56404e856000 session 0x56404f58e000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184762368 unmapped: 40411136 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.815228462s of 10.442969322s, submitted: 56
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 462 handle_osd_map epochs [463,463], i have 462, src has [1,463]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 463 ms_handle_reset con 0x56404e73a000 session 0x564050427c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 463 heartbeat osd_stat(store_statfs(0x4cd3c2000/0x0/0x4ffc00000, data 0x2c52099a/0x2bb3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184803328 unmapped: 40370176 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 463 heartbeat osd_stat(store_statfs(0x4cd3c0000/0x0/0x4ffc00000, data 0x2c52256b/0x2bb3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184803328 unmapped: 40370176 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7660455 data_alloc: 251658240 data_used: 29466624
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 463 handle_osd_map epochs [464,464], i have 463, src has [1,464]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185876480 unmapped: 39297024 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 464 ms_handle_reset con 0x56404eef4800 session 0x564050ebaf00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 464 ms_handle_reset con 0x56404f540c00 session 0x56404e8241e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185884672 unmapped: 39288832 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 464 ms_handle_reset con 0x5640512a8c00 session 0x56404e60ab40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185892864 unmapped: 39280640 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185892864 unmapped: 39280640 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 464 handle_osd_map epochs [464,465], i have 464, src has [1,465]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 464 handle_osd_map epochs [465,465], i have 465, src has [1,465]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 465 ms_handle_reset con 0x56404f477400 session 0x564050aeb4a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 465 heartbeat osd_stat(store_statfs(0x4cdc92000/0x0/0x4ffc00000, data 0x2b83ba9a/0x2b269000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 465 ms_handle_reset con 0x56404eef4800 session 0x564050afb680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 465 ms_handle_reset con 0x56404e73a000 session 0x56404e2f4000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185917440 unmapped: 39256064 heap: 225173504 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7576962 data_alloc: 251658240 data_used: 29474816
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 215392256 unmapped: 18178048 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 190406656 unmapped: 43163648 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 465 handle_osd_map epochs [465,466], i have 465, src has [1,466]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 194994176 unmapped: 38576128 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.113274574s of 10.025794983s, submitted: 123
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 191021056 unmapped: 42549248 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 466 heartbeat osd_stat(store_statfs(0x4c688c000/0x0/0x4ffc00000, data 0x32c3f124/0x32671000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1,1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 195289088 unmapped: 38281216 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 8450072 data_alloc: 251658240 data_used: 29478912
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 187072512 unmapped: 46497792 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 187310080 unmapped: 46260224 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 466 handle_osd_map epochs [467,467], i have 466, src has [1,467]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 187727872 unmapped: 45842432 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 192323584 unmapped: 41246720 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 467 heartbeat osd_stat(store_statfs(0x4bd88a000/0x0/0x4ffc00000, data 0x3bc40b87/0x3b674000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,1,0,2])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 467 ms_handle_reset con 0x5640512a8c00 session 0x56404e8b85a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 467 ms_handle_reset con 0x56404f540c00 session 0x56404f58f680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 188424192 unmapped: 45146112 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 467 ms_handle_reset con 0x564050e5d000 session 0x56404e5da000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 467 ms_handle_reset con 0x56404e73a000 session 0x56404f5910e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 9657918 data_alloc: 251658240 data_used: 29487104
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 188604416 unmapped: 44965888 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 467 ms_handle_reset con 0x56404eef4800 session 0x56404e90a1e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 467 ms_handle_reset con 0x56404f540c00 session 0x564050684000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 188604416 unmapped: 44965888 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 468 ms_handle_reset con 0x5640512a8c00 session 0x56404e825860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 187596800 unmapped: 45973504 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 468 ms_handle_reset con 0x56405183bc00 session 0x5640511623c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 468 ms_handle_reset con 0x56404e73a000 session 0x5640508bf2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 187629568 unmapped: 45940736 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.566242218s of 10.971113205s, submitted: 212
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 468 handle_osd_map epochs [468,469], i have 468, src has [1,469]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 187727872 unmapped: 45842432 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 469 heartbeat osd_stat(store_statfs(0x4cdc8c000/0x0/0x4ffc00000, data 0x2afec265/0x2b271000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7630546 data_alloc: 251658240 data_used: 29474816
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 469 handle_osd_map epochs [469,470], i have 469, src has [1,470]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 470 ms_handle_reset con 0x56404eef4800 session 0x56404f71f4a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 470 ms_handle_reset con 0x56404f540c00 session 0x56404e60af00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 187768832 unmapped: 45801472 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 187777024 unmapped: 45793280 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 470 handle_osd_map epochs [470,471], i have 470, src has [1,471]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 187777024 unmapped: 45793280 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 471 handle_osd_map epochs [472,472], i have 471, src has [1,472]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 187858944 unmapped: 45711360 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 472 ms_handle_reset con 0x5640512a8c00 session 0x5640508c9680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 187916288 unmapped: 45654016 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 472 heartbeat osd_stat(store_statfs(0x4d9a06000/0x0/0x4ffc00000, data 0x1e66e5d8/0x1e8f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5464424 data_alloc: 234881024 data_used: 28323840
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 187916288 unmapped: 45654016 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 472 handle_osd_map epochs [473,473], i have 472, src has [1,473]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184942592 unmapped: 48627712 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 473 handle_osd_map epochs [473,474], i have 473, src has [1,474]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 474 ms_handle_reset con 0x56405183bc00 session 0x564050f19860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 49086464 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 474 heartbeat osd_stat(store_statfs(0x4f2a00000/0x0/0x4ffc00000, data 0x6271c28/0x64fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 49086464 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 49086464 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3637636 data_alloc: 234881024 data_used: 28323840
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 474 heartbeat osd_stat(store_statfs(0x4f2a00000/0x0/0x4ffc00000, data 0x6271c28/0x64fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 49086464 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 49086464 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 474 handle_osd_map epochs [474,475], i have 474, src has [1,475]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.249728203s of 12.693314552s, submitted: 222
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 49086464 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 49086464 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 49086464 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3639938 data_alloc: 234881024 data_used: 28323840
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 475 heartbeat osd_stat(store_statfs(0x4f29fe000/0x0/0x4ffc00000, data 0x627368b/0x64ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 49086464 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 49086464 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 475 ms_handle_reset con 0x56404e73a000 session 0x564050426960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 475 heartbeat osd_stat(store_statfs(0x4f29fe000/0x0/0x4ffc00000, data 0x627368b/0x64ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 49086464 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 475 heartbeat osd_stat(store_statfs(0x4f29fe000/0x0/0x4ffc00000, data 0x627368b/0x64ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 49086464 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 475 ms_handle_reset con 0x56404eef4800 session 0x564050f192c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 49086464 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3642310 data_alloc: 234881024 data_used: 28323840
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 475 handle_osd_map epochs [475,476], i have 475, src has [1,476]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 49086464 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 476 heartbeat osd_stat(store_statfs(0x4f29fe000/0x0/0x4ffc00000, data 0x62736ed/0x6500000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 476 ms_handle_reset con 0x5640512a8c00 session 0x564050f2cd20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184492032 unmapped: 49078272 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 476 handle_osd_map epochs [477,477], i have 476, src has [1,477]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.228434563s of 10.724776268s, submitted: 18
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 477 ms_handle_reset con 0x56404f540c00 session 0x56404f71eb40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 49086464 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 477 ms_handle_reset con 0x56405093e800 session 0x56404e5d7c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 49086464 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 49086464 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3651976 data_alloc: 234881024 data_used: 28336128
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 477 ms_handle_reset con 0x56404e73a000 session 0x56404f590780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 49086464 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 477 handle_osd_map epochs [477,478], i have 477, src has [1,478]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 478 heartbeat osd_stat(store_statfs(0x4f29f3000/0x0/0x4ffc00000, data 0x6278e58/0x650a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 478 ms_handle_reset con 0x56404eef4800 session 0x564050427a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184492032 unmapped: 49078272 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 478 handle_osd_map epochs [479,479], i have 478, src has [1,479]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 479 ms_handle_reset con 0x56404f540c00 session 0x56404e7b05a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184492032 unmapped: 49078272 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 479 handle_osd_map epochs [480,480], i have 479, src has [1,480]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 480 ms_handle_reset con 0x564050e4f000 session 0x56404e3d0000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 480 ms_handle_reset con 0x5640512a8c00 session 0x56404e90b2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184565760 unmapped: 49004544 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184565760 unmapped: 49004544 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3665615 data_alloc: 234881024 data_used: 28348416
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 480 ms_handle_reset con 0x56404e73a000 session 0x56404e5d5e00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184565760 unmapped: 49004544 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 480 heartbeat osd_stat(store_statfs(0x4f29e9000/0x0/0x4ffc00000, data 0x627c608/0x6511000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 480 handle_osd_map epochs [481,481], i have 480, src has [1,481]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184565760 unmapped: 49004544 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 481 ms_handle_reset con 0x564050e4f000 session 0x564050d5b860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 481 handle_osd_map epochs [481,482], i have 481, src has [1,482]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.071418762s of 10.014479637s, submitted: 59
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 482 ms_handle_reset con 0x56404f540c00 session 0x56404e8250e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 482 ms_handle_reset con 0x56404eef4800 session 0x56404f71fa40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184573952 unmapped: 48996352 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184573952 unmapped: 48996352 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184582144 unmapped: 48988160 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 482 handle_osd_map epochs [483,483], i have 482, src has [1,483]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 483 ms_handle_reset con 0x5640512a3400 session 0x56404e8b8000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 483 heartbeat osd_stat(store_statfs(0x4f29e6000/0x0/0x4ffc00000, data 0x627fd3a/0x6517000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3672298 data_alloc: 234881024 data_used: 28295168
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 483 ms_handle_reset con 0x5640512a8c00 session 0x56404e60ba40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 483 ms_handle_reset con 0x56404e73a000 session 0x5640508c9a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184606720 unmapped: 48963584 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 483 ms_handle_reset con 0x56404eef4800 session 0x564050eba5a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 483 heartbeat osd_stat(store_statfs(0x4f29e4000/0x0/0x4ffc00000, data 0x6281a5f/0x6519000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184606720 unmapped: 48963584 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184606720 unmapped: 48963584 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 483 handle_osd_map epochs [484,484], i have 483, src has [1,484]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184606720 unmapped: 48963584 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 484 ms_handle_reset con 0x56404f540c00 session 0x564050668780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184606720 unmapped: 48963584 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 484 handle_osd_map epochs [484,485], i have 484, src has [1,485]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 485 ms_handle_reset con 0x564050e4f000 session 0x5640509394a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3680282 data_alloc: 234881024 data_used: 28299264
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f29dc000/0x0/0x4ffc00000, data 0x62850be/0x6520000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184623104 unmapped: 48947200 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 485 ms_handle_reset con 0x564052f97400 session 0x564052c83a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 485 handle_osd_map epochs [486,487], i have 485, src has [1,487]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184672256 unmapped: 48898048 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 487 heartbeat osd_stat(store_statfs(0x4f29d7000/0x0/0x4ffc00000, data 0x628870e/0x6526000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184672256 unmapped: 48898048 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184672256 unmapped: 48898048 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 487 handle_osd_map epochs [488,488], i have 487, src has [1,488]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.212979317s of 12.258154869s, submitted: 80
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 488 heartbeat osd_stat(store_statfs(0x4f29d7000/0x0/0x4ffc00000, data 0x628870e/0x6526000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184672256 unmapped: 48898048 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3688356 data_alloc: 234881024 data_used: 28299264
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 488 ms_handle_reset con 0x56404e73a000 session 0x5640508c9680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184672256 unmapped: 48898048 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184672256 unmapped: 48898048 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 488 heartbeat osd_stat(store_statfs(0x4f29d4000/0x0/0x4ffc00000, data 0x628a2df/0x6529000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 488 ms_handle_reset con 0x56404f540c00 session 0x564050afaf00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 184680448 unmapped: 48889856 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 488 handle_osd_map epochs [489,489], i have 488, src has [1,489]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 489 ms_handle_reset con 0x56404e673c00 session 0x56404e3d1a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185737216 unmapped: 47833088 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 489 ms_handle_reset con 0x5640512a8c00 session 0x5640508c9c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 489 handle_osd_map epochs [490,490], i have 489, src has [1,490]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 490 ms_handle_reset con 0x56404eef4800 session 0x56404e60af00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185745408 unmapped: 47824896 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3697141 data_alloc: 234881024 data_used: 28311552
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185753600 unmapped: 47816704 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 490 handle_osd_map epochs [491,491], i have 490, src has [1,491]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 491 ms_handle_reset con 0x5640512a8c00 session 0x56404e5da000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 491 ms_handle_reset con 0x56404e673c00 session 0x56404e8241e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 491 handle_osd_map epochs [491,492], i have 491, src has [1,492]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 492 ms_handle_reset con 0x56404e73a000 session 0x5640508c92c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 492 heartbeat osd_stat(store_statfs(0x4f29ca000/0x0/0x4ffc00000, data 0x628f600/0x6533000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185860096 unmapped: 47710208 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185868288 unmapped: 47702016 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 492 ms_handle_reset con 0x564050e55c00 session 0x564050e923c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 492 ms_handle_reset con 0x564052f97400 session 0x56404f492960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 492 ms_handle_reset con 0x56404e673c00 session 0x56404e60b2c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185892864 unmapped: 47677440 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 492 handle_osd_map epochs [492,493], i have 492, src has [1,493]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 492 handle_osd_map epochs [493,493], i have 493, src has [1,493]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 493 ms_handle_reset con 0x56404e73a000 session 0x56404e2f5860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 493 ms_handle_reset con 0x56404eef4800 session 0x564050f194a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 493 ms_handle_reset con 0x564050e55c00 session 0x5640506683c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.810751915s of 10.130002975s, submitted: 168
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185909248 unmapped: 47661056 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 493 ms_handle_reset con 0x56404f540c00 session 0x5640501592c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3712245 data_alloc: 234881024 data_used: 28336128
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185909248 unmapped: 47661056 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f25b2000/0x0/0x4ffc00000, data 0x6292cdc/0x653b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 493 handle_osd_map epochs [494,494], i have 493, src has [1,494]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 494 ms_handle_reset con 0x564050e55c00 session 0x5640501594a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185917440 unmapped: 47652864 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 494 ms_handle_reset con 0x56404e673c00 session 0x5640509385a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f25b0000/0x0/0x4ffc00000, data 0x629485b/0x653d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185917440 unmapped: 47652864 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f25b1000/0x0/0x4ffc00000, data 0x62947f9/0x653c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 494 handle_osd_map epochs [494,495], i have 494, src has [1,495]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185925632 unmapped: 47644672 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185925632 unmapped: 47644672 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3714835 data_alloc: 234881024 data_used: 28348416
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185925632 unmapped: 47644672 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 495 ms_handle_reset con 0x56404e73a000 session 0x564050b7fa40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185925632 unmapped: 47644672 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 495 handle_osd_map epochs [496,496], i have 495, src has [1,496]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185942016 unmapped: 47628288 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185942016 unmapped: 47628288 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 496 handle_osd_map epochs [497,497], i have 496, src has [1,497]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 497 ms_handle_reset con 0x56404eef4800 session 0x5640506852c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 497 heartbeat osd_stat(store_statfs(0x4f25ac000/0x0/0x4ffc00000, data 0x6297e83/0x6541000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185950208 unmapped: 47620096 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3721891 data_alloc: 234881024 data_used: 28356608
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.838029861s of 10.753778458s, submitted: 109
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 497 heartbeat osd_stat(store_statfs(0x4f25a9000/0x0/0x4ffc00000, data 0x6299a61/0x6543000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185958400 unmapped: 47611904 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 497 handle_osd_map epochs [497,498], i have 497, src has [1,498]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 497 handle_osd_map epochs [498,498], i have 498, src has [1,498]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 498 ms_handle_reset con 0x56404e673c00 session 0x56404e90a3c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 498 ms_handle_reset con 0x56404e73a000 session 0x56404f4a34a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 498 handle_osd_map epochs [498,499], i have 498, src has [1,499]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185982976 unmapped: 47587328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185982976 unmapped: 47587328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185982976 unmapped: 47587328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 499 handle_osd_map epochs [500,500], i have 499, src has [1,500]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185982976 unmapped: 47587328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3728441 data_alloc: 234881024 data_used: 28364800
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185982976 unmapped: 47587328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 500 heartbeat osd_stat(store_statfs(0x4f25a3000/0x0/0x4ffc00000, data 0x629e7cc/0x654a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 500 handle_osd_map epochs [501,501], i have 500, src has [1,501]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 500 handle_osd_map epochs [501,501], i have 501, src has [1,501]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 501 ms_handle_reset con 0x56404f540c00 session 0x56404e2f45a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185991168 unmapped: 47579136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185991168 unmapped: 47579136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 501 heartbeat osd_stat(store_statfs(0x4f25a0000/0x0/0x4ffc00000, data 0x62a03c9/0x654d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 501 handle_osd_map epochs [502,502], i have 501, src has [1,502]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185991168 unmapped: 47579136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 502 heartbeat osd_stat(store_statfs(0x4f259c000/0x0/0x4ffc00000, data 0x62a1e64/0x6550000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185991168 unmapped: 47579136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3735061 data_alloc: 234881024 data_used: 28364800
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185991168 unmapped: 47579136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185991168 unmapped: 47579136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 502 handle_osd_map epochs [503,503], i have 502, src has [1,503]
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.318342209s of 12.303573608s, submitted: 99
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185999360 unmapped: 47570944 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185999360 unmapped: 47570944 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185999360 unmapped: 47570944 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3747347 data_alloc: 234881024 data_used: 28364800
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185999360 unmapped: 47570944 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185999360 unmapped: 47570944 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185999360 unmapped: 47570944 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185999360 unmapped: 47570944 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 185999360 unmapped: 47570944 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3747347 data_alloc: 234881024 data_used: 28364800
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 47562752 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 47562752 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 47562752 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 47562752 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 47562752 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3747347 data_alloc: 234881024 data_used: 28364800
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 47562752 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 47562752 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 47562752 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 47562752 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 47562752 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3747347 data_alloc: 234881024 data_used: 28364800
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 47562752 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 47562752 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 47562752 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 47562752 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 47562752 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3747347 data_alloc: 234881024 data_used: 28364800
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 47562752 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 47562752 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 47562752 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 47562752 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186015744 unmapped: 47554560 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3747347 data_alloc: 234881024 data_used: 28364800
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186015744 unmapped: 47554560 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186023936 unmapped: 47546368 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186023936 unmapped: 47546368 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186023936 unmapped: 47546368 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186023936 unmapped: 47546368 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3747347 data_alloc: 234881024 data_used: 28364800
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186023936 unmapped: 47546368 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186023936 unmapped: 47546368 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186032128 unmapped: 47538176 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186032128 unmapped: 47538176 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186032128 unmapped: 47538176 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 37.345314026s of 37.358398438s, submitted: 19
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x564050e55c00 session 0x564050f2c1e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3749129 data_alloc: 234881024 data_used: 28364800
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186032128 unmapped: 47538176 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x564052f97400 session 0x564050685680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186032128 unmapped: 47538176 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f2599000/0x0/0x4ffc00000, data 0x62a3961/0x6554000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56404e673c00 session 0x56404e5da3c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186073088 unmapped: 47497216 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186073088 unmapped: 47497216 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56404e73a000 session 0x564052c82780
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259a000/0x0/0x4ffc00000, data 0x62a3961/0x6554000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186081280 unmapped: 47489024 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3748285 data_alloc: 234881024 data_used: 28364800
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186081280 unmapped: 47489024 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186081280 unmapped: 47489024 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186081280 unmapped: 47489024 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186081280 unmapped: 47489024 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259b000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186081280 unmapped: 47489024 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3748285 data_alloc: 234881024 data_used: 28364800
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186081280 unmapped: 47489024 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186089472 unmapped: 47480832 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186089472 unmapped: 47480832 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186089472 unmapped: 47480832 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259b000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186089472 unmapped: 47480832 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259b000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3748285 data_alloc: 234881024 data_used: 28364800
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186089472 unmapped: 47480832 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186089472 unmapped: 47480832 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186089472 unmapped: 47480832 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186089472 unmapped: 47480832 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186097664 unmapped: 47472640 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3748285 data_alloc: 234881024 data_used: 28364800
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186097664 unmapped: 47472640 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259b000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186097664 unmapped: 47472640 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186097664 unmapped: 47472640 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186097664 unmapped: 47472640 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259b000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186097664 unmapped: 47472640 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3748285 data_alloc: 234881024 data_used: 28364800
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186097664 unmapped: 47472640 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259b000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186097664 unmapped: 47472640 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259b000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186097664 unmapped: 47472640 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259b000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186097664 unmapped: 47472640 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186097664 unmapped: 47472640 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3748285 data_alloc: 234881024 data_used: 28364800
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186097664 unmapped: 47472640 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186097664 unmapped: 47472640 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186097664 unmapped: 47472640 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259b000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186097664 unmapped: 47472640 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186105856 unmapped: 47464448 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3748285 data_alloc: 234881024 data_used: 28364800
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186105856 unmapped: 47464448 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186105856 unmapped: 47464448 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259b000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186105856 unmapped: 47464448 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186105856 unmapped: 47464448 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259b000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186114048 unmapped: 47456256 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3748285 data_alloc: 234881024 data_used: 28364800
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186114048 unmapped: 47456256 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186114048 unmapped: 47456256 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 42.509319305s of 42.779483795s, submitted: 16
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56404f540c00 session 0x564050f2dc20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x564050e55c00 session 0x5640509390e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186482688 unmapped: 47087616 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186482688 unmapped: 47087616 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259b000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186482688 unmapped: 47087616 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3751005 data_alloc: 234881024 data_used: 28430336
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186490880 unmapped: 47079424 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186490880 unmapped: 47079424 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f259b000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186490880 unmapped: 47079424 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186490880 unmapped: 47079424 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186490880 unmapped: 47079424 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3756319 data_alloc: 234881024 data_used: 28430336
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186753024 unmapped: 46817280 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f13ba000/0x0/0x4ffc00000, data 0x62e390f/0x6594000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x564052f97400 session 0x564050afbc20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186753024 unmapped: 46817280 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186753024 unmapped: 46817280 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f13ba000/0x0/0x4ffc00000, data 0x62e390f/0x6594000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186753024 unmapped: 46817280 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186753024 unmapped: 46817280 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3756319 data_alloc: 234881024 data_used: 28430336
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f13ba000/0x0/0x4ffc00000, data 0x62e390f/0x6594000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186753024 unmapped: 46817280 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186753024 unmapped: 46817280 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186753024 unmapped: 46817280 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186761216 unmapped: 46809088 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f13ba000/0x0/0x4ffc00000, data 0x62e390f/0x6594000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186761216 unmapped: 46809088 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3756319 data_alloc: 234881024 data_used: 28430336
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186761216 unmapped: 46809088 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186761216 unmapped: 46809088 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186761216 unmapped: 46809088 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186761216 unmapped: 46809088 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f13ba000/0x0/0x4ffc00000, data 0x62e390f/0x6594000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186761216 unmapped: 46809088 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3756319 data_alloc: 234881024 data_used: 28430336
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186761216 unmapped: 46809088 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186761216 unmapped: 46809088 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186761216 unmapped: 46809088 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f13ba000/0x0/0x4ffc00000, data 0x62e390f/0x6594000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186761216 unmapped: 46809088 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56404e673c00 session 0x56404e90a3c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186761216 unmapped: 46809088 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3756319 data_alloc: 234881024 data_used: 28430336
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.932964325s of 27.968072891s, submitted: 7
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f13fb000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56404e73a000 session 0x5640506852c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186761216 unmapped: 46809088 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 190988288 unmapped: 42582016 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f03f9000/0x0/0x4ffc00000, data 0x72a3939/0x7555000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [0,0,0,0,0,0,0,3,2])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186851328 unmapped: 46718976 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56404f540c00 session 0x564050b7fa40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x564050e55c00 session 0x564050e923c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4efc39000/0x0/0x4ffc00000, data 0x8aa3971/0x8d55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186851328 unmapped: 46718976 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186851328 unmapped: 46718976 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4037070 data_alloc: 234881024 data_used: 28430336
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186851328 unmapped: 46718976 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186851328 unmapped: 46718976 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186859520 unmapped: 46710784 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186859520 unmapped: 46710784 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4efc39000/0x0/0x4ffc00000, data 0x8aa3971/0x8d55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186867712 unmapped: 46702592 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4037070 data_alloc: 234881024 data_used: 28430336
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186867712 unmapped: 46702592 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186867712 unmapped: 46702592 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186867712 unmapped: 46702592 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x5640512a8c00 session 0x56404e8241e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186867712 unmapped: 46702592 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4efc39000/0x0/0x4ffc00000, data 0x8aa3971/0x8d55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56404e673c00 session 0x56404e5da000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186867712 unmapped: 46702592 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4037070 data_alloc: 234881024 data_used: 28430336
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4efc39000/0x0/0x4ffc00000, data 0x8aa3971/0x8d55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56404e73a000 session 0x5640508c9c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.696381569s of 15.392226219s, submitted: 48
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56404f540c00 session 0x56404e3d1a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186867712 unmapped: 46702592 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186875904 unmapped: 46694400 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186875904 unmapped: 46694400 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4efc38000/0x0/0x4ffc00000, data 0x8aa3981/0x8d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186875904 unmapped: 46694400 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186875904 unmapped: 46694400 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4039068 data_alloc: 234881024 data_used: 28434432
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4efc38000/0x0/0x4ffc00000, data 0x8aa3981/0x8d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [1])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186892288 unmapped: 46678016 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4efc38000/0x0/0x4ffc00000, data 0x8aa3981/0x8d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186892288 unmapped: 46678016 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4efc38000/0x0/0x4ffc00000, data 0x8aa3981/0x8d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186892288 unmapped: 46678016 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186892288 unmapped: 46678016 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186892288 unmapped: 46678016 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4055228 data_alloc: 251658240 data_used: 30752768
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186892288 unmapped: 46678016 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4efc38000/0x0/0x4ffc00000, data 0x8aa3981/0x8d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186892288 unmapped: 46678016 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186892288 unmapped: 46678016 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186892288 unmapped: 46678016 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4efc38000/0x0/0x4ffc00000, data 0x8aa3981/0x8d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186892288 unmapped: 46678016 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4055228 data_alloc: 251658240 data_used: 30752768
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 186892288 unmapped: 46678016 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4efc38000/0x0/0x4ffc00000, data 0x8aa3981/0x8d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.145092010s of 16.153749466s, submitted: 1
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 192454656 unmapped: 41115648 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197140480 unmapped: 36429824 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4efc18000/0x0/0x4ffc00000, data 0x8aa3981/0x8d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 198369280 unmapped: 35201024 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202260480 unmapped: 31309824 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4200276 data_alloc: 251658240 data_used: 31416320
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202260480 unmapped: 31309824 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202260480 unmapped: 31309824 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202260480 unmapped: 31309824 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed713000/0x0/0x4ffc00000, data 0x9e28981/0xa0db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed713000/0x0/0x4ffc00000, data 0x9e28981/0xa0db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202260480 unmapped: 31309824 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202260480 unmapped: 31309824 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4200292 data_alloc: 251658240 data_used: 31416320
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x564050e55c00 session 0x5640508c9680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x564054010800 session 0x5640508c8b40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202563584 unmapped: 31006720 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x564054010800 session 0x5640509394a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202563584 unmapped: 31006720 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed714000/0x0/0x4ffc00000, data 0x9e28971/0xa0da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed714000/0x0/0x4ffc00000, data 0x9e28971/0xa0da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202563584 unmapped: 31006720 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202563584 unmapped: 31006720 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed714000/0x0/0x4ffc00000, data 0x9e28971/0xa0da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202571776 unmapped: 30998528 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4201575 data_alloc: 251658240 data_used: 31416320
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202571776 unmapped: 30998528 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202571776 unmapped: 30998528 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202571776 unmapped: 30998528 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed714000/0x0/0x4ffc00000, data 0x9e28971/0xa0da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202579968 unmapped: 30990336 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202579968 unmapped: 30990336 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4201575 data_alloc: 251658240 data_used: 31416320
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed714000/0x0/0x4ffc00000, data 0x9e28971/0xa0da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202579968 unmapped: 30990336 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202579968 unmapped: 30990336 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202579968 unmapped: 30990336 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202579968 unmapped: 30990336 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed714000/0x0/0x4ffc00000, data 0x9e28971/0xa0da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202588160 unmapped: 30982144 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4201575 data_alloc: 251658240 data_used: 31416320
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202588160 unmapped: 30982144 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202588160 unmapped: 30982144 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202588160 unmapped: 30982144 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202588160 unmapped: 30982144 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed714000/0x0/0x4ffc00000, data 0x9e28971/0xa0da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202588160 unmapped: 30982144 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4201575 data_alloc: 251658240 data_used: 31416320
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202588160 unmapped: 30982144 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202596352 unmapped: 30973952 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202596352 unmapped: 30973952 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 30.838159561s of 31.879165649s, submitted: 171
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56404e673c00 session 0x564050eba5a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202612736 unmapped: 30957568 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed714000/0x0/0x4ffc00000, data 0x9e28971/0xa0da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202620928 unmapped: 30949376 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4202741 data_alloc: 251658240 data_used: 31416320
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202620928 unmapped: 30949376 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed713000/0x0/0x4ffc00000, data 0x9e28994/0xa0db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202620928 unmapped: 30949376 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed713000/0x0/0x4ffc00000, data 0x9e28994/0xa0db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed713000/0x0/0x4ffc00000, data 0x9e28994/0xa0db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202620928 unmapped: 30949376 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202620928 unmapped: 30949376 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202620928 unmapped: 30949376 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4203861 data_alloc: 251658240 data_used: 31535104
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202629120 unmapped: 30941184 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202629120 unmapped: 30941184 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202629120 unmapped: 30941184 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed713000/0x0/0x4ffc00000, data 0x9e28994/0xa0db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202629120 unmapped: 30941184 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202629120 unmapped: 30941184 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4203861 data_alloc: 251658240 data_used: 31535104
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202629120 unmapped: 30941184 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202629120 unmapped: 30941184 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed713000/0x0/0x4ffc00000, data 0x9e28994/0xa0db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.098192215s of 14.133620262s, submitted: 10
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202637312 unmapped: 30932992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202637312 unmapped: 30932992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed713000/0x0/0x4ffc00000, data 0x9e28994/0xa0db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202637312 unmapped: 30932992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4217213 data_alloc: 251658240 data_used: 32559104
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202637312 unmapped: 30932992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202637312 unmapped: 30932992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202694656 unmapped: 30875648 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202694656 unmapped: 30875648 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202694656 unmapped: 30875648 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed713000/0x0/0x4ffc00000, data 0x9e28994/0xa0db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4216141 data_alloc: 251658240 data_used: 32555008
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202694656 unmapped: 30875648 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed713000/0x0/0x4ffc00000, data 0x9e28994/0xa0db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202694656 unmapped: 30875648 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 202702848 unmapped: 30867456 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.837417603s of 10.881437302s, submitted: 18
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201285632 unmapped: 32284672 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed713000/0x0/0x4ffc00000, data 0x9e28994/0xa0db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201285632 unmapped: 32284672 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4215437 data_alloc: 251658240 data_used: 32555008
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201285632 unmapped: 32284672 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201285632 unmapped: 32284672 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201285632 unmapped: 32284672 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed713000/0x0/0x4ffc00000, data 0x9e28994/0xa0db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201285632 unmapped: 32284672 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed713000/0x0/0x4ffc00000, data 0x9e28994/0xa0db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201285632 unmapped: 32284672 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4216829 data_alloc: 251658240 data_used: 32550912
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201285632 unmapped: 32284672 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201285632 unmapped: 32284672 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201285632 unmapped: 32284672 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201285632 unmapped: 32284672 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed713000/0x0/0x4ffc00000, data 0x9e28994/0xa0db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201285632 unmapped: 32284672 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4216829 data_alloc: 251658240 data_used: 32550912
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201293824 unmapped: 32276480 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201293824 unmapped: 32276480 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201293824 unmapped: 32276480 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed713000/0x0/0x4ffc00000, data 0x9e28994/0xa0db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.097886086s of 15.162645340s, submitted: 11
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56404e73a000 session 0x56404e8250e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56404f540c00 session 0x564050f185a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201285632 unmapped: 32284672 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x564050e55c00 session 0x564050e921e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 32268288 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4218075 data_alloc: 251658240 data_used: 32759808
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed713000/0x0/0x4ffc00000, data 0x9e28971/0xa0da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 32268288 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 32268288 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 32268288 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 32268288 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 32268288 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4218075 data_alloc: 251658240 data_used: 32759808
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 32268288 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed713000/0x0/0x4ffc00000, data 0x9e28971/0xa0da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 32268288 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56404e673c00 session 0x56404e8a6000
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201310208 unmapped: 32260096 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56404e73a000 session 0x564052c832c0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 198033408 unmapped: 35536896 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4ed715000/0x0/0x4ffc00000, data 0x9e28961/0xa0d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 198033408 unmapped: 35536896 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3776395 data_alloc: 234881024 data_used: 27992064
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f0a9a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 198033408 unmapped: 35536896 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 198033408 unmapped: 35536896 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 198033408 unmapped: 35536896 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 198033408 unmapped: 35536896 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 198033408 unmapped: 35536896 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3776395 data_alloc: 234881024 data_used: 27992064
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 198033408 unmapped: 35536896 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f0a9a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 198025216 unmapped: 35545088 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 198025216 unmapped: 35545088 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 198025216 unmapped: 35545088 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f0a9a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 198025216 unmapped: 35545088 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3776395 data_alloc: 234881024 data_used: 27992064
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 198025216 unmapped: 35545088 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.210863113s of 22.719690323s, submitted: 52
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56404f540c00 session 0x56404e8b8b40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x564050e55c00 session 0x564050e92f00
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f0a9a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197451776 unmapped: 36118528 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197451776 unmapped: 36118528 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197451776 unmapped: 36118528 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197451776 unmapped: 36118528 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3773659 data_alloc: 234881024 data_used: 27992064
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197451776 unmapped: 36118528 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f129b000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197451776 unmapped: 36118528 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197451776 unmapped: 36118528 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x564054010800 session 0x564052c82960
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197451776 unmapped: 36118528 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197451776 unmapped: 36118528 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3777145 data_alloc: 234881024 data_used: 27992064
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197451776 unmapped: 36118528 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f125b000/0x0/0x4ffc00000, data 0x62e38ff/0x6593000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197451776 unmapped: 36118528 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197451776 unmapped: 36118528 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197451776 unmapped: 36118528 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197451776 unmapped: 36118528 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3777145 data_alloc: 234881024 data_used: 27992064
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197451776 unmapped: 36118528 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197451776 unmapped: 36118528 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f125b000/0x0/0x4ffc00000, data 0x62e38ff/0x6593000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197451776 unmapped: 36118528 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197451776 unmapped: 36118528 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197451776 unmapped: 36118528 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f125b000/0x0/0x4ffc00000, data 0x62e38ff/0x6593000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3777145 data_alloc: 234881024 data_used: 27992064
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f125b000/0x0/0x4ffc00000, data 0x62e38ff/0x6593000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.431436539s of 19.564586639s, submitted: 6
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56404e673c00 session 0x564050b7f680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 212189184 unmapped: 21381120 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56404e73a000 session 0x564052c83680
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56404f540c00 session 0x56404e7b05a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 36052992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 36052992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 36052992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 36052992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4048353 data_alloc: 234881024 data_used: 27992064
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 36052992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4eec53000/0x0/0x4ffc00000, data 0x88ea961/0x8b9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 36052992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 36052992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 36052992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 36052992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4048353 data_alloc: 234881024 data_used: 27992064
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4eec53000/0x0/0x4ffc00000, data 0x88ea961/0x8b9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 36052992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 36052992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 36052992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 36052992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 36052992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4eec53000/0x0/0x4ffc00000, data 0x88ea961/0x8b9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4048353 data_alloc: 234881024 data_used: 27992064
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 36052992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 36052992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 36052992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x564050e55c00 session 0x564050f19860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.542654037s of 18.172573090s, submitted: 57
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 36052992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4eec53000/0x0/0x4ffc00000, data 0x88ea961/0x8b9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 36052992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4048645 data_alloc: 234881024 data_used: 27996160
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 36052992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 36052992 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197509120 unmapped: 36061184 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4eec53000/0x0/0x4ffc00000, data 0x88ea961/0x8b9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197509120 unmapped: 36061184 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197509120 unmapped: 36061184 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4069125 data_alloc: 251658240 data_used: 29941760
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197509120 unmapped: 36061184 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197509120 unmapped: 36061184 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4eec53000/0x0/0x4ffc00000, data 0x88ea961/0x8b9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197509120 unmapped: 36061184 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197509120 unmapped: 36061184 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4eec53000/0x0/0x4ffc00000, data 0x88ea961/0x8b9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197509120 unmapped: 36061184 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4069125 data_alloc: 234881024 data_used: 29941760
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197509120 unmapped: 36061184 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4eec53000/0x0/0x4ffc00000, data 0x88ea961/0x8b9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4eec53000/0x0/0x4ffc00000, data 0x88ea961/0x8b9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 197509120 unmapped: 36061184 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.497042656s of 13.832476616s, submitted: 1
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 205168640 unmapped: 28401664 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203292672 unmapped: 30277632 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203710464 unmapped: 29859840 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4089601 data_alloc: 234881024 data_used: 31227904
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203628544 unmapped: 29941760 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203628544 unmapped: 29941760 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4edf22000/0x0/0x4ffc00000, data 0x95eb961/0x989c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203628544 unmapped: 29941760 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203628544 unmapped: 29941760 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203628544 unmapped: 29941760 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4177697 data_alloc: 234881024 data_used: 31223808
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4edf22000/0x0/0x4ffc00000, data 0x95eb961/0x989c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203628544 unmapped: 29941760 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203628544 unmapped: 29941760 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4edf22000/0x0/0x4ffc00000, data 0x95eb961/0x989c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203628544 unmapped: 29941760 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.876618385s of 11.383620262s, submitted: 137
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203628544 unmapped: 29941760 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203669504 unmapped: 29900800 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4175553 data_alloc: 234881024 data_used: 31227904
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203677696 unmapped: 29892608 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4edf52000/0x0/0x4ffc00000, data 0x95eb961/0x989c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56405183b400 session 0x564050685a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x564050ff6400 session 0x564050e93c20
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203677696 unmapped: 29892608 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203677696 unmapped: 29892608 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203677696 unmapped: 29892608 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56404e673c00 session 0x56404e8245a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4edf52000/0x0/0x4ffc00000, data 0x95eb961/0x989c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203677696 unmapped: 29892608 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4176089 data_alloc: 234881024 data_used: 31240192
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4edf52000/0x0/0x4ffc00000, data 0x95eb961/0x989c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203677696 unmapped: 29892608 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203677696 unmapped: 29892608 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203677696 unmapped: 29892608 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4edf52000/0x0/0x4ffc00000, data 0x95eb961/0x989c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203677696 unmapped: 29892608 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203677696 unmapped: 29892608 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4176089 data_alloc: 234881024 data_used: 31240192
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203677696 unmapped: 29892608 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203677696 unmapped: 29892608 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203677696 unmapped: 29892608 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203677696 unmapped: 29892608 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4edf52000/0x0/0x4ffc00000, data 0x95eb961/0x989c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203677696 unmapped: 29892608 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4176089 data_alloc: 234881024 data_used: 31240192
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203677696 unmapped: 29892608 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203677696 unmapped: 29892608 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203677696 unmapped: 29892608 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203677696 unmapped: 29892608 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4edf52000/0x0/0x4ffc00000, data 0x95eb961/0x989c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203677696 unmapped: 29892608 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4176089 data_alloc: 234881024 data_used: 31240192
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203677696 unmapped: 29892608 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.333091736s of 23.145381927s, submitted: 13
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56404e73a000 session 0x564050939a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 203997184 unmapped: 29573120 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204005376 unmapped: 29564928 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204005376 unmapped: 29564928 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4edf2d000/0x0/0x4ffc00000, data 0x960f984/0x98c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204005376 unmapped: 29564928 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4181523 data_alloc: 234881024 data_used: 31244288
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204005376 unmapped: 29564928 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204005376 unmapped: 29564928 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204005376 unmapped: 29564928 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4edf2d000/0x0/0x4ffc00000, data 0x960f984/0x98c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204005376 unmapped: 29564928 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204005376 unmapped: 29564928 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4183763 data_alloc: 234881024 data_used: 31338496
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204005376 unmapped: 29564928 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204005376 unmapped: 29564928 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204005376 unmapped: 29564928 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4edf2d000/0x0/0x4ffc00000, data 0x960f984/0x98c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204005376 unmapped: 29564928 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204005376 unmapped: 29564928 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4edf2d000/0x0/0x4ffc00000, data 0x960f984/0x98c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4183763 data_alloc: 234881024 data_used: 31338496
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.228489876s of 14.287252426s, submitted: 13
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204013568 unmapped: 29556736 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4edf2d000/0x0/0x4ffc00000, data 0x960f984/0x98c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204046336 unmapped: 29523968 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204087296 unmapped: 29483008 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204087296 unmapped: 29483008 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204087296 unmapped: 29483008 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4202343 data_alloc: 234881024 data_used: 32186368
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204087296 unmapped: 29483008 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204087296 unmapped: 29483008 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4edf2d000/0x0/0x4ffc00000, data 0x960f984/0x98c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204161024 unmapped: 29409280 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204161024 unmapped: 29409280 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4edf2d000/0x0/0x4ffc00000, data 0x960f984/0x98c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204161024 unmapped: 29409280 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4202695 data_alloc: 234881024 data_used: 32182272
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204161024 unmapped: 29409280 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204161024 unmapped: 29409280 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204161024 unmapped: 29409280 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4edf2d000/0x0/0x4ffc00000, data 0x960f984/0x98c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.710569382s of 12.640620232s, submitted: 11
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204201984 unmapped: 29368320 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204201984 unmapped: 29368320 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4201991 data_alloc: 234881024 data_used: 32182272
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204201984 unmapped: 29368320 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4edf2d000/0x0/0x4ffc00000, data 0x960f984/0x98c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204201984 unmapped: 29368320 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204201984 unmapped: 29368320 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204201984 unmapped: 29368320 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204201984 unmapped: 29368320 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4201991 data_alloc: 234881024 data_used: 32182272
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204201984 unmapped: 29368320 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4edf2d000/0x0/0x4ffc00000, data 0x960f984/0x98c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204201984 unmapped: 29368320 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204201984 unmapped: 29368320 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4edf2d000/0x0/0x4ffc00000, data 0x960f984/0x98c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204201984 unmapped: 29368320 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204201984 unmapped: 29368320 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4201991 data_alloc: 234881024 data_used: 32182272
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.817921638s of 12.278414726s, submitted: 4
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56404f540c00 session 0x56404f71f4a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x564050e55c00 session 0x56404e3d1860
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204341248 unmapped: 29229056 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56404e673c00 session 0x56404e3d1a40
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204398592 unmapped: 29171712 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204398592 unmapped: 29171712 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4edf51000/0x0/0x4ffc00000, data 0x95eb961/0x989c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204398592 unmapped: 29171712 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 204398592 unmapped: 29171712 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4200327 data_alloc: 234881024 data_used: 33333248
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56404e73a000 session 0x564050f2c1e0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 ms_handle_reset con 0x56404f540c00 session 0x564050eba5a0
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796863 data_alloc: 234881024 data_used: 26746880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796863 data_alloc: 234881024 data_used: 26746880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796863 data_alloc: 234881024 data_used: 26746880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796863 data_alloc: 234881024 data_used: 26746880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796863 data_alloc: 234881024 data_used: 26746880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796863 data_alloc: 234881024 data_used: 26746880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796863 data_alloc: 234881024 data_used: 26746880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796863 data_alloc: 234881024 data_used: 26746880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796863 data_alloc: 234881024 data_used: 26746880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796863 data_alloc: 234881024 data_used: 26746880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 32219136 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796863 data_alloc: 234881024 data_used: 26746880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796863 data_alloc: 234881024 data_used: 26746880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796863 data_alloc: 234881024 data_used: 26746880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796863 data_alloc: 234881024 data_used: 26746880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796863 data_alloc: 234881024 data_used: 26746880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796863 data_alloc: 234881024 data_used: 26746880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796863 data_alloc: 234881024 data_used: 26746880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796863 data_alloc: 234881024 data_used: 26746880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796863 data_alloc: 234881024 data_used: 26746880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201334784 unmapped: 32235520 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201334784 unmapped: 32235520 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201334784 unmapped: 32235520 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201334784 unmapped: 32235520 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796863 data_alloc: 234881024 data_used: 26746880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201334784 unmapped: 32235520 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201334784 unmapped: 32235520 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201334784 unmapped: 32235520 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201334784 unmapped: 32235520 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201334784 unmapped: 32235520 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796863 data_alloc: 234881024 data_used: 26746880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201334784 unmapped: 32235520 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201334784 unmapped: 32235520 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796863 data_alloc: 234881024 data_used: 26746880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796863 data_alloc: 234881024 data_used: 26746880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 32227328 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201375744 unmapped: 32194560 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: bluestore.MempoolThread(0x56404ce8db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3796863 data_alloc: 234881024 data_used: 26746880
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: do_command 'config diff' '{prefix=config diff}'
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f069a000/0x0/0x4ffc00000, data 0x62a38ff/0x6553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: do_command 'config show' '{prefix=config show}'
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201400320 unmapped: 32169984 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: do_command 'counter dump' '{prefix=counter dump}'
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: do_command 'counter schema' '{prefix=counter schema}'
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201179136 unmapped: 32391168 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: prioritycache tune_memory target: 4294967296 mapped: 201269248 unmapped: 32301056 heap: 233570304 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:20 np0005464891 ceph-osd[89750]: do_command 'log dump' '{prefix=log dump}'
Oct  1 13:15:20 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.19321 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  1 13:15:21 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  1 13:15:21 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/30365629' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  1 13:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 13:15:21 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 28K writes, 111K keys, 28K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s#012Cumulative WAL: 28K writes, 10K syncs, 2.79 writes per sync, written: 0.07 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2717 writes, 16K keys, 2717 commit groups, 1.0 writes per commit group, ingest: 10.35 MB, 0.02 MB/s#012Interval WAL: 2717 writes, 1060 syncs, 2.56 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 13:15:21 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.19325 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 13:15:21 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2328: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:15:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  1 13:15:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1101092443' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.19329 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894458247867422 of space, bias 1.0, pg target 0.8683374743602266 quantized to 32 (current 32)
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 13:15:22 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  1 13:15:22 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3361259628' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  1 13:15:22 np0005464891 nova_compute[259907]: 2025-10-01 17:15:22.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:15:22 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.19333 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 13:15:23 np0005464891 nova_compute[259907]: 2025-10-01 17:15:23.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  1 13:15:23 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.19337 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  1 13:15:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  1 13:15:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1906218178' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  1 13:15:23 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.19339 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  1 13:15:23 np0005464891 ceph-mgr[74592]: log_channel(cluster) log [DBG] : pgmap v2329: 321 pgs: 321 active+clean; 271 MiB data, 683 MiB used, 59 GiB / 60 GiB avail
Oct  1 13:15:23 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct  1 13:15:23 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3715768313' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct  1 13:15:23 np0005464891 nova_compute[259907]: 2025-10-01 17:15:23.854 2 DEBUG nova.virt.libvirt.imagecache [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Oct  1 13:15:23 np0005464891 nova_compute[259907]: 2025-10-01 17:15:23.855 2 WARNING nova.virt.libvirt.imagecache [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa#033[00m
Oct  1 13:15:23 np0005464891 nova_compute[259907]: 2025-10-01 17:15:23.855 2 INFO nova.virt.libvirt.imagecache [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Removable base files: /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa#033[00m
Oct  1 13:15:23 np0005464891 nova_compute[259907]: 2025-10-01 17:15:23.855 2 INFO nova.virt.libvirt.imagecache [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/d024f7a35ea45569f869f237e2b764bb5c5ddaaa#033[00m
Oct  1 13:15:23 np0005464891 nova_compute[259907]: 2025-10-01 17:15:23.855 2 DEBUG nova.virt.libvirt.imagecache [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Oct  1 13:15:23 np0005464891 nova_compute[259907]: 2025-10-01 17:15:23.855 2 DEBUG nova.virt.libvirt.imagecache [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Oct  1 13:15:23 np0005464891 nova_compute[259907]: 2025-10-01 17:15:23.856 2 DEBUG nova.virt.libvirt.imagecache [None req-6fea20bf-dece-4ec8-9caa-e62272eb49d4 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Oct  1 13:15:24 np0005464891 ceph-mgr[74592]: log_channel(audit) log [DBG] : from='client.19347 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  1 13:15:24 np0005464891 ceph-6b18e3aa-2a4c-5422-bfcb-ab223aacc6d5-mgr-compute-0-ieawdb[74588]: 2025-10-01T17:15:24.534+0000 7f1d20c83640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  1 13:15:24 np0005464891 ceph-mgr[74592]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  1 13:15:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 13:15:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct  1 13:15:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3974011174' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct  1 13:15:24 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct  1 13:15:24 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2924985584' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 243 ms_handle_reset con 0x55a66e9bf000 session 0x55a66eec3a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 243 ms_handle_reset con 0x55a66fcc3c00 session 0x55a66f36f2c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 121864192 unmapped: 43999232 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 43982848 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 244 ms_handle_reset con 0x55a66fcc7c00 session 0x55a66f37e780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 244 ms_handle_reset con 0x55a66fcad800 session 0x55a66cc710e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 244 ms_handle_reset con 0x55a66ca6b800 session 0x55a66ea0fc20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 43958272 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 245 ms_handle_reset con 0x55a66e9bf000 session 0x55a66f37eb40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1790621 data_alloc: 218103808 data_used: 827392
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 245 ms_handle_reset con 0x55a66ee96000 session 0x55a66f501e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 245 ms_handle_reset con 0x55a66fcad800 session 0x55a66cc74780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 44130304 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 245 heartbeat osd_stat(store_statfs(0x4fa611000/0x0/0x4ffc00000, data 0x16feb2a/0x185b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 44130304 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 245 handle_osd_map epochs [245,246], i have 245, src has [1,246]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 245 handle_osd_map epochs [246,246], i have 246, src has [1,246]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 246 ms_handle_reset con 0x55a66fcc7c00 session 0x55a66f4d9680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 246 ms_handle_reset con 0x55a66fcc3c00 session 0x55a66f40ef00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 246 ms_handle_reset con 0x55a66fcc7c00 session 0x55a66ea0f680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 121716736 unmapped: 44146688 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 247 ms_handle_reset con 0x55a66fcc8800 session 0x55a66d8c5c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 247 heartbeat osd_stat(store_statfs(0x4fa60f000/0x0/0x4ffc00000, data 0x1700737/0x185e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 247 heartbeat osd_stat(store_statfs(0x4fa60b000/0x0/0x4ffc00000, data 0x170230c/0x1861000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 121716736 unmapped: 44146688 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 247 ms_handle_reset con 0x55a66fcc5400 session 0x55a66d8c4d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 247 ms_handle_reset con 0x55a66ea58000 session 0x55a66e7010e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 247 ms_handle_reset con 0x55a66fcc3c00 session 0x55a66ea0f2c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 121749504 unmapped: 44113920 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 247 ms_handle_reset con 0x55a66fcc5400 session 0x55a66eec3c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1805138 data_alloc: 218103808 data_used: 839680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 247 ms_handle_reset con 0x55a66fcc7c00 session 0x55a66e8db0e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 247 ms_handle_reset con 0x55a66eb7e800 session 0x55a66f6deb40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 247 ms_handle_reset con 0x55a66f973000 session 0x55a66f502b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 121782272 unmapped: 44081152 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.355104446s of 10.000157356s, submitted: 170
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 248 heartbeat osd_stat(store_statfs(0x4fa607000/0x0/0x4ffc00000, data 0x1702410/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 248 ms_handle_reset con 0x55a66fcc8800 session 0x55a66f503680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 248 ms_handle_reset con 0x55a66eb7e800 session 0x55a66f6de000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 248 ms_handle_reset con 0x55a66fcc3c00 session 0x55a66d9105a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 248 ms_handle_reset con 0x55a66fcc5400 session 0x55a66f301e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 121790464 unmapped: 44072960 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 249 ms_handle_reset con 0x55a66fcc7c00 session 0x55a66f8f1a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 249 ms_handle_reset con 0x55a66fcc7c00 session 0x55a66ea0fe00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 249 ms_handle_reset con 0x55a66eb7e800 session 0x55a66dc07680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 249 heartbeat osd_stat(store_statfs(0x4fa602000/0x0/0x4ffc00000, data 0x1705bf0/0x186b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 249 ms_handle_reset con 0x55a66fcc3c00 session 0x55a6700f34a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 121806848 unmapped: 44056576 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 250 ms_handle_reset con 0x55a66fcc5400 session 0x55a66dbcd860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 121823232 unmapped: 44040192 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 250 ms_handle_reset con 0x55a66fcaf400 session 0x55a66cc70f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 250 ms_handle_reset con 0x55a66fcc3c00 session 0x55a66f37f0e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 250 ms_handle_reset con 0x55a66eb7e800 session 0x55a66f6df2c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 250 ms_handle_reset con 0x55a66fcc5400 session 0x55a66e76af00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 44023808 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1813760 data_alloc: 218103808 data_used: 847872
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 250 ms_handle_reset con 0x55a66fcc7c00 session 0x55a66d5b21e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 250 ms_handle_reset con 0x55a66fcaf000 session 0x55a66f507680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 121856000 unmapped: 44007424 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 250 ms_handle_reset con 0x55a66eb7e800 session 0x55a66cca6f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 251 ms_handle_reset con 0x55a66fcaf000 session 0x55a66d8c41e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 251 ms_handle_reset con 0x55a66fcc3c00 session 0x55a66d8c54a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 251 ms_handle_reset con 0x55a66fcc5400 session 0x55a66f502960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 251 ms_handle_reset con 0x55a66fcc7c00 session 0x55a66f502000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 251 ms_handle_reset con 0x55a66eb7e800 session 0x55a66f501c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 251 ms_handle_reset con 0x55a66fcaf000 session 0x55a66f8f1860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 251 ms_handle_reset con 0x55a66fcc5400 session 0x55a66f40ef00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 251 ms_handle_reset con 0x55a66fcc3c00 session 0x55a66f40ed20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 251 ms_handle_reset con 0x55a66fcc3800 session 0x55a66f301860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 122593280 unmapped: 43270144 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 251 ms_handle_reset con 0x55a66fcc2800 session 0x55a66f4d9680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 251 heartbeat osd_stat(store_statfs(0x4f9aec000/0x0/0x4ffc00000, data 0x1e0c209/0x1f71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 122593280 unmapped: 43270144 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 251 ms_handle_reset con 0x55a66fcae800 session 0x55a66f300d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 251 ms_handle_reset con 0x55a66eb63000 session 0x55a66ea0fc20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 122593280 unmapped: 43270144 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 251 ms_handle_reset con 0x55a66fca5c00 session 0x55a66f301c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 123658240 unmapped: 42205184 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1879782 data_alloc: 218103808 data_used: 868352
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 252 handle_osd_map epochs [252,253], i have 252, src has [1,253]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 253 ms_handle_reset con 0x55a66f136800 session 0x55a66f300f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 253 ms_handle_reset con 0x55a66eb63000 session 0x55a66ce230e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 253 ms_handle_reset con 0x55a66fcae800 session 0x55a66f37e780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 123674624 unmapped: 42188800 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 253 ms_handle_reset con 0x55a66fca5c00 session 0x55a66ce23a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.495039940s of 10.498204231s, submitted: 262
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 254 ms_handle_reset con 0x55a66fcc2800 session 0x55a66ce22000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 123715584 unmapped: 42147840 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 254 heartbeat osd_stat(store_statfs(0x4f9ae0000/0x0/0x4ffc00000, data 0x1e115f2/0x1f7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 254 handle_osd_map epochs [254,255], i have 254, src has [1,255]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 255 ms_handle_reset con 0x55a66dbcf800 session 0x55a66bfe2b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 41992192 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 255 ms_handle_reset con 0x55a66eb63000 session 0x55a66bfe3c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 255 ms_handle_reset con 0x55a66ca6b400 session 0x55a66e7005a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 126140416 unmapped: 39723008 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 255 heartbeat osd_stat(store_statfs(0x4f9ade000/0x0/0x4ffc00000, data 0x1e1308f/0x1f7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 126140416 unmapped: 39723008 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1944811 data_alloc: 218103808 data_used: 8056832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 255 ms_handle_reset con 0x55a66e9bf400 session 0x55a66dc07680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 126140416 unmapped: 39723008 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 255 handle_osd_map epochs [255,256], i have 255, src has [1,256]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 127188992 unmapped: 38674432 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 257 ms_handle_reset con 0x55a66fcab800 session 0x55a66f507680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 127197184 unmapped: 38666240 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 257 ms_handle_reset con 0x55a66eb6ec00 session 0x55a66d90e5a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 257 ms_handle_reset con 0x55a6707be400 session 0x55a66f107680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 127197184 unmapped: 38666240 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 257 heartbeat osd_stat(store_statfs(0x4f9ad8000/0x0/0x4ffc00000, data 0x1e16709/0x1f85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 127197184 unmapped: 38666240 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1951956 data_alloc: 218103808 data_used: 8056832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 257 handle_osd_map epochs [257,258], i have 257, src has [1,258]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 258 ms_handle_reset con 0x55a66ca6b400 session 0x55a66f502960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 127205376 unmapped: 38658048 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 259 ms_handle_reset con 0x55a66eb63000 session 0x55a66d90e780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 259 ms_handle_reset con 0x55a66e9bf400 session 0x55a66f501c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 127205376 unmapped: 38658048 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 259 heartbeat osd_stat(store_statfs(0x4f9ad1000/0x0/0x4ffc00000, data 0x1e19d05/0x1f8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.285677910s of 11.487959862s, submitted: 88
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 128458752 unmapped: 37404672 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 259 handle_osd_map epochs [259,260], i have 259, src has [1,260]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 260 ms_handle_reset con 0x55a66fcab800 session 0x55a66f36f680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 260 ms_handle_reset con 0x55a66ca6b400 session 0x55a66f500960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132161536 unmapped: 33701888 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 260 ms_handle_reset con 0x55a66e9bf400 session 0x55a66dbcd4a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132251648 unmapped: 33611776 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2053962 data_alloc: 218103808 data_used: 8744960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 260 ms_handle_reset con 0x55a66eb63000 session 0x55a66d90d4a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 260 ms_handle_reset con 0x55a6707be400 session 0x55a66f506780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132268032 unmapped: 33595392 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 260 ms_handle_reset con 0x55a66fca4c00 session 0x55a66d8c5680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 260 ms_handle_reset con 0x55a66ca6b400 session 0x55a66f40fa40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132268032 unmapped: 33595392 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 260 ms_handle_reset con 0x55a66e9bf400 session 0x55a66d90cb40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 260 heartbeat osd_stat(store_statfs(0x4f7f54000/0x0/0x4ffc00000, data 0x27ec946/0x2961000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132300800 unmapped: 33562624 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 260 ms_handle_reset con 0x55a66e9c0800 session 0x55a66f105c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 260 heartbeat osd_stat(store_statfs(0x4f7f54000/0x0/0x4ffc00000, data 0x27ec946/0x2961000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 260 handle_osd_map epochs [260,261], i have 260, src has [1,261]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 261 ms_handle_reset con 0x55a66f134000 session 0x55a66e8da780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 261 ms_handle_reset con 0x55a66eb63000 session 0x55a66f8f10e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 261 ms_handle_reset con 0x55a66ca6b400 session 0x55a66d4c3860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130236416 unmapped: 35627008 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 261 ms_handle_reset con 0x55a66e9c0800 session 0x55a66cca01e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 262 ms_handle_reset con 0x55a66eb63000 session 0x55a66e9ad2c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 262 ms_handle_reset con 0x55a66e9bf400 session 0x55a66f37e960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130236416 unmapped: 35627008 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2050926 data_alloc: 218103808 data_used: 8761344
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 262 ms_handle_reset con 0x55a66f134000 session 0x55a66d90eb40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 262 ms_handle_reset con 0x55a66ca6b400 session 0x55a66f6df680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 35610624 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 262 heartbeat osd_stat(store_statfs(0x4f7f57000/0x0/0x4ffc00000, data 0x27f0086/0x2966000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 262 ms_handle_reset con 0x55a66e9bf400 session 0x55a66d910b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 262 ms_handle_reset con 0x55a66eb63000 session 0x55a66f301680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 35610624 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 262 ms_handle_reset con 0x55a66d9d3800 session 0x55a66ce22f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 35610624 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.905605316s of 10.462970734s, submitted: 154
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 262 ms_handle_reset con 0x55a66eb7e000 session 0x55a66f6de5a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 262 handle_osd_map epochs [262,263], i have 262, src has [1,263]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 263 ms_handle_reset con 0x55a66dbce800 session 0x55a66f40ef00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 263 ms_handle_reset con 0x55a66ca6b400 session 0x55a66f105a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130260992 unmapped: 35602432 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 264 ms_handle_reset con 0x55a66e9bf400 session 0x55a66f36eb40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 264 ms_handle_reset con 0x55a66d9d3800 session 0x55a66f40ef00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 264 ms_handle_reset con 0x55a66eb63000 session 0x55a66f301680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 264 heartbeat osd_stat(store_statfs(0x4f7f52000/0x0/0x4ffc00000, data 0x27f1c91/0x296b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130277376 unmapped: 35586048 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2064901 data_alloc: 218103808 data_used: 8781824
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 265 ms_handle_reset con 0x55a66ca6b400 session 0x55a66f506780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 265 ms_handle_reset con 0x55a66dbce800 session 0x55a66f106f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 265 ms_handle_reset con 0x55a66d9d3800 session 0x55a66dbcd4a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 265 ms_handle_reset con 0x55a66eb7e000 session 0x55a66f37e780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 265 ms_handle_reset con 0x55a66e9bf400 session 0x55a66f36f680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 35577856 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 266 ms_handle_reset con 0x55a66da4fc00 session 0x55a66f8f0d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 266 ms_handle_reset con 0x55a66ca6b400 session 0x55a66f506f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 266 ms_handle_reset con 0x55a66d9d3800 session 0x55a66d8a2000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129630208 unmapped: 36233216 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 266 ms_handle_reset con 0x55a66eb7e000 session 0x55a66d90e780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 266 ms_handle_reset con 0x55a66dbce800 session 0x55a66efaaf00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 266 handle_osd_map epochs [266,267], i have 266, src has [1,267]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 267 ms_handle_reset con 0x55a66ca6b400 session 0x55a66ce22000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129679360 unmapped: 36184064 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 267 ms_handle_reset con 0x55a66f131400 session 0x55a66d90e5a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 268 ms_handle_reset con 0x55a66eb69c00 session 0x55a66dc07680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 268 ms_handle_reset con 0x55a66d9d3800 session 0x55a66f5021e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 268 ms_handle_reset con 0x55a66e9bec00 session 0x55a66e7005a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 268 ms_handle_reset con 0x55a66fca9000 session 0x55a66f36f0e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 268 ms_handle_reset con 0x55a66ca6b400 session 0x55a66bfe3c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129687552 unmapped: 36175872 heap: 165863424 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 269 ms_handle_reset con 0x55a66d9d3800 session 0x55a66cd43860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 269 ms_handle_reset con 0x55a66eb62800 session 0x55a66cd42780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 269 ms_handle_reset con 0x55a66eb69c00 session 0x55a66f1070e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129441792 unmapped: 40624128 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2224242 data_alloc: 218103808 data_used: 8802304
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 270 ms_handle_reset con 0x55a66ca6b400 session 0x55a66cd43680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 270 heartbeat osd_stat(store_statfs(0x4f6f28000/0x0/0x4ffc00000, data 0x380bd23/0x3994000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,2])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 270 ms_handle_reset con 0x55a66f131400 session 0x55a66f8f0b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 270 ms_handle_reset con 0x55a66d9d3800 session 0x55a66cd43e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 270 ms_handle_reset con 0x55a66eb65400 session 0x55a66d90c5a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 270 ms_handle_reset con 0x55a66eb62800 session 0x55a66f1074a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129466368 unmapped: 40599552 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 270 ms_handle_reset con 0x55a66d9d3800 session 0x55a66cd425a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 271 ms_handle_reset con 0x55a66eb65400 session 0x55a66cd430e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 271 ms_handle_reset con 0x55a66f131400 session 0x55a66f8823c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 271 ms_handle_reset con 0x55a66ca6b400 session 0x55a66cca05a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129466368 unmapped: 40599552 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 271 heartbeat osd_stat(store_statfs(0x4f6f24000/0x0/0x4ffc00000, data 0x380d405/0x3996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 271 handle_osd_map epochs [271,272], i have 271, src has [1,272]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 272 ms_handle_reset con 0x55a66fca9000 session 0x55a66dbfe000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129466368 unmapped: 40599552 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 272 ms_handle_reset con 0x55a66fcc2000 session 0x55a66ce22f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 272 ms_handle_reset con 0x55a66d2bc800 session 0x55a66f300000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 272 ms_handle_reset con 0x55a66fca5000 session 0x55a66f37e960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.580346107s of 10.269369125s, submitted: 174
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 273 ms_handle_reset con 0x55a66ca6b400 session 0x55a66f3014a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 273 ms_handle_reset con 0x55a66fcc4400 session 0x55a66f3005a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 273 ms_handle_reset con 0x55a66fcc7000 session 0x55a66ef803c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 273 ms_handle_reset con 0x55a66eb69c00 session 0x55a66ea0fa40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129572864 unmapped: 40493056 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 274 ms_handle_reset con 0x55a66eb71000 session 0x55a66e8dbc20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 274 ms_handle_reset con 0x55a66dbcec00 session 0x55a66e9ad4a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 274 ms_handle_reset con 0x55a66ca6b400 session 0x55a66f5003c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 274 ms_handle_reset con 0x55a66c83e000 session 0x55a66f500960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 274 ms_handle_reset con 0x55a66f973c00 session 0x55a66f8834a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129597440 unmapped: 40468480 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2300699 data_alloc: 218103808 data_used: 8814592
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 275 ms_handle_reset con 0x55a66ca6b400 session 0x55a66e76a5a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 275 ms_handle_reset con 0x55a66eb69c00 session 0x55a66f507680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 275 ms_handle_reset con 0x55a66dbcec00 session 0x55a66d90eb40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129613824 unmapped: 40452096 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 275 ms_handle_reset con 0x55a66c83e000 session 0x55a66ea0e960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 276 ms_handle_reset con 0x55a66ca6b400 session 0x55a66f40f4a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 276 ms_handle_reset con 0x55a66eb69c00 session 0x55a66f4d9680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 276 heartbeat osd_stat(store_statfs(0x4f6726000/0x0/0x4ffc00000, data 0x4008ef4/0x4196000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129630208 unmapped: 40435712 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 276 ms_handle_reset con 0x55a66f973c00 session 0x55a66cc70f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 276 ms_handle_reset con 0x55a66e9c0800 session 0x55a66f5074a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129630208 unmapped: 40435712 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 277 ms_handle_reset con 0x55a66c83e000 session 0x55a66dbfe960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 277 heartbeat osd_stat(store_statfs(0x4f6726000/0x0/0x4ffc00000, data 0x400a5c0/0x4198000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129638400 unmapped: 40427520 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 277 ms_handle_reset con 0x55a66fcab400 session 0x55a66d9103c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 277 ms_handle_reset con 0x55a66da54c00 session 0x55a66d8a2d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 277 ms_handle_reset con 0x55a66ca6b400 session 0x55a66e9912c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 125558784 unmapped: 44507136 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 277 ms_handle_reset con 0x55a66f973c00 session 0x55a66f6df4a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 277 ms_handle_reset con 0x55a66eb69c00 session 0x55a66f506780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2129037 data_alloc: 218103808 data_used: 950272
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 125558784 unmapped: 44507136 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 278 ms_handle_reset con 0x55a66f973c00 session 0x55a66f40ef00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 278 ms_handle_reset con 0x55a66c83e000 session 0x55a66f105a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 278 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x2f39c49/0x30c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 125583360 unmapped: 44482560 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 278 ms_handle_reset con 0x55a66ca6b400 session 0x55a66f8821e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 278 ms_handle_reset con 0x55a66fcab400 session 0x55a66d4c3860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 279 ms_handle_reset con 0x55a66c83e000 session 0x55a66f6de5a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 125583360 unmapped: 44482560 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 279 ms_handle_reset con 0x55a66da54c00 session 0x55a66eec2780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.336144447s of 10.077365875s, submitted: 202
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 279 ms_handle_reset con 0x55a66eb63400 session 0x55a66e9ac5a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 125591552 unmapped: 44474368 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 43409408 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 281 heartbeat osd_stat(store_statfs(0x4f77ef000/0x0/0x4ffc00000, data 0x2f3d4a5/0x30ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2144187 data_alloc: 218103808 data_used: 966656
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 281 heartbeat osd_stat(store_statfs(0x4f77eb000/0x0/0x4ffc00000, data 0x2f3f0c2/0x30d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 282 ms_handle_reset con 0x55a66ee96c00 session 0x55a66d8c43c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 126681088 unmapped: 43384832 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 282 ms_handle_reset con 0x55a66fcad400 session 0x55a66eec0d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 282 ms_handle_reset con 0x55a66fca3000 session 0x55a66f40e960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 282 ms_handle_reset con 0x55a66c83e000 session 0x55a66ce223c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 282 ms_handle_reset con 0x55a66da54c00 session 0x55a66f40e780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 43376640 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 282 ms_handle_reset con 0x55a66eb63400 session 0x55a66f40e000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 282 heartbeat osd_stat(store_statfs(0x4f77eb000/0x0/0x4ffc00000, data 0x2f40cb9/0x30d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 43352064 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 282 ms_handle_reset con 0x55a66ee96c00 session 0x55a66f107a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 43352064 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 282 ms_handle_reset con 0x55a66da54c00 session 0x55a66cd43860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 282 ms_handle_reset con 0x55a66eb63400 session 0x55a66dc0f2c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 43360256 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 282 heartbeat osd_stat(store_statfs(0x4f77ec000/0x0/0x4ffc00000, data 0x2f40ca9/0x30d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2180717 data_alloc: 218103808 data_used: 6180864
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130088960 unmapped: 39976960 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130088960 unmapped: 39976960 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130088960 unmapped: 39976960 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.010863304s of 10.415848732s, submitted: 123
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130088960 unmapped: 39976960 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 283 ms_handle_reset con 0x55a66fcc7800 session 0x55a66dbfe3c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130088960 unmapped: 39976960 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2206959 data_alloc: 218103808 data_used: 9134080
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 283 handle_osd_map epochs [283,284], i have 283, src has [1,284]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130088960 unmapped: 39976960 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 284 heartbeat osd_stat(store_statfs(0x4f77e9000/0x0/0x4ffc00000, data 0x2f42706/0x30d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 284 ms_handle_reset con 0x55a66fcc2000 session 0x55a66f500b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 284 ms_handle_reset con 0x55a66c7b8800 session 0x55a66cc72d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130097152 unmapped: 39968768 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 285 ms_handle_reset con 0x55a66da54c00 session 0x55a66cc70000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 285 ms_handle_reset con 0x55a66fcc2000 session 0x55a66e8db4a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130146304 unmapped: 39919616 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 286 ms_handle_reset con 0x55a66eb63400 session 0x55a66ce230e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 286 ms_handle_reset con 0x55a66f130c00 session 0x55a66cc71860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 39870464 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 287 ms_handle_reset con 0x55a66fcafc00 session 0x55a66f36e5a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130203648 unmapped: 39862272 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 288 ms_handle_reset con 0x55a66eb63400 session 0x55a66f107860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2235016 data_alloc: 218103808 data_used: 9158656
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 288 ms_handle_reset con 0x55a66da54c00 session 0x55a66d4c3860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 288 heartbeat osd_stat(store_statfs(0x4f7643000/0x0/0x4ffc00000, data 0x31b00a5/0x3277000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133652480 unmapped: 36413440 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 289 ms_handle_reset con 0x55a66f130c00 session 0x55a66eec25a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133701632 unmapped: 36364288 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 290 ms_handle_reset con 0x55a66fcc2000 session 0x55a66f8834a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 290 ms_handle_reset con 0x55a66fca3400 session 0x55a66f5003c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 290 ms_handle_reset con 0x55a66fca9c00 session 0x55a66d90e000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 290 ms_handle_reset con 0x55a66fca3400 session 0x55a66ea0fa40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 135110656 unmapped: 34955264 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 290 ms_handle_reset con 0x55a66f130800 session 0x55a66f3014a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.142740250s of 10.021452904s, submitted: 212
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 135110656 unmapped: 34955264 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 290 ms_handle_reset con 0x55a66eb64800 session 0x55a66ce22f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 134356992 unmapped: 35708928 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2333776 data_alloc: 234881024 data_used: 10321920
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 134356992 unmapped: 35708928 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 290 ms_handle_reset con 0x55a66ee97800 session 0x55a66f5034a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 290 heartbeat osd_stat(store_statfs(0x4f6fca000/0x0/0x4ffc00000, data 0x3b3379b/0x38f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 35700736 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 35700736 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 290 ms_handle_reset con 0x55a66eb6d000 session 0x55a66e6a52c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 290 ms_handle_reset con 0x55a66e9be400 session 0x55a66cb9d2c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 290 ms_handle_reset con 0x55a66eb7f800 session 0x55a66f8821e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 290 ms_handle_reset con 0x55a66c83e000 session 0x55a66f8f1860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 134389760 unmapped: 35676160 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 290 ms_handle_reset con 0x55a66c7b8000 session 0x55a66d9103c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 290 heartbeat osd_stat(store_statfs(0x4f6fc5000/0x0/0x4ffc00000, data 0x3b367ab/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 134389760 unmapped: 35676160 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2340736 data_alloc: 234881024 data_used: 10321920
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 290 ms_handle_reset con 0x55a66e9be400 session 0x55a66cca6d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 290 handle_osd_map epochs [291,291], i have 291, src has [1,291]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 291 ms_handle_reset con 0x55a66eb6d000 session 0x55a66e991e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 134397952 unmapped: 35667968 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 292 ms_handle_reset con 0x55a66c83e000 session 0x55a66f36e780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 134397952 unmapped: 35667968 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 134397952 unmapped: 35667968 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 292 ms_handle_reset con 0x55a66ee96000 session 0x55a66dbfe3c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 293 handle_osd_map epochs [293,293], i have 293, src has [1,293]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 293 ms_handle_reset con 0x55a66f130000 session 0x55a66ea0f2c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 293 ms_handle_reset con 0x55a66da4e000 session 0x55a66f36ef00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 293 ms_handle_reset con 0x55a66eb6d000 session 0x55a66f40e780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 293 heartbeat osd_stat(store_statfs(0x4f6fbd000/0x0/0x4ffc00000, data 0x3b3bc5a/0x3900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 293 ms_handle_reset con 0x55a66eb6d800 session 0x55a66e8db4a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 293 ms_handle_reset con 0x55a66c83e000 session 0x55a66f8f05a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 293 ms_handle_reset con 0x55a66e9be400 session 0x55a66dbffa40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.267506599s of 10.018427849s, submitted: 196
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129351680 unmapped: 40714240 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 294 ms_handle_reset con 0x55a66f139000 session 0x55a66bfe3c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 294 heartbeat osd_stat(store_statfs(0x4f8c06000/0x0/0x4ffc00000, data 0x1751bc8/0x18ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 40706048 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 294 ms_handle_reset con 0x55a66c83e000 session 0x55a66cd42780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2022683 data_alloc: 218103808 data_used: 1011712
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 295 ms_handle_reset con 0x55a66da4e000 session 0x55a66cc74f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130408448 unmapped: 39657472 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130408448 unmapped: 39657472 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 295 ms_handle_reset con 0x55a66eb6d800 session 0x55a66e8db2c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130408448 unmapped: 39657472 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 296 ms_handle_reset con 0x55a66fca4c00 session 0x55a66ef81680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130433024 unmapped: 39632896 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 296 ms_handle_reset con 0x55a6707bfc00 session 0x55a66dc07c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 296 heartbeat osd_stat(store_statfs(0x4f8fc6000/0x0/0x4ffc00000, data 0x1756fe1/0x18f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 296 handle_osd_map epochs [297,297], i have 297, src has [1,297]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 297 ms_handle_reset con 0x55a66eb6d000 session 0x55a66e8dba40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130457600 unmapped: 39608320 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2032800 data_alloc: 218103808 data_used: 1028096
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130473984 unmapped: 39591936 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 39583744 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 299 ms_handle_reset con 0x55a66da4e000 session 0x55a66d8a3860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 39575552 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 39575552 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.806554794s of 10.861513138s, submitted: 86
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 39575552 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2038144 data_alloc: 218103808 data_used: 1032192
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 299 heartbeat osd_stat(store_statfs(0x4f8fbe000/0x0/0x4ffc00000, data 0x175c404/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 299 ms_handle_reset con 0x55a66c83e000 session 0x55a66d8c41e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 299 ms_handle_reset con 0x55a66eb6d800 session 0x55a66cc70b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 39575552 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 39575552 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 39575552 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 300 ms_handle_reset con 0x55a66e9bfc00 session 0x55a66f40e5a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 300 heartbeat osd_stat(store_statfs(0x4f8fbb000/0x0/0x4ffc00000, data 0x175de9f/0x1902000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 300 ms_handle_reset con 0x55a66d2bd000 session 0x55a66f37ef00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 39567360 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 300 ms_handle_reset con 0x55a66eb6fc00 session 0x55a66cc741e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 39567360 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2045638 data_alloc: 218103808 data_used: 1028096
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 300 ms_handle_reset con 0x55a66da55000 session 0x55a66e76bc20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 300 handle_osd_map epochs [300,301], i have 300, src has [1,301]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 300 handle_osd_map epochs [301,301], i have 301, src has [1,301]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130523136 unmapped: 39542784 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 301 ms_handle_reset con 0x55a66c7b8c00 session 0x55a66f4d92c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 301 ms_handle_reset con 0x55a66f972c00 session 0x55a66f36f680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130531328 unmapped: 39534592 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 302 heartbeat osd_stat(store_statfs(0x4f8fb6000/0x0/0x4ffc00000, data 0x175fb44/0x1908000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 302 ms_handle_reset con 0x55a66c7b8c00 session 0x55a66f4d8b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 302 ms_handle_reset con 0x55a66fca4800 session 0x55a66f6dfe00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 302 ms_handle_reset con 0x55a66d2bd000 session 0x55a66dc0ed20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 302 ms_handle_reset con 0x55a66fcc2000 session 0x55a66f3345a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130531328 unmapped: 39534592 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 303 ms_handle_reset con 0x55a670c18c00 session 0x55a66f6df860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 303 ms_handle_reset con 0x55a66d2bc800 session 0x55a66cc74960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130539520 unmapped: 39526400 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 303 ms_handle_reset con 0x55a66f139800 session 0x55a66f8f1a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 303 heartbeat osd_stat(store_statfs(0x4f8fac000/0x0/0x4ffc00000, data 0x1763dae/0x1910000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 303 ms_handle_reset con 0x55a66c83ec00 session 0x55a66dc0ed20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 40288256 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.746738434s of 10.328664780s, submitted: 67
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2064909 data_alloc: 218103808 data_used: 1064960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 304 ms_handle_reset con 0x55a66ee96000 session 0x55a66f4d8b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 304 ms_handle_reset con 0x55a66c83ec00 session 0x55a66f36f680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129794048 unmapped: 40271872 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 304 heartbeat osd_stat(store_statfs(0x4f8faa000/0x0/0x4ffc00000, data 0x176598f/0x1913000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 305 ms_handle_reset con 0x55a66ee96000 session 0x55a66ce223c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 305 heartbeat osd_stat(store_statfs(0x4f8fa7000/0x0/0x4ffc00000, data 0x1766eca/0x1915000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129859584 unmapped: 40206336 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 306 ms_handle_reset con 0x55a66d2bc800 session 0x55a66f4d92c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 306 ms_handle_reset con 0x55a670c18c00 session 0x55a66cd43e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 306 ms_handle_reset con 0x55a66ee97000 session 0x55a66cca01e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 306 ms_handle_reset con 0x55a66ee97000 session 0x55a66f3010e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 306 ms_handle_reset con 0x55a66f139800 session 0x55a66cc741e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 306 ms_handle_reset con 0x55a66c83ec00 session 0x55a66d8c41e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 306 ms_handle_reset con 0x55a66d2bc800 session 0x55a66d8a3860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 306 ms_handle_reset con 0x55a66da4ec00 session 0x55a66ef81680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 306 ms_handle_reset con 0x55a66c83ec00 session 0x55a66e8db2c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 306 heartbeat osd_stat(store_statfs(0x4f8fa7000/0x0/0x4ffc00000, data 0x1768a55/0x1917000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 306 ms_handle_reset con 0x55a66d2bc800 session 0x55a66cc74f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 306 ms_handle_reset con 0x55a66da4ec00 session 0x55a66bfe3c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130039808 unmapped: 40026112 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 306 heartbeat osd_stat(store_statfs(0x4f8fa7000/0x0/0x4ffc00000, data 0x1768a55/0x1917000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 306 ms_handle_reset con 0x55a66f139800 session 0x55a66f507680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 306 heartbeat osd_stat(store_statfs(0x4f8c05000/0x0/0x4ffc00000, data 0x1b09a65/0x1cb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130039808 unmapped: 40026112 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 307 ms_handle_reset con 0x55a66fca6c00 session 0x55a66f36eb40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130072576 unmapped: 39993344 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2110983 data_alloc: 218103808 data_used: 1089536
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 308 ms_handle_reset con 0x55a66d2bc800 session 0x55a66cd434a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 308 ms_handle_reset con 0x55a66ee97000 session 0x55a66dbffa40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 308 ms_handle_reset con 0x55a66da4ec00 session 0x55a66f40e780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 39952384 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 308 handle_osd_map epochs [308,309], i have 308, src has [1,309]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 309 ms_handle_reset con 0x55a66f139800 session 0x55a66ea0f2c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 309 ms_handle_reset con 0x55a66eb69c00 session 0x55a66dbfe3c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 309 ms_handle_reset con 0x55a66d2bc800 session 0x55a66d90c780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 309 ms_handle_reset con 0x55a66da4ec00 session 0x55a66eec0d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130138112 unmapped: 39927808 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 310 ms_handle_reset con 0x55a66fca0400 session 0x55a66ea0fa40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 310 heartbeat osd_stat(store_statfs(0x4f8bf6000/0x0/0x4ffc00000, data 0x1b0f4e4/0x1cc6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 310 ms_handle_reset con 0x55a66eb69c00 session 0x55a66f107c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 310 ms_handle_reset con 0x55a66c83ec00 session 0x55a66f37e000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 310 ms_handle_reset con 0x55a66ea59c00 session 0x55a66f8dd680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 310 ms_handle_reset con 0x55a66fca2000 session 0x55a66e9acf00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 310 heartbeat osd_stat(store_statfs(0x4f8bf1000/0x0/0x4ffc00000, data 0x1b10c19/0x1cc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129531904 unmapped: 40534016 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 310 ms_handle_reset con 0x55a66f13b400 session 0x55a66f37eb40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129548288 unmapped: 40517632 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 311 ms_handle_reset con 0x55a66eb71400 session 0x55a66f107860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 311 ms_handle_reset con 0x55a66c83e800 session 0x55a66cc705a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 311 ms_handle_reset con 0x55a66fca9000 session 0x55a66d8c5680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129564672 unmapped: 40501248 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2165338 data_alloc: 218103808 data_used: 4829184
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.453514099s of 10.345951080s, submitted: 137
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 312 ms_handle_reset con 0x55a66ea59c00 session 0x55a66f3012c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 312 heartbeat osd_stat(store_statfs(0x4f87dc000/0x0/0x4ffc00000, data 0x1b12dad/0x1cd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129810432 unmapped: 40255488 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 313 ms_handle_reset con 0x55a66c83e800 session 0x55a66dc06000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 313 ms_handle_reset con 0x55a66eb71400 session 0x55a66f5032c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 313 ms_handle_reset con 0x55a66fcc5800 session 0x55a66f500b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 40247296 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 40247296 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 313 ms_handle_reset con 0x55a66ea59000 session 0x55a66e8da1e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 313 ms_handle_reset con 0x55a66eb69800 session 0x55a66f8ddc20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 313 handle_osd_map epochs [313,314], i have 313, src has [1,314]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 314 ms_handle_reset con 0x55a66f139800 session 0x55a66dbcc780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 314 ms_handle_reset con 0x55a66fca4400 session 0x55a66f506780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 314 ms_handle_reset con 0x55a66fcc2400 session 0x55a66f502b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129835008 unmapped: 40230912 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 315 ms_handle_reset con 0x55a66ea59000 session 0x55a66f500780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 315 ms_handle_reset con 0x55a66dbcf800 session 0x55a66f501860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 315 ms_handle_reset con 0x55a66eb63400 session 0x55a66cd42000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 315 ms_handle_reset con 0x55a66eb69800 session 0x55a66d5b2960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 40173568 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2181172 data_alloc: 218103808 data_used: 4845568
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129900544 unmapped: 40165376 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 315 heartbeat osd_stat(store_statfs(0x4f87cf000/0x0/0x4ffc00000, data 0x1b19e87/0x1cde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 315 handle_osd_map epochs [316,316], i have 316, src has [1,316]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 316 ms_handle_reset con 0x55a66f139800 session 0x55a66f506000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 316 ms_handle_reset con 0x55a66dbcf800 session 0x55a66d8c4780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 316 ms_handle_reset con 0x55a66ea59000 session 0x55a66cb9d2c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129925120 unmapped: 40140800 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 316 handle_osd_map epochs [316,317], i have 316, src has [1,317]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 317 ms_handle_reset con 0x55a66eb63400 session 0x55a66f107680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 40099840 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 318 ms_handle_reset con 0x55a66eb69800 session 0x55a66cb9dc20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 318 ms_handle_reset con 0x55a6707bf400 session 0x55a66f4d9860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 318 ms_handle_reset con 0x55a66fca4400 session 0x55a66f5021e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 318 ms_handle_reset con 0x55a6707bec00 session 0x55a66f3001e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 318 ms_handle_reset con 0x55a66dbcf800 session 0x55a66f300780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 40288256 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 318 heartbeat osd_stat(store_statfs(0x4f87c7000/0x0/0x4ffc00000, data 0x1b1ed4e/0x1ce5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 40288256 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 318 ms_handle_reset con 0x55a66eb63400 session 0x55a66f6def00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2190228 data_alloc: 218103808 data_used: 4874240
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.387736320s of 10.235712051s, submitted: 133
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 319 ms_handle_reset con 0x55a66eb69800 session 0x55a66d5b3c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131170304 unmapped: 38895616 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 320 handle_osd_map epochs [320,321], i have 320, src has [1,321]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 321 ms_handle_reset con 0x55a66eb6cc00 session 0x55a66bfe2960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 321 ms_handle_reset con 0x55a66ea59000 session 0x55a66f1052c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132333568 unmapped: 37732352 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 321 handle_osd_map epochs [321,322], i have 321, src has [1,322]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 322 handle_osd_map epochs [322,322], i have 322, src has [1,322]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 322 ms_handle_reset con 0x55a66dbcf800 session 0x55a66dbffc20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132399104 unmapped: 37666816 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 322 ms_handle_reset con 0x55a66da54400 session 0x55a66dbffc20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 322 ms_handle_reset con 0x55a66c83f800 session 0x55a66d8c4780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131727360 unmapped: 38338560 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 324 ms_handle_reset con 0x55a66da54400 session 0x55a66f506000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 324 ms_handle_reset con 0x55a66dbcf800 session 0x55a66f8ddc20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 38297600 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2222922 data_alloc: 218103808 data_used: 4943872
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 324 heartbeat osd_stat(store_statfs(0x4f862b000/0x0/0x4ffc00000, data 0x1cb61e7/0x1e81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 38297600 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 325 heartbeat osd_stat(store_statfs(0x4f862a000/0x0/0x4ffc00000, data 0x1cb7c50/0x1e83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 325 handle_osd_map epochs [326,326], i have 326, src has [1,326]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131792896 unmapped: 38273024 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 326 ms_handle_reset con 0x55a66ea59000 session 0x55a66e8da1e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 38215680 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 327 ms_handle_reset con 0x55a66f130c00 session 0x55a66f5032c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131858432 unmapped: 38207488 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 327 ms_handle_reset con 0x55a66ffa1c00 session 0x55a66cca14a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131858432 unmapped: 38207488 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2228826 data_alloc: 218103808 data_used: 4935680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131858432 unmapped: 38207488 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 327 heartbeat osd_stat(store_statfs(0x4f8625000/0x0/0x4ffc00000, data 0x1cbb428/0x1e86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.779938698s of 10.539535522s, submitted: 218
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 328 heartbeat osd_stat(store_statfs(0x4f8621000/0x0/0x4ffc00000, data 0x1cbcf5f/0x1e89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 38199296 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 329 heartbeat osd_stat(store_statfs(0x4f8624000/0x0/0x4ffc00000, data 0x1cbcf5f/0x1e89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131899392 unmapped: 38166528 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132956160 unmapped: 37109760 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 330 handle_osd_map epochs [330,331], i have 330, src has [1,331]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 331 ms_handle_reset con 0x55a66eb62000 session 0x55a66ea0e780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 331 heartbeat osd_stat(store_statfs(0x4f8618000/0x0/0x4ffc00000, data 0x1cc23b8/0x1e93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131981312 unmapped: 38084608 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2242434 data_alloc: 218103808 data_used: 4935680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 332 ms_handle_reset con 0x55a66da54400 session 0x55a66cc70b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 332 ms_handle_reset con 0x55a66fca7000 session 0x55a66e9ad2c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 332 ms_handle_reset con 0x55a66dbcf800 session 0x55a66f5014a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 332 ms_handle_reset con 0x55a66fcc5400 session 0x55a66f8f0f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 332 handle_osd_map epochs [332,333], i have 332, src has [1,333]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132055040 unmapped: 38010880 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 333 heartbeat osd_stat(store_statfs(0x4f8613000/0x0/0x4ffc00000, data 0x1cc4179/0x1e97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 333 ms_handle_reset con 0x55a66dbcf800 session 0x55a66d9103c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132055040 unmapped: 38010880 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 333 handle_osd_map epochs [333,334], i have 333, src has [1,334]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 334 ms_handle_reset con 0x55a66da54400 session 0x55a66dc0e5a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 334 ms_handle_reset con 0x55a66eb62000 session 0x55a66d90e780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132079616 unmapped: 37986304 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 334 heartbeat osd_stat(store_statfs(0x4f8612000/0x0/0x4ffc00000, data 0x1cc7735/0x1e99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 334 ms_handle_reset con 0x55a66c7b8400 session 0x55a66dbcda40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132104192 unmapped: 37961728 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 334 handle_osd_map epochs [335,335], i have 334, src has [1,335]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 335 ms_handle_reset con 0x55a66fcaf800 session 0x55a66e8dbe00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 335 ms_handle_reset con 0x55a66da54400 session 0x55a66f8f0780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 335 ms_handle_reset con 0x55a66fcaf800 session 0x55a66e9ade00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132136960 unmapped: 37928960 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2253236 data_alloc: 218103808 data_used: 4923392
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 335 handle_osd_map epochs [335,336], i have 335, src has [1,336]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 335 handle_osd_map epochs [336,336], i have 336, src has [1,336]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 336 ms_handle_reset con 0x55a66c7b8400 session 0x55a66f506d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 336 ms_handle_reset con 0x55a66dbcf800 session 0x55a66f4d81e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 336 ms_handle_reset con 0x55a66eb63400 session 0x55a66f4d9c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132161536 unmapped: 37904384 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.105732918s of 10.605552673s, submitted: 180
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132186112 unmapped: 37879808 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 337 handle_osd_map epochs [338,338], i have 337, src has [1,338]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 338 ms_handle_reset con 0x55a66c7b8400 session 0x55a66f4d8d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 338 ms_handle_reset con 0x55a66da54400 session 0x55a66f5001e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 338 heartbeat osd_stat(store_statfs(0x4f860a000/0x0/0x4ffc00000, data 0x1cccef3/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132218880 unmapped: 37847040 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 338 ms_handle_reset con 0x55a66f13a000 session 0x55a66cd43860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132218880 unmapped: 37847040 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132218880 unmapped: 37847040 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2265719 data_alloc: 218103808 data_used: 4923392
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132227072 unmapped: 37838848 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 338 handle_osd_map epochs [338,339], i have 338, src has [1,339]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 339 heartbeat osd_stat(store_statfs(0x4f8606000/0x0/0x4ffc00000, data 0x1ccead4/0x1ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132243456 unmapped: 37822464 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132243456 unmapped: 37822464 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 339 handle_osd_map epochs [339,340], i have 339, src has [1,340]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 340 ms_handle_reset con 0x55a66ffa0c00 session 0x55a66e8db680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132292608 unmapped: 37773312 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 340 handle_osd_map epochs [341,341], i have 340, src has [1,341]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 341 ms_handle_reset con 0x55a66f972400 session 0x55a66cc71680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 341 ms_handle_reset con 0x55a66ea31c00 session 0x55a66f6dfa40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 341 ms_handle_reset con 0x55a66eb6d800 session 0x55a66cca0d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132300800 unmapped: 37765120 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2280148 data_alloc: 218103808 data_used: 4923392
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 341 handle_osd_map epochs [342,342], i have 341, src has [1,342]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 342 ms_handle_reset con 0x55a66c7b8400 session 0x55a66f301e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 342 heartbeat osd_stat(store_statfs(0x4f85fa000/0x0/0x4ffc00000, data 0x1cd3e6c/0x1eb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 342 ms_handle_reset con 0x55a66da54400 session 0x55a66ea0e3c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132317184 unmapped: 37748736 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 342 ms_handle_reset con 0x55a6707bfc00 session 0x55a66d90fa40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132317184 unmapped: 37748736 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 342 handle_osd_map epochs [343,343], i have 342, src has [1,343]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.313949585s of 10.521257401s, submitted: 79
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 343 ms_handle_reset con 0x55a66c7b8400 session 0x55a66f501c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 343 ms_handle_reset con 0x55a66ea31c00 session 0x55a66f300f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 343 ms_handle_reset con 0x55a66da54400 session 0x55a66d8c5e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 343 ms_handle_reset con 0x55a66eb6d800 session 0x55a66f37e1e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132366336 unmapped: 37699584 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 344 ms_handle_reset con 0x55a66fcc4400 session 0x55a66f40f0e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 344 ms_handle_reset con 0x55a66ca6b400 session 0x55a66f104780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 344 ms_handle_reset con 0x55a66c7b8400 session 0x55a66d90fa40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 344 ms_handle_reset con 0x55a66da54400 session 0x55a66dbcda40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131309568 unmapped: 38756352 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 344 ms_handle_reset con 0x55a66ea31c00 session 0x55a66f8f0f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 345 ms_handle_reset con 0x55a66eb6d800 session 0x55a66d8c52c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 345 ms_handle_reset con 0x55a66c7b8400 session 0x55a66f5005a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 39297024 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2217008 data_alloc: 218103808 data_used: 1159168
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 345 handle_osd_map epochs [346,346], i have 345, src has [1,346]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 346 heartbeat osd_stat(store_statfs(0x4f8b23000/0x0/0x4ffc00000, data 0x17ac6bd/0x198a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 38223872 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 346 ms_handle_reset con 0x55a66da54400 session 0x55a66cc710e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 346 ms_handle_reset con 0x55a66ca6b400 session 0x55a66f8f03c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 38223872 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 347 ms_handle_reset con 0x55a66ea31c00 session 0x55a66cc73a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 38223872 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 347 heartbeat osd_stat(store_statfs(0x4f8b20000/0x0/0x4ffc00000, data 0x17aff72/0x198c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 347 ms_handle_reset con 0x55a66f130c00 session 0x55a66f5025a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 347 ms_handle_reset con 0x55a66ea2e400 session 0x55a66dc072c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 347 ms_handle_reset con 0x55a66f130c00 session 0x55a66f507860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 38223872 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 347 heartbeat osd_stat(store_statfs(0x4f8b20000/0x0/0x4ffc00000, data 0x17aff72/0x198c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 38223872 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2220511 data_alloc: 218103808 data_used: 1171456
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 347 handle_osd_map epochs [347,348], i have 347, src has [1,348]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 38223872 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 38223872 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.317691803s of 10.310086250s, submitted: 212
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 348 ms_handle_reset con 0x55a66fcc7400 session 0x55a66f106960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 348 ms_handle_reset con 0x55a66eb63800 session 0x55a66d90e5a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 348 heartbeat osd_stat(store_statfs(0x4f8b1e000/0x0/0x4ffc00000, data 0x17b1a61/0x198f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 348 ms_handle_reset con 0x55a66eaf8000 session 0x55a66d8a30e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 38199296 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 348 ms_handle_reset con 0x55a66ea2e400 session 0x55a66f5074a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 348 ms_handle_reset con 0x55a66eb63800 session 0x55a66f5001e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 38199296 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 348 ms_handle_reset con 0x55a66f130c00 session 0x55a66cc71680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 38191104 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 348 ms_handle_reset con 0x55a66e9bf800 session 0x55a66d911860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2235237 data_alloc: 218103808 data_used: 1171456
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 348 ms_handle_reset con 0x55a66fcc7400 session 0x55a66dbcd4a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 38191104 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 349 ms_handle_reset con 0x55a66e9bf800 session 0x55a66f300780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 349 ms_handle_reset con 0x55a66f130c00 session 0x55a66e76ad20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 349 ms_handle_reset con 0x55a66ea2e400 session 0x55a66e9acf00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 349 heartbeat osd_stat(store_statfs(0x4f8b1c000/0x0/0x4ffc00000, data 0x17b1ae3/0x1992000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 38182912 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 349 handle_osd_map epochs [349,350], i have 349, src has [1,350]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 349 handle_osd_map epochs [350,350], i have 350, src has [1,350]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 350 ms_handle_reset con 0x55a66eb7fc00 session 0x55a66f37e780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 350 ms_handle_reset con 0x55a67118a400 session 0x55a66cc712c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 350 ms_handle_reset con 0x55a66e9bf800 session 0x55a66d90f680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131899392 unmapped: 38166528 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 350 handle_osd_map epochs [350,351], i have 350, src has [1,351]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 351 ms_handle_reset con 0x55a670861800 session 0x55a66f501680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 351 ms_handle_reset con 0x55a66eb63800 session 0x55a66ef805a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 38141952 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 351 ms_handle_reset con 0x55a66ea2e400 session 0x55a66ea0f0e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 351 ms_handle_reset con 0x55a66eb7fc00 session 0x55a66cc712c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132022272 unmapped: 38043648 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 351 handle_osd_map epochs [352,352], i have 351, src has [1,352]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2250431 data_alloc: 218103808 data_used: 1196032
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 352 ms_handle_reset con 0x55a66e9bf800 session 0x55a66f106960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 352 ms_handle_reset con 0x55a66ea2e400 session 0x55a66cc73a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 352 ms_handle_reset con 0x55a66eb63800 session 0x55a66f5005a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132104192 unmapped: 37961728 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 352 handle_osd_map epochs [353,353], i have 352, src has [1,353]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 353 ms_handle_reset con 0x55a670861800 session 0x55a66f37e780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 353 ms_handle_reset con 0x55a66f130c00 session 0x55a66f501680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 353 heartbeat osd_stat(store_statfs(0x4f8b10000/0x0/0x4ffc00000, data 0x17b878f/0x199b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132104192 unmapped: 37961728 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132104192 unmapped: 37961728 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 353 ms_handle_reset con 0x55a66da55800 session 0x55a66d5b34a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 353 ms_handle_reset con 0x55a66ee97800 session 0x55a66f8dd680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132104192 unmapped: 37961728 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132104192 unmapped: 37961728 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2248162 data_alloc: 218103808 data_used: 1191936
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.328013420s of 13.113635063s, submitted: 184
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 353 ms_handle_reset con 0x55a66fcac800 session 0x55a66ef801e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 353 ms_handle_reset con 0x55a66dbce400 session 0x55a66cca7c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 353 handle_osd_map epochs [353,354], i have 353, src has [1,354]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132128768 unmapped: 37937152 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 354 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x17bbe27/0x19a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132128768 unmapped: 37937152 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 354 ms_handle_reset con 0x55a66fca3000 session 0x55a66cc71860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 354 ms_handle_reset con 0x55a66ffa0000 session 0x55a66dbffc20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 354 ms_handle_reset con 0x55a66da4fc00 session 0x55a66f5014a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 354 ms_handle_reset con 0x55a66f137400 session 0x55a66d910000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 354 ms_handle_reset con 0x55a66f13b400 session 0x55a66f36ef00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 354 ms_handle_reset con 0x55a66ea59800 session 0x55a66f8830e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132128768 unmapped: 37937152 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 354 ms_handle_reset con 0x55a66da4fc00 session 0x55a66ea0fc20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 354 ms_handle_reset con 0x55a66f137400 session 0x55a66f105a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 354 ms_handle_reset con 0x55a66f13b400 session 0x55a66f36fe00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 354 ms_handle_reset con 0x55a66ffa0000 session 0x55a66f8f01e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 354 ms_handle_reset con 0x55a66eb64800 session 0x55a66e8dbc20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 37470208 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 354 ms_handle_reset con 0x55a66eb64800 session 0x55a66f8f0d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 354 ms_handle_reset con 0x55a66da4fc00 session 0x55a66cc743c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 354 ms_handle_reset con 0x55a66f13b400 session 0x55a66f300b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 354 ms_handle_reset con 0x55a66f137400 session 0x55a66eec25a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132612096 unmapped: 37453824 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 354 ms_handle_reset con 0x55a66ffa0000 session 0x55a66f503c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2261433 data_alloc: 218103808 data_used: 1200128
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132612096 unmapped: 37453824 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 354 ms_handle_reset con 0x55a66da4fc00 session 0x55a66d90f860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 354 ms_handle_reset con 0x55a66f137400 session 0x55a66e76a3c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 354 ms_handle_reset con 0x55a66eb64800 session 0x55a66dbccf00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132612096 unmapped: 37453824 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 354 heartbeat osd_stat(store_statfs(0x4f8b0a000/0x0/0x4ffc00000, data 0x17bbe99/0x19a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 354 ms_handle_reset con 0x55a66fca1400 session 0x55a66cb9c5a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 354 ms_handle_reset con 0x55a66f139000 session 0x55a66cca0d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 355 ms_handle_reset con 0x55a66f13ac00 session 0x55a66dbccf00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132620288 unmapped: 37445632 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132620288 unmapped: 37445632 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 355 ms_handle_reset con 0x55a66da4fc00 session 0x55a66e701860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 355 ms_handle_reset con 0x55a66f137400 session 0x55a66f37e960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 355 ms_handle_reset con 0x55a66eb64800 session 0x55a66e8dbc20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132620288 unmapped: 37445632 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2264681 data_alloc: 218103808 data_used: 1212416
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.860582352s of 10.009658813s, submitted: 109
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 355 ms_handle_reset con 0x55a66fca2000 session 0x55a66f37f0e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 355 ms_handle_reset con 0x55a66fca1800 session 0x55a66f36ef00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132620288 unmapped: 37445632 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 355 heartbeat osd_stat(store_statfs(0x4f8b08000/0x0/0x4ffc00000, data 0x17bda46/0x19a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [0,0,0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 355 ms_handle_reset con 0x55a66fca1400 session 0x55a66f4d8000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132620288 unmapped: 37445632 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 356 ms_handle_reset con 0x55a66eb64800 session 0x55a66d910000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 356 ms_handle_reset con 0x55a66da4fc00 session 0x55a66f8dde00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 356 ms_handle_reset con 0x55a66f137400 session 0x55a66f5014a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 356 ms_handle_reset con 0x55a66da54c00 session 0x55a66dbffc20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132628480 unmapped: 37437440 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 356 ms_handle_reset con 0x55a66f137400 session 0x55a66f8dc5a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 356 ms_handle_reset con 0x55a66e9bf000 session 0x55a66da40960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 356 ms_handle_reset con 0x55a66f131400 session 0x55a66f8dd680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132644864 unmapped: 37421056 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 356 ms_handle_reset con 0x55a66da4e000 session 0x55a66da41a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 356 ms_handle_reset con 0x55a66da54c00 session 0x55a66e7805a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132644864 unmapped: 37421056 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2265775 data_alloc: 218103808 data_used: 1220608
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 356 heartbeat osd_stat(store_statfs(0x4f8b07000/0x0/0x4ffc00000, data 0x17bf565/0x19a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132644864 unmapped: 37421056 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 356 ms_handle_reset con 0x55a66e9bf000 session 0x55a66e7812c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 37412864 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 356 heartbeat osd_stat(store_statfs(0x4f8b05000/0x0/0x4ffc00000, data 0x17bf5d6/0x19a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 37412864 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 356 ms_handle_reset con 0x55a66eb7f400 session 0x55a66cca7c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 356 ms_handle_reset con 0x55a66fcc3c00 session 0x55a66f501680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 356 handle_osd_map epochs [356,357], i have 356, src has [1,357]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 357 ms_handle_reset con 0x55a66f135800 session 0x55a66e781c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 37412864 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 357 ms_handle_reset con 0x55a66da54c00 session 0x55a66f883680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 37412864 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2273699 data_alloc: 218103808 data_used: 1228800
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 357 ms_handle_reset con 0x55a66e9bf000 session 0x55a66f882f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.286134720s of 10.161173820s, submitted: 69
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 357 ms_handle_reset con 0x55a66fcc3c00 session 0x55a66f8dcd20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 358 ms_handle_reset con 0x55a66eb7f400 session 0x55a66f883e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 358 ms_handle_reset con 0x55a66f13b400 session 0x55a66f883860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 358 ms_handle_reset con 0x55a66ffa1800 session 0x55a66f8dcb40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 358 ms_handle_reset con 0x55a66da54c00 session 0x55a66cca6780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132685824 unmapped: 37380096 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 358 ms_handle_reset con 0x55a66e9bf000 session 0x55a66cc71860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 358 ms_handle_reset con 0x55a66eb7f400 session 0x55a66f5063c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132685824 unmapped: 37380096 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 358 heartbeat osd_stat(store_statfs(0x4f8aff000/0x0/0x4ffc00000, data 0x17c2b61/0x19ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132685824 unmapped: 37380096 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 358 heartbeat osd_stat(store_statfs(0x4f8aff000/0x0/0x4ffc00000, data 0x17c2b61/0x19ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 132702208 unmapped: 37363712 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 359 ms_handle_reset con 0x55a66d2bdc00 session 0x55a66dbcd860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 360 ms_handle_reset con 0x55a66d2bdc00 session 0x55a66ce221e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133767168 unmapped: 36298752 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2284716 data_alloc: 218103808 data_used: 1241088
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133775360 unmapped: 36290560 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 360 heartbeat osd_stat(store_statfs(0x4f8af8000/0x0/0x4ffc00000, data 0x17c62ee/0x19b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 360 ms_handle_reset con 0x55a66da54c00 session 0x55a66f5061e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 360 ms_handle_reset con 0x55a66eb7f400 session 0x55a66cca10e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 360 ms_handle_reset con 0x55a66e9bf000 session 0x55a66d8a25a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 360 ms_handle_reset con 0x55a66ffa1800 session 0x55a66f301a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133799936 unmapped: 36265984 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133808128 unmapped: 36257792 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 360 handle_osd_map epochs [360,361], i have 360, src has [1,361]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 361 ms_handle_reset con 0x55a66d2bd000 session 0x55a66d90c5a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133824512 unmapped: 36241408 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 361 heartbeat osd_stat(store_statfs(0x4f8af8000/0x0/0x4ffc00000, data 0x17c6340/0x19b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133824512 unmapped: 36241408 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2293934 data_alloc: 218103808 data_used: 1253376
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 361 ms_handle_reset con 0x55a66fca2400 session 0x55a66f506b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.579463005s of 10.000776291s, submitted: 93
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 361 ms_handle_reset con 0x55a66fcaa000 session 0x55a66f300780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133832704 unmapped: 36233216 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 362 ms_handle_reset con 0x55a66fcc6800 session 0x55a66f36e3c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133857280 unmapped: 36208640 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133898240 unmapped: 36167680 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 363 ms_handle_reset con 0x55a670860800 session 0x55a66f8f1860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 363 heartbeat osd_stat(store_statfs(0x4f8aed000/0x0/0x4ffc00000, data 0x17cb8b9/0x19c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133898240 unmapped: 36167680 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133898240 unmapped: 36167680 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2304869 data_alloc: 218103808 data_used: 1277952
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 363 ms_handle_reset con 0x55a66d2bd000 session 0x55a66cb9c960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 363 heartbeat osd_stat(store_statfs(0x4f8aed000/0x0/0x4ffc00000, data 0x17cb8b9/0x19c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 363 ms_handle_reset con 0x55a66fcaa000 session 0x55a66bfe34a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 363 ms_handle_reset con 0x55a66fca2400 session 0x55a66cc712c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 363 ms_handle_reset con 0x55a66fcc6800 session 0x55a66d8a23c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133906432 unmapped: 36159488 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 363 handle_osd_map epochs [363,364], i have 363, src has [1,364]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 364 ms_handle_reset con 0x55a66eb64800 session 0x55a66dc07c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 36110336 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 364 ms_handle_reset con 0x55a66fca1c00 session 0x55a66eec21e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 364 ms_handle_reset con 0x55a66ee96000 session 0x55a66cc74780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 36077568 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 36077568 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 364 ms_handle_reset con 0x55a66fcaac00 session 0x55a66f104780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 364 ms_handle_reset con 0x55a66cc65400 session 0x55a66cca6d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 36077568 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 364 heartbeat osd_stat(store_statfs(0x4f8ae7000/0x0/0x4ffc00000, data 0x17cd4ca/0x19c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2314711 data_alloc: 218103808 data_used: 1286144
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.630390167s of 10.001489639s, submitted: 42
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 364 ms_handle_reset con 0x55a66d2bd800 session 0x55a66dbfe5a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 364 handle_osd_map epochs [364,365], i have 364, src has [1,365]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133971968 unmapped: 36093952 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 365 ms_handle_reset con 0x55a66cc65400 session 0x55a66da40d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 365 ms_handle_reset con 0x55a66fca1c00 session 0x55a66e8da000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 36175872 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 365 ms_handle_reset con 0x55a66ea58400 session 0x55a66f40e780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 365 handle_osd_map epochs [366,366], i have 365, src has [1,366]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133906432 unmapped: 36159488 heap: 170065920 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 366 ms_handle_reset con 0x55a66fcac000 session 0x55a66f36eb40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 366 ms_handle_reset con 0x55a66ee96000 session 0x55a66cca1e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 366 ms_handle_reset con 0x55a66cc65400 session 0x55a66f5025a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 366 ms_handle_reset con 0x55a66ea58400 session 0x55a66f4d90e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 366 ms_handle_reset con 0x55a66fca1c00 session 0x55a66f37e3c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 366 ms_handle_reset con 0x55a66fcac000 session 0x55a66bfe2b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 138272768 unmapped: 35471360 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 366 ms_handle_reset con 0x55a66eb63400 session 0x55a66e990d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 134340608 unmapped: 39403520 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2391022 data_alloc: 218103808 data_used: 1306624
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 366 ms_handle_reset con 0x55a66cc65400 session 0x55a66d4c3860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 366 handle_osd_map epochs [367,367], i have 366, src has [1,367]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 367 ms_handle_reset con 0x55a66ea58400 session 0x55a66f36e780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 367 heartbeat osd_stat(store_statfs(0x4f7dde000/0x0/0x4ffc00000, data 0x20c5672/0x22bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b5f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133128192 unmapped: 40615936 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133128192 unmapped: 40615936 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133128192 unmapped: 40615936 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 367 ms_handle_reset con 0x55a66ffa1c00 session 0x55a66f40fa40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 367 ms_handle_reset con 0x55a66fcc6000 session 0x55a66f40f0e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133128192 unmapped: 40615936 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 367 ms_handle_reset con 0x55a66f131400 session 0x55a66d5b3c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 367 ms_handle_reset con 0x55a66cc65400 session 0x55a66d5b2960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133128192 unmapped: 40615936 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2393027 data_alloc: 218103808 data_used: 1318912
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 367 handle_osd_map epochs [368,368], i have 367, src has [1,368]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.757629395s of 10.269737244s, submitted: 123
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 368 ms_handle_reset con 0x55a66fcc6000 session 0x55a66d8c41e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 133136384 unmapped: 40607744 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 368 heartbeat osd_stat(store_statfs(0x4f7dda000/0x0/0x4ffc00000, data 0x20c70e5/0x22c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b5f9c6), peers [0,2] op hist [1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 136732672 unmapped: 37011456 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 369 ms_handle_reset con 0x55a66ffa1c00 session 0x55a66f3010e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 369 ms_handle_reset con 0x55a66ca6b400 session 0x55a66d90c960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 369 heartbeat osd_stat(store_statfs(0x4f7d3f000/0x0/0x4ffc00000, data 0x215ecc4/0x235d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b5f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 369 ms_handle_reset con 0x55a66c7b8c00 session 0x55a66f8f1a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 369 ms_handle_reset con 0x55a66ca6b400 session 0x55a66f36e780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 36249600 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 369 ms_handle_reset con 0x55a6707be400 session 0x55a66e8dba40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 369 ms_handle_reset con 0x55a66fcc6000 session 0x55a66f8832c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 369 ms_handle_reset con 0x55a66cc65400 session 0x55a66e9903c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 369 heartbeat osd_stat(store_statfs(0x4f7d3f000/0x0/0x4ffc00000, data 0x215ecc4/0x235d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b5f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 137510912 unmapped: 36233216 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 369 ms_handle_reset con 0x55a66ffa1c00 session 0x55a66f5001e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 137510912 unmapped: 36233216 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2565591 data_alloc: 218103808 data_used: 9236480
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 137510912 unmapped: 36233216 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 369 ms_handle_reset con 0x55a66cc65400 session 0x55a66f1054a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 369 handle_osd_map epochs [370,370], i have 369, src has [1,370]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 370 ms_handle_reset con 0x55a66ca6b400 session 0x55a66cd43680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 370 ms_handle_reset con 0x55a66fcc6000 session 0x55a66f6dfc20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 137510912 unmapped: 36233216 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 370 ms_handle_reset con 0x55a66fcab800 session 0x55a66e9acd20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 370 ms_handle_reset con 0x55a66ea2f400 session 0x55a66eec25a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 370 ms_handle_reset con 0x55a66ee97000 session 0x55a66f40e960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 137887744 unmapped: 35856384 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 370 heartbeat osd_stat(store_statfs(0x4f7106000/0x0/0x4ffc00000, data 0x2d958b2/0x2f97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b5f9c6), peers [0,2] op hist [0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 370 ms_handle_reset con 0x55a66cc65400 session 0x55a66dbfe000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 137887744 unmapped: 35856384 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 370 heartbeat osd_stat(store_statfs(0x4f7106000/0x0/0x4ffc00000, data 0x2d95850/0x2f96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b5f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 371 ms_handle_reset con 0x55a66fcab800 session 0x55a66dc072c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 138338304 unmapped: 35405824 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2623992 data_alloc: 234881024 data_used: 15286272
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 147873792 unmapped: 25870336 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 371 ms_handle_reset con 0x55a66fcc6000 session 0x55a66dbccd20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.753238678s of 10.718365669s, submitted: 112
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 371 ms_handle_reset con 0x55a66fcab000 session 0x55a66cd43c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153681920 unmapped: 20062208 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 371 heartbeat osd_stat(store_statfs(0x4f6e9f000/0x0/0x4ffc00000, data 0x2ffc421/0x31fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b5f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 371 ms_handle_reset con 0x55a66d2bc400 session 0x55a66f6ded20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 372 ms_handle_reset con 0x55a66c83e800 session 0x55a66f8f0b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 152625152 unmapped: 21118976 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 372 ms_handle_reset con 0x55a66fcac000 session 0x55a66cd43e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 372 heartbeat osd_stat(store_statfs(0x4f59c5000/0x0/0x4ffc00000, data 0x3334f90/0x3537000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 372 ms_handle_reset con 0x55a66da54c00 session 0x55a66da40d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 152772608 unmapped: 20971520 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 152772608 unmapped: 20971520 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2723322 data_alloc: 234881024 data_used: 23232512
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 372 handle_osd_map epochs [373,373], i have 372, src has [1,373]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 152772608 unmapped: 20971520 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 152772608 unmapped: 20971520 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 152772608 unmapped: 20971520 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 373 ms_handle_reset con 0x55a66eb6fc00 session 0x55a66e9ade00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 152772608 unmapped: 20971520 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 373 heartbeat osd_stat(store_statfs(0x4f59c4000/0x0/0x4ffc00000, data 0x3336a0f/0x353a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154116096 unmapped: 19628032 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2827186 data_alloc: 234881024 data_used: 23252992
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 374 ms_handle_reset con 0x55a66da54c00 session 0x55a66f36e1e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 158105600 unmapped: 15638528 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.294995308s of 10.001763344s, submitted: 243
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 375 ms_handle_reset con 0x55a66fcac000 session 0x55a66e8da780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 159277056 unmapped: 14467072 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 375 handle_osd_map epochs [375,376], i have 375, src has [1,376]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 376 ms_handle_reset con 0x55a670860000 session 0x55a66f106960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 376 ms_handle_reset con 0x55a66d2bc400 session 0x55a66dc0e5a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 160415744 unmapped: 13328384 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 160612352 unmapped: 13131776 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 377 ms_handle_reset con 0x55a66eb6f400 session 0x55a66ea0e1e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 377 heartbeat osd_stat(store_statfs(0x4f4b48000/0x0/0x4ffc00000, data 0x41a173d/0x43aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 159244288 unmapped: 14499840 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865093 data_alloc: 234881024 data_used: 25841664
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 377 handle_osd_map epochs [377,378], i have 377, src has [1,378]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 378 ms_handle_reset con 0x55a66eb6f400 session 0x55a66f40f680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 378 ms_handle_reset con 0x55a66d2bc400 session 0x55a66e9ac5a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 159285248 unmapped: 14458880 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 378 ms_handle_reset con 0x55a66da4e400 session 0x55a66cc72000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 159424512 unmapped: 14319616 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 378 heartbeat osd_stat(store_statfs(0x4f4b51000/0x0/0x4ffc00000, data 0x41a331a/0x43ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 159424512 unmapped: 14319616 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 378 ms_handle_reset con 0x55a66ea58400 session 0x55a66f506780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 378 ms_handle_reset con 0x55a66eaf9c00 session 0x55a66f37f2c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 378 heartbeat osd_stat(store_statfs(0x4f4b2f000/0x0/0x4ffc00000, data 0x41c531a/0x43ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 378 ms_handle_reset con 0x55a66fcc5800 session 0x55a66dbcc960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154632192 unmapped: 19111936 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154632192 unmapped: 19111936 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2706240 data_alloc: 234881024 data_used: 16830464
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154632192 unmapped: 19111936 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 378 heartbeat osd_stat(store_statfs(0x4f597d000/0x0/0x4ffc00000, data 0x337638c/0x3581000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.008281708s of 10.603652000s, submitted: 111
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154632192 unmapped: 19111936 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 379 ms_handle_reset con 0x55a66fca3000 session 0x55a66ef80b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 379 ms_handle_reset con 0x55a66d8eec00 session 0x55a66f6de1e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 379 heartbeat osd_stat(store_statfs(0x4f5973000/0x0/0x4ffc00000, data 0x337de0b/0x358a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154632192 unmapped: 19111936 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 379 ms_handle_reset con 0x55a6707bfc00 session 0x55a66f506f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154632192 unmapped: 19111936 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 379 ms_handle_reset con 0x55a66eb64800 session 0x55a66f501a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 379 handle_osd_map epochs [379,380], i have 379, src has [1,380]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 380 heartbeat osd_stat(store_statfs(0x4f59b5000/0x0/0x4ffc00000, data 0x333ddfc/0x3549000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154656768 unmapped: 19087360 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 380 ms_handle_reset con 0x55a66eb64800 session 0x55a66f506d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2714774 data_alloc: 234881024 data_used: 16621568
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 380 handle_osd_map epochs [381,381], i have 380, src has [1,381]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 381 ms_handle_reset con 0x55a66d8eec00 session 0x55a66e701a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 381 ms_handle_reset con 0x55a66eaf8800 session 0x55a66cc730e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154714112 unmapped: 19030016 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154714112 unmapped: 19030016 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 382 ms_handle_reset con 0x55a66fca3000 session 0x55a66ea0e1e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 382 ms_handle_reset con 0x55a66fcc5800 session 0x55a66f501860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 382 heartbeat osd_stat(store_statfs(0x4f59a9000/0x0/0x4ffc00000, data 0x3343331/0x3554000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154746880 unmapped: 18997248 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154755072 unmapped: 18989056 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 383 ms_handle_reset con 0x55a66d8eec00 session 0x55a66f106960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154755072 unmapped: 18989056 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 383 ms_handle_reset con 0x55a66ea31c00 session 0x55a66f36e1e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2730477 data_alloc: 234881024 data_used: 16637952
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 384 ms_handle_reset con 0x55a66eaf8800 session 0x55a66d90f860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154755072 unmapped: 18989056 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 384 heartbeat osd_stat(store_statfs(0x4f5994000/0x0/0x4ffc00000, data 0x3354ad3/0x3568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 384 heartbeat osd_stat(store_statfs(0x4f5994000/0x0/0x4ffc00000, data 0x3354ad3/0x3568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.876488686s of 10.087991714s, submitted: 118
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154755072 unmapped: 18989056 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154755072 unmapped: 18989056 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154755072 unmapped: 18989056 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 384 handle_osd_map epochs [385,385], i have 385, src has [1,385]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f5996000/0x0/0x4ffc00000, data 0x3354ad3/0x3568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154755072 unmapped: 18989056 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 385 ms_handle_reset con 0x55a66eb6e800 session 0x55a66f8f0b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732523 data_alloc: 234881024 data_used: 16646144
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 385 handle_osd_map epochs [385,386], i have 385, src has [1,386]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155009024 unmapped: 18735104 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155009024 unmapped: 18735104 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 386 heartbeat osd_stat(store_statfs(0x4f598a000/0x0/0x4ffc00000, data 0x335d123/0x3573000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 386 ms_handle_reset con 0x55a66eb69c00 session 0x55a66f4d9a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155009024 unmapped: 18735104 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 386 ms_handle_reset con 0x55a66f134800 session 0x55a66cd43c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155009024 unmapped: 18735104 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 386 ms_handle_reset con 0x55a66ea31c00 session 0x55a66f40e960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 386 ms_handle_reset con 0x55a66d8eec00 session 0x55a66dc072c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155009024 unmapped: 18735104 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2734613 data_alloc: 234881024 data_used: 16654336
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 386 handle_osd_map epochs [386,387], i have 386, src has [1,387]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 387 ms_handle_reset con 0x55a66eaf8800 session 0x55a66e9acd20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155009024 unmapped: 18735104 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 387 ms_handle_reset con 0x55a66eb6e800 session 0x55a66f6dfc20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155009024 unmapped: 18735104 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 387 handle_osd_map epochs [387,388], i have 387, src has [1,388]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.843579292s of 10.553659439s, submitted: 39
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 388 ms_handle_reset con 0x55a66d8eec00 session 0x55a66f1054a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 388 heartbeat osd_stat(store_statfs(0x4f5988000/0x0/0x4ffc00000, data 0x335eca0/0x3576000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155025408 unmapped: 18718720 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 388 ms_handle_reset con 0x55a66fca4800 session 0x55a66f36e780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155025408 unmapped: 18718720 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155451392 unmapped: 18292736 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2750571 data_alloc: 234881024 data_used: 16674816
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 388 heartbeat osd_stat(store_statfs(0x4f58fe000/0x0/0x4ffc00000, data 0x33e7871/0x3600000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155467776 unmapped: 18276352 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 388 ms_handle_reset con 0x55a66fca4400 session 0x55a66d90c960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155467776 unmapped: 18276352 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 388 heartbeat osd_stat(store_statfs(0x4f58f9000/0x0/0x4ffc00000, data 0x33ec871/0x3605000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 388 ms_handle_reset con 0x55a66eb7f000 session 0x55a66e9903c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 388 ms_handle_reset con 0x55a66eb62000 session 0x55a66d5b3c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154845184 unmapped: 18898944 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154853376 unmapped: 18890752 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 388 ms_handle_reset con 0x55a66eb62000 session 0x55a66eec25a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 388 ms_handle_reset con 0x55a66d8eec00 session 0x55a66cd43680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 389 ms_handle_reset con 0x55a66eb7f000 session 0x55a66cc70b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154861568 unmapped: 18882560 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 389 ms_handle_reset con 0x55a66fca4400 session 0x55a66f6df2c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2760788 data_alloc: 234881024 data_used: 16687104
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 390 ms_handle_reset con 0x55a66fca4800 session 0x55a66ea0f0e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154869760 unmapped: 18874368 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 390 ms_handle_reset con 0x55a66d8eec00 session 0x55a66f104000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 390 ms_handle_reset con 0x55a66eb62000 session 0x55a66ce230e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154877952 unmapped: 18866176 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.786830902s of 10.626619339s, submitted: 73
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154877952 unmapped: 18866176 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 390 ms_handle_reset con 0x55a66fca3400 session 0x55a66f36fe00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f58ec000/0x0/0x4ffc00000, data 0x33f5075/0x3611000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154877952 unmapped: 18866176 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154877952 unmapped: 18866176 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 390 ms_handle_reset con 0x55a66eb7f000 session 0x55a66f106780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2760643 data_alloc: 234881024 data_used: 16695296
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 390 handle_osd_map epochs [390,391], i have 390, src has [1,391]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 391 ms_handle_reset con 0x55a66da4ec00 session 0x55a66d8a25a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f58ed000/0x0/0x4ffc00000, data 0x33f5075/0x3611000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154869760 unmapped: 18874368 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 392 ms_handle_reset con 0x55a66eb7f000 session 0x55a66d911a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 392 ms_handle_reset con 0x55a66eb62000 session 0x55a66f301680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 392 ms_handle_reset con 0x55a66d8eec00 session 0x55a66f4d81e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154869760 unmapped: 18874368 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f58e5000/0x0/0x4ffc00000, data 0x33f8671/0x3617000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 392 ms_handle_reset con 0x55a66fca3400 session 0x55a66da41680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 393 ms_handle_reset con 0x55a66d4a5000 session 0x55a66f8dcf00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154877952 unmapped: 18866176 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 393 ms_handle_reset con 0x55a66d4a5000 session 0x55a66f882b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154877952 unmapped: 18866176 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 393 ms_handle_reset con 0x55a66eb62000 session 0x55a66f4d85a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154877952 unmapped: 18866176 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2774011 data_alloc: 234881024 data_used: 16715776
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 21K writes, 77K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 21K writes, 7365 syncs, 2.90 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 12K writes, 43K keys, 12K commit groups, 1.0 writes per commit group, ingest: 27.49 MB, 0.05 MB/s#012Interval WAL: 12K writes, 5239 syncs, 2.38 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 393 heartbeat osd_stat(store_statfs(0x4f58dd000/0x0/0x4ffc00000, data 0x33ff2b4/0x3621000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154886144 unmapped: 18857984 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 394 ms_handle_reset con 0x55a66fca3400 session 0x55a66ef81a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154902528 unmapped: 18841600 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 394 ms_handle_reset con 0x55a66eb7f000 session 0x55a66eec25a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 394 ms_handle_reset con 0x55a66fcad400 session 0x55a66e9acd20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154902528 unmapped: 18841600 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.956850052s of 10.341323853s, submitted: 65
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 394 ms_handle_reset con 0x55a66d8eec00 session 0x55a66f107a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 394 ms_handle_reset con 0x55a66d4a5000 session 0x55a66f40e960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154910720 unmapped: 18833408 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 395 ms_handle_reset con 0x55a66eb62000 session 0x55a66f8f0b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 395 ms_handle_reset con 0x55a66fca3400 session 0x55a66f501860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 395 ms_handle_reset con 0x55a66eb7f000 session 0x55a66d90f860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 395 ms_handle_reset con 0x55a66fca1800 session 0x55a66cc730e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154935296 unmapped: 18808832 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2784640 data_alloc: 234881024 data_used: 16736256
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 396 ms_handle_reset con 0x55a66d4a5000 session 0x55a66f506f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154935296 unmapped: 18808832 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 396 ms_handle_reset con 0x55a66eb62000 session 0x55a66f37f2c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 396 ms_handle_reset con 0x55a66d8eec00 session 0x55a66ef80b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 396 heartbeat osd_stat(store_statfs(0x4f58d3000/0x0/0x4ffc00000, data 0x34044ae/0x362a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154951680 unmapped: 18792448 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 396 ms_handle_reset con 0x55a66f134800 session 0x55a66e9ac5a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 396 handle_osd_map epochs [397,397], i have 397, src has [1,397]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 397 ms_handle_reset con 0x55a66d4a5000 session 0x55a66e990b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 397 ms_handle_reset con 0x55a66f135400 session 0x55a66d911860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154976256 unmapped: 18767872 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 397 ms_handle_reset con 0x55a66d8eec00 session 0x55a66f8f01e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 397 ms_handle_reset con 0x55a66eb62000 session 0x55a66f8f0960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154984448 unmapped: 18759680 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 397 heartbeat osd_stat(store_statfs(0x4f5953000/0x0/0x4ffc00000, data 0x3383fe5/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 398 ms_handle_reset con 0x55a66fca1800 session 0x55a66f6dfa40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 398 ms_handle_reset con 0x55a66c7b8800 session 0x55a66f6def00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154992640 unmapped: 18751488 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2789435 data_alloc: 234881024 data_used: 16740352
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 399 ms_handle_reset con 0x55a66d4a5000 session 0x55a66d5b3e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155000832 unmapped: 18743296 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 399 ms_handle_reset con 0x55a66eb62000 session 0x55a66cc70000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 399 ms_handle_reset con 0x55a66f135400 session 0x55a66cca6d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f594c000/0x0/0x4ffc00000, data 0x3387685/0x35b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155009024 unmapped: 18735104 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 156065792 unmapped: 17678336 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 399 handle_osd_map epochs [400,400], i have 400, src has [1,400]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f594f000/0x0/0x4ffc00000, data 0x3387623/0x35af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,2])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 399 handle_osd_map epochs [400,400], i have 400, src has [1,400]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.039989471s of 10.058386803s, submitted: 132
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 156073984 unmapped: 17670144 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 400 ms_handle_reset con 0x55a66da4e400 session 0x55a66dbffa40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 400 ms_handle_reset con 0x55a66d8eec00 session 0x55a66cc705a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 156073984 unmapped: 17670144 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2796209 data_alloc: 234881024 data_used: 16752640
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 400 handle_osd_map epochs [401,401], i have 401, src has [1,401]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 156090368 unmapped: 17653760 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 401 heartbeat osd_stat(store_statfs(0x4f5947000/0x0/0x4ffc00000, data 0x338acb3/0x35b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 156090368 unmapped: 17653760 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 401 ms_handle_reset con 0x55a66c7b8800 session 0x55a66da40d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 156090368 unmapped: 17653760 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 156098560 unmapped: 17645568 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 156098560 unmapped: 17645568 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2799531 data_alloc: 234881024 data_used: 16760832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 156123136 unmapped: 17620992 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 156123136 unmapped: 17620992 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f5945000/0x0/0x4ffc00000, data 0x338c7ea/0x35b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 156131328 unmapped: 17612800 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.057873249s of 10.231289864s, submitted: 48
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 403 ms_handle_reset con 0x55a66d4a5000 session 0x55a66f8825a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f5941000/0x0/0x4ffc00000, data 0x338e3cb/0x35bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 403 ms_handle_reset con 0x55a66d8ee800 session 0x55a66f3010e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 156131328 unmapped: 17612800 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 403 ms_handle_reset con 0x55a66fca3400 session 0x55a66bfe3e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 156131328 unmapped: 17612800 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2805381 data_alloc: 234881024 data_used: 16756736
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 403 ms_handle_reset con 0x55a66d4a5000 session 0x55a66f8f1e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 156147712 unmapped: 17596416 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 404 ms_handle_reset con 0x55a66ea30400 session 0x55a66ef805a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 156147712 unmapped: 17596416 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 405 ms_handle_reset con 0x55a66c7b8800 session 0x55a66dc0e780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 156180480 unmapped: 17563648 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 405 ms_handle_reset con 0x55a66d8ee800 session 0x55a66f37e1e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 405 ms_handle_reset con 0x55a66e9be800 session 0x55a66f5003c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 405 ms_handle_reset con 0x55a66c7b8800 session 0x55a66f5005a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 405 ms_handle_reset con 0x55a66ea2f400 session 0x55a66d8a3a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 405 ms_handle_reset con 0x55a66ca6b400 session 0x55a66f36f0e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 405 ms_handle_reset con 0x55a66d4a5000 session 0x55a66e76b860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 156237824 unmapped: 17506304 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 405 ms_handle_reset con 0x55a66d8ee800 session 0x55a66d90e960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 405 heartbeat osd_stat(store_statfs(0x4f593d000/0x0/0x4ffc00000, data 0x33919a9/0x35be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 405 ms_handle_reset con 0x55a66c7b8800 session 0x55a66f507c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 147972096 unmapped: 25772032 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2491294 data_alloc: 218103808 data_used: 1511424
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 405 ms_handle_reset con 0x55a66ca6b400 session 0x55a66f8f0960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 406 ms_handle_reset con 0x55a66d4a5000 session 0x55a66e990b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 147980288 unmapped: 25763840 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 406 heartbeat osd_stat(store_statfs(0x4f74bb000/0x0/0x4ffc00000, data 0x18153b6/0x1a41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 407 ms_handle_reset con 0x55a66ea2f400 session 0x55a66f882b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 147988480 unmapped: 25755648 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 407 ms_handle_reset con 0x55a66da55800 session 0x55a66f8dcf00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 147988480 unmapped: 25755648 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 407 ms_handle_reset con 0x55a66c7b8800 session 0x55a66f301680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.469542503s of 10.157887459s, submitted: 139
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 408 ms_handle_reset con 0x55a66ca6b400 session 0x55a66d911a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 408 ms_handle_reset con 0x55a66d4a5000 session 0x55a66f106780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 147988480 unmapped: 25755648 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 408 ms_handle_reset con 0x55a66da55800 session 0x55a66f36fe00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 408 ms_handle_reset con 0x55a66ea2f400 session 0x55a66f104000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 408 ms_handle_reset con 0x55a66c7b8800 session 0x55a66cc70b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 147996672 unmapped: 25747456 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2500164 data_alloc: 218103808 data_used: 1515520
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 147996672 unmapped: 25747456 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 409 ms_handle_reset con 0x55a66ca6b400 session 0x55a66e780780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 409 heartbeat osd_stat(store_statfs(0x4f74b6000/0x0/0x4ffc00000, data 0x1818b30/0x1a47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148004864 unmapped: 25739264 heap: 173744128 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 409 ms_handle_reset con 0x55a66eb6ec00 session 0x55a66cb9c5a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 409 ms_handle_reset con 0x55a66ea2e400 session 0x55a66cd43c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 409 ms_handle_reset con 0x55a66e9bfc00 session 0x55a66cd43860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 409 ms_handle_reset con 0x55a66c7b8800 session 0x55a66e76ad20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 147259392 unmapped: 34365440 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 410 ms_handle_reset con 0x55a66f136c00 session 0x55a66d8c4f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 147226624 unmapped: 34398208 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 411 ms_handle_reset con 0x55a66ca6b400 session 0x55a66f8f1860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 411 heartbeat osd_stat(store_statfs(0x4f69bc000/0x0/0x4ffc00000, data 0x2310148/0x2541000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 411 heartbeat osd_stat(store_statfs(0x4f69b8000/0x0/0x4ffc00000, data 0x2311ce1/0x2544000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 411 ms_handle_reset con 0x55a66ea2e400 session 0x55a66f6de1e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 147251200 unmapped: 34373632 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2598722 data_alloc: 218103808 data_used: 1523712
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 412 ms_handle_reset con 0x55a66eb6ec00 session 0x55a66d8c4d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 412 heartbeat osd_stat(store_statfs(0x4f69b8000/0x0/0x4ffc00000, data 0x2311ce1/0x2544000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 412 heartbeat osd_stat(store_statfs(0x4f69b4000/0x0/0x4ffc00000, data 0x23138ea/0x2547000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 146907136 unmapped: 34717696 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 412 ms_handle_reset con 0x55a66c7b8800 session 0x55a66e990d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 412 ms_handle_reset con 0x55a66ca6b400 session 0x55a66cca14a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 146915328 unmapped: 34709504 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 412 ms_handle_reset con 0x55a66eb6d000 session 0x55a66e990b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 146915328 unmapped: 34709504 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 412 heartbeat osd_stat(store_statfs(0x4f69b7000/0x0/0x4ffc00000, data 0x23138ea/0x2547000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 146915328 unmapped: 34709504 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.583111763s of 10.940265656s, submitted: 153
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 413 ms_handle_reset con 0x55a66f139c00 session 0x55a66f507c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 413 ms_handle_reset con 0x55a66fcc6000 session 0x55a66d8c52c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 413 ms_handle_reset con 0x55a66da4fc00 session 0x55a66d910b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 146923520 unmapped: 34701312 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2605491 data_alloc: 218103808 data_used: 1536000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 414 ms_handle_reset con 0x55a66c7b8800 session 0x55a66e76b860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 146931712 unmapped: 34693120 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 414 ms_handle_reset con 0x55a66ca6b400 session 0x55a66d8a3a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 146931712 unmapped: 34693120 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 146931712 unmapped: 34693120 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 415 ms_handle_reset con 0x55a66f131800 session 0x55a66f5005a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f69ac000/0x0/0x4ffc00000, data 0x2318ac7/0x2551000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 146931712 unmapped: 34693120 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 415 ms_handle_reset con 0x55a66cc64000 session 0x55a66d8c41e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 146931712 unmapped: 34693120 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2611583 data_alloc: 218103808 data_used: 1544192
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 415 ms_handle_reset con 0x55a66eb6d800 session 0x55a66f5003c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 415 ms_handle_reset con 0x55a66cc64000 session 0x55a66ef805a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 146931712 unmapped: 34693120 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 415 ms_handle_reset con 0x55a66c7b8800 session 0x55a66f8f1e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 415 ms_handle_reset con 0x55a66ca6b400 session 0x55a66bfe3e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 146948096 unmapped: 34676736 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 146956288 unmapped: 34668544 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f69ac000/0x0/0x4ffc00000, data 0x2318aea/0x2552000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 146956288 unmapped: 34668544 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 146956288 unmapped: 34668544 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2614321 data_alloc: 218103808 data_used: 1548288
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 146956288 unmapped: 34668544 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f69ac000/0x0/0x4ffc00000, data 0x2318aea/0x2552000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.464930534s of 11.614153862s, submitted: 39
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 415 handle_osd_map epochs [416,416], i have 416, src has [1,416]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 146956288 unmapped: 34668544 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 416 heartbeat osd_stat(store_statfs(0x4f69a8000/0x0/0x4ffc00000, data 0x231a54d/0x2555000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 416 ms_handle_reset con 0x55a66eb62400 session 0x55a66dc07680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 146980864 unmapped: 34643968 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148848640 unmapped: 32776192 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 416 handle_osd_map epochs [417,417], i have 417, src has [1,417]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 417 heartbeat osd_stat(store_statfs(0x4f69a7000/0x0/0x4ffc00000, data 0x231a55d/0x2556000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 417 ms_handle_reset con 0x55a66c7b8800 session 0x55a66f40f680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148848640 unmapped: 32776192 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 417 ms_handle_reset con 0x55a66f973800 session 0x55a66dbffa40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 417 ms_handle_reset con 0x55a670860000 session 0x55a66cc74960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 417 ms_handle_reset con 0x55a66ca6b400 session 0x55a66f6dfa40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2704661 data_alloc: 234881024 data_used: 12750848
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 418 ms_handle_reset con 0x55a66f131400 session 0x55a66f500b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 418 ms_handle_reset con 0x55a66cc64000 session 0x55a66f301c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148307968 unmapped: 33316864 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 418 heartbeat osd_stat(store_statfs(0x4f69a0000/0x0/0x4ffc00000, data 0x231dc88/0x255b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 418 ms_handle_reset con 0x55a66c7b8800 session 0x55a66eec21e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 418 ms_handle_reset con 0x55a66eaf8400 session 0x55a66f8f03c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148316160 unmapped: 33308672 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 418 ms_handle_reset con 0x55a66ca6b400 session 0x55a66f882b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148324352 unmapped: 33300480 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 418 ms_handle_reset con 0x55a66eb68000 session 0x55a66cc70f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148324352 unmapped: 33300480 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148324352 unmapped: 33300480 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 418 heartbeat osd_stat(store_statfs(0x4f69a5000/0x0/0x4ffc00000, data 0x231dc68/0x2559000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2702716 data_alloc: 234881024 data_used: 12738560
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.384723663s of 10.046192169s, submitted: 95
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148324352 unmapped: 33300480 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 418 ms_handle_reset con 0x55a66eb68000 session 0x55a66d910000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148324352 unmapped: 33300480 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 419 ms_handle_reset con 0x55a66c7b8800 session 0x55a66da41680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148324352 unmapped: 33300480 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148332544 unmapped: 33292288 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148332544 unmapped: 33292288 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2708364 data_alloc: 234881024 data_used: 12746752
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148332544 unmapped: 33292288 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 419 heartbeat osd_stat(store_statfs(0x4f699e000/0x0/0x4ffc00000, data 0x232129c/0x255f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148340736 unmapped: 33284096 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148340736 unmapped: 33284096 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148340736 unmapped: 33284096 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 420 ms_handle_reset con 0x55a66ca6b400 session 0x55a66f36ef00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148340736 unmapped: 33284096 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2711493 data_alloc: 234881024 data_used: 12746752
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 420 ms_handle_reset con 0x55a670c19000 session 0x55a66f8f0b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148340736 unmapped: 33284096 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 420 heartbeat osd_stat(store_statfs(0x4f699f000/0x0/0x4ffc00000, data 0x232129c/0x255f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148340736 unmapped: 33284096 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.188512325s of 11.640954971s, submitted: 32
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 420 handle_osd_map epochs [421,421], i have 421, src has [1,421]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148340736 unmapped: 33284096 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148340736 unmapped: 33284096 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 421 heartbeat osd_stat(store_statfs(0x4f699b000/0x0/0x4ffc00000, data 0x2322e35/0x2562000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148348928 unmapped: 33275904 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718524 data_alloc: 234881024 data_used: 12754944
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 422 ms_handle_reset con 0x55a66eb7e800 session 0x55a66e8dbc20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 423 ms_handle_reset con 0x55a66ea59400 session 0x55a66f507680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148357120 unmapped: 33267712 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 423 ms_handle_reset con 0x55a66ca6b400 session 0x55a66f107e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 148357120 unmapped: 33267712 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 423 ms_handle_reset con 0x55a66c7b8000 session 0x55a66eec21e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 144285696 unmapped: 37339136 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 423 heartbeat osd_stat(store_statfs(0x4f6994000/0x0/0x4ffc00000, data 0x2326493/0x2569000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 423 handle_osd_map epochs [424,424], i have 424, src has [1,424]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 424 ms_handle_reset con 0x55a66e9c0400 session 0x55a66d911c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 424 ms_handle_reset con 0x55a66c7b8800 session 0x55a66e701860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 144285696 unmapped: 37339136 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 144285696 unmapped: 37339136 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2566100 data_alloc: 218103808 data_used: 1593344
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 424 ms_handle_reset con 0x55a66c7b8000 session 0x55a66f5074a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 144285696 unmapped: 37339136 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 424 ms_handle_reset con 0x55a66eb64400 session 0x55a66f5005a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 424 heartbeat osd_stat(store_statfs(0x4f7486000/0x0/0x4ffc00000, data 0x1834064/0x1a78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 144285696 unmapped: 37339136 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.600783348s of 10.111025810s, submitted: 83
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 144293888 unmapped: 37330944 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 424 ms_handle_reset con 0x55a66c7b8800 session 0x55a66dc065a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 424 ms_handle_reset con 0x55a66d9d3c00 session 0x55a66e7005a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 424 ms_handle_reset con 0x55a66f139800 session 0x55a66f6df4a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 144293888 unmapped: 37330944 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 424 ms_handle_reset con 0x55a66c7b8000 session 0x55a66f502960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 144302080 unmapped: 37322752 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2568476 data_alloc: 218103808 data_used: 1593344
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 425 ms_handle_reset con 0x55a66c7b8800 session 0x55a66f506000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 144302080 unmapped: 37322752 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 425 heartbeat osd_stat(store_statfs(0x4f7071000/0x0/0x4ffc00000, data 0x1835b29/0x1a7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 426 ms_handle_reset con 0x55a66d9d3c00 session 0x55a66d90e780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 144334848 unmapped: 37289984 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 426 ms_handle_reset con 0x55a66ea2e800 session 0x55a66f6de3c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 427 ms_handle_reset con 0x55a66ea58400 session 0x55a66f107680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 427 ms_handle_reset con 0x55a66fcc9c00 session 0x55a66d8a3a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 144392192 unmapped: 37232640 heap: 181624832 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 427 ms_handle_reset con 0x55a66c7b8800 session 0x55a66e76b860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 427 ms_handle_reset con 0x55a66d9d3c00 session 0x55a66d9103c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 144670720 unmapped: 44834816 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 427 ms_handle_reset con 0x55a66c7b9000 session 0x55a66dc0ed20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 427 ms_handle_reset con 0x55a66c7b8000 session 0x55a66f501680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 427 heartbeat osd_stat(store_statfs(0x4f706a000/0x0/0x4ffc00000, data 0x18392a0/0x1a83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 144678912 unmapped: 44826624 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2663645 data_alloc: 218103808 data_used: 1613824
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 146604032 unmapped: 42901504 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 427 ms_handle_reset con 0x55a66ea2e800 session 0x55a66cca14a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 427 ms_handle_reset con 0x55a66c7b8000 session 0x55a66d90e960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 427 ms_handle_reset con 0x55a66c7b8800 session 0x55a66f300b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 146079744 unmapped: 43425792 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 427 ms_handle_reset con 0x55a66da55c00 session 0x55a66f502f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.883471012s of 10.386658669s, submitted: 213
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 427 ms_handle_reset con 0x55a66eb62000 session 0x55a66f503c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 146079744 unmapped: 43425792 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 427 ms_handle_reset con 0x55a66c7b8000 session 0x55a66d4c21e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 427 heartbeat osd_stat(store_statfs(0x4f624c000/0x0/0x4ffc00000, data 0x26582d9/0x28a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 146079744 unmapped: 43425792 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 427 ms_handle_reset con 0x55a66c7b8800 session 0x55a66f8832c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 427 ms_handle_reset con 0x55a66da55c00 session 0x55a66ce23e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 428 ms_handle_reset con 0x55a66ea2e800 session 0x55a66f40fc20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 146128896 unmapped: 43376640 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2699188 data_alloc: 218103808 data_used: 1622016
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 428 ms_handle_reset con 0x55a66c04fc00 session 0x55a66f5014a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 146137088 unmapped: 43368448 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 430 ms_handle_reset con 0x55a66c7b8000 session 0x55a66e7812c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 147480576 unmapped: 42024960 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 147480576 unmapped: 42024960 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 430 ms_handle_reset con 0x55a66ee96800 session 0x55a66f506b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 147488768 unmapped: 42016768 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 430 heartbeat osd_stat(store_statfs(0x4f6241000/0x0/0x4ffc00000, data 0x265d4b5/0x28ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 430 heartbeat osd_stat(store_statfs(0x4f6241000/0x0/0x4ffc00000, data 0x265d4b5/0x28ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [0,0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 430 ms_handle_reset con 0x55a66eb63000 session 0x55a66f4d8960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 147496960 unmapped: 42008576 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2777088 data_alloc: 234881024 data_used: 11956224
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 430 heartbeat osd_stat(store_statfs(0x4f6241000/0x0/0x4ffc00000, data 0x265d4b5/0x28ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 430 heartbeat osd_stat(store_statfs(0x4f6242000/0x0/0x4ffc00000, data 0x265d4b5/0x28ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 147496960 unmapped: 42008576 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 430 ms_handle_reset con 0x55a66dbcec00 session 0x55a66f6df680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 147505152 unmapped: 42000384 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 147505152 unmapped: 42000384 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.210530281s of 10.165696144s, submitted: 73
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 147505152 unmapped: 42000384 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 430 heartbeat osd_stat(store_statfs(0x4f7282000/0x0/0x4ffc00000, data 0x265d4b5/0x28ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 147505152 unmapped: 42000384 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2778808 data_alloc: 234881024 data_used: 11956224
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 430 ms_handle_reset con 0x55a66fcab400 session 0x55a66cd42f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 147505152 unmapped: 42000384 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 431 ms_handle_reset con 0x55a66e9c0000 session 0x55a66cd43e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 432 ms_handle_reset con 0x55a66e9bf800 session 0x55a66dc07c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 147505152 unmapped: 42000384 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 39976960 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 432 ms_handle_reset con 0x55a66ffa1c00 session 0x55a66f8f1e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 150167552 unmapped: 39337984 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 432 ms_handle_reset con 0x55a66f972800 session 0x55a66ce234a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 432 heartbeat osd_stat(store_statfs(0x4f6fcf000/0x0/0x4ffc00000, data 0x290dab1/0x2b5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,0,0,0,1,1,2])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153206784 unmapped: 36298752 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2825999 data_alloc: 234881024 data_used: 12001280
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 432 ms_handle_reset con 0x55a670c19000 session 0x55a66bfe34a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 151920640 unmapped: 37584896 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 151920640 unmapped: 37584896 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 433 ms_handle_reset con 0x55a66f134000 session 0x55a66da40960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 151879680 unmapped: 37625856 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 433 heartbeat osd_stat(store_statfs(0x4f6d68000/0x0/0x4ffc00000, data 0x2b72682/0x2dc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.924759388s of 10.055661201s, submitted: 75
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 433 heartbeat osd_stat(store_statfs(0x4f6d63000/0x0/0x4ffc00000, data 0x2b76682/0x2dc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 433 ms_handle_reset con 0x55a66eb6d000 session 0x55a66f36f860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154124288 unmapped: 35381248 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154124288 unmapped: 35381248 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2851274 data_alloc: 234881024 data_used: 15392768
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154124288 unmapped: 35381248 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 433 ms_handle_reset con 0x55a66dbce800 session 0x55a66eec2f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154189824 unmapped: 35315712 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 433 heartbeat osd_stat(store_statfs(0x4f6d5f000/0x0/0x4ffc00000, data 0x2b7c682/0x2dcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154189824 unmapped: 35315712 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 433 heartbeat osd_stat(store_statfs(0x4f6d5f000/0x0/0x4ffc00000, data 0x2b7c682/0x2dcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154189824 unmapped: 35315712 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154189824 unmapped: 35315712 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2855400 data_alloc: 234881024 data_used: 15454208
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 434 heartbeat osd_stat(store_statfs(0x4f6d5f000/0x0/0x4ffc00000, data 0x2b7c682/0x2dcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154198016 unmapped: 35307520 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 434 heartbeat osd_stat(store_statfs(0x4f6d5b000/0x0/0x4ffc00000, data 0x2b7e0e5/0x2dd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154198016 unmapped: 35307520 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154198016 unmapped: 35307520 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154198016 unmapped: 35307520 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.932987213s of 10.940667152s, submitted: 42
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 434 heartbeat osd_stat(store_statfs(0x4f66aa000/0x0/0x4ffc00000, data 0x32300e5/0x3484000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 159776768 unmapped: 29728768 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2959704 data_alloc: 234881024 data_used: 15736832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 29646848 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 159891456 unmapped: 29614080 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 159891456 unmapped: 29614080 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 159891456 unmapped: 29614080 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 434 heartbeat osd_stat(store_statfs(0x4f5f9f000/0x0/0x4ffc00000, data 0x393b0e5/0x3b8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 159891456 unmapped: 29614080 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2969880 data_alloc: 234881024 data_used: 16121856
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 159891456 unmapped: 29614080 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 159891456 unmapped: 29614080 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 159891456 unmapped: 29614080 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 434 handle_osd_map epochs [434,435], i have 434, src has [1,435]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 heartbeat osd_stat(store_statfs(0x4f5f9f000/0x0/0x4ffc00000, data 0x393b0e5/0x3b8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 ms_handle_reset con 0x55a66fcac800 session 0x55a66ce225a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 ms_handle_reset con 0x55a66fcab000 session 0x55a66f301e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 159891456 unmapped: 29614080 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 159891456 unmapped: 29614080 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2975684 data_alloc: 234881024 data_used: 16130048
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 159891456 unmapped: 29614080 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 ms_handle_reset con 0x55a66ee96800 session 0x55a66dc06b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.984846115s of 12.840926170s, submitted: 135
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 160940032 unmapped: 28565504 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 ms_handle_reset con 0x55a66fcab800 session 0x55a66f503c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 ms_handle_reset con 0x55a66f973800 session 0x55a66e7805a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 ms_handle_reset con 0x55a66eb63000 session 0x55a66dc0ed20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153468928 unmapped: 36036608 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153468928 unmapped: 36036608 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 heartbeat osd_stat(store_statfs(0x4f6b9d000/0x0/0x4ffc00000, data 0x2934d71/0x2b8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153468928 unmapped: 36036608 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 heartbeat osd_stat(store_statfs(0x4f6b9d000/0x0/0x4ffc00000, data 0x2934d71/0x2b8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2783594 data_alloc: 218103808 data_used: 5558272
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153468928 unmapped: 36036608 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 ms_handle_reset con 0x55a66c7b8000 session 0x55a66d5b21e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153468928 unmapped: 36036608 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 ms_handle_reset con 0x55a66fcaac00 session 0x55a66f40e780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 ms_handle_reset con 0x55a66eb63000 session 0x55a66cca10e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153468928 unmapped: 36036608 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 ms_handle_reset con 0x55a66ee96800 session 0x55a66f500f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153468928 unmapped: 36036608 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 ms_handle_reset con 0x55a66f973800 session 0x55a66d8c4f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 ms_handle_reset con 0x55a66fcab800 session 0x55a66d8c52c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153468928 unmapped: 36036608 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2783141 data_alloc: 218103808 data_used: 5550080
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 heartbeat osd_stat(store_statfs(0x4f6fa6000/0x0/0x4ffc00000, data 0x2934cff/0x2b88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153468928 unmapped: 36036608 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 ms_handle_reset con 0x55a66fcc7400 session 0x55a66e8db2c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153468928 unmapped: 36036608 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 ms_handle_reset con 0x55a66eb65000 session 0x55a66ea0e1e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153468928 unmapped: 36036608 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 ms_handle_reset con 0x55a66eb71800 session 0x55a66e780960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.196525574s of 11.449170113s, submitted: 67
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 ms_handle_reset con 0x55a66ffa0400 session 0x55a66f3001e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 heartbeat osd_stat(store_statfs(0x4f6fa6000/0x0/0x4ffc00000, data 0x2934cff/0x2b88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153485312 unmapped: 36020224 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 heartbeat osd_stat(store_statfs(0x4f6fa5000/0x0/0x4ffc00000, data 0x2934d0f/0x2b89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 heartbeat osd_stat(store_statfs(0x4f6fa5000/0x0/0x4ffc00000, data 0x2934d0f/0x2b89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153485312 unmapped: 36020224 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2785431 data_alloc: 218103808 data_used: 5562368
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153493504 unmapped: 36012032 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153493504 unmapped: 36012032 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153493504 unmapped: 36012032 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 ms_handle_reset con 0x55a66eb6f000 session 0x55a66f36eb40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 ms_handle_reset con 0x55a66eb6f000 session 0x55a66cd43860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153493504 unmapped: 36012032 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153493504 unmapped: 36012032 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2785911 data_alloc: 218103808 data_used: 5636096
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 heartbeat osd_stat(store_statfs(0x4f6fa5000/0x0/0x4ffc00000, data 0x2934d0f/0x2b89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 heartbeat osd_stat(store_statfs(0x4f6fa5000/0x0/0x4ffc00000, data 0x2934d0f/0x2b89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153493504 unmapped: 36012032 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 ms_handle_reset con 0x55a6707be400 session 0x55a66e76ad20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 heartbeat osd_stat(store_statfs(0x4f6fa5000/0x0/0x4ffc00000, data 0x2934d0f/0x2b89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153509888 unmapped: 35995648 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153509888 unmapped: 35995648 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.099278450s of 10.129653931s, submitted: 9
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 435 handle_osd_map epochs [435,436], i have 435, src has [1,436]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 436 ms_handle_reset con 0x55a66eb7e000 session 0x55a66e9ade00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153526272 unmapped: 35979264 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153829376 unmapped: 35676160 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2804060 data_alloc: 218103808 data_used: 5652480
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 436 ms_handle_reset con 0x55a66e9c1400 session 0x55a66bfe3c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 436 ms_handle_reset con 0x55a66ffa1800 session 0x55a66ce22f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154312704 unmapped: 35192832 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 436 ms_handle_reset con 0x55a66e9c0000 session 0x55a66e9ad4a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 436 ms_handle_reset con 0x55a66fcab400 session 0x55a66f502b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 436 ms_handle_reset con 0x55a66f13b400 session 0x55a66f107e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154329088 unmapped: 35176448 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f6da4000/0x0/0x4ffc00000, data 0x2b30971/0x2d8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 436 ms_handle_reset con 0x55a66eb6fc00 session 0x55a66f301e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154329088 unmapped: 35176448 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f6da4000/0x0/0x4ffc00000, data 0x2b30971/0x2d8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 436 ms_handle_reset con 0x55a66dbcec00 session 0x55a66f8f1a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 437 ms_handle_reset con 0x55a66e9c0000 session 0x55a66f6de3c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 437 ms_handle_reset con 0x55a66eb62800 session 0x55a66f40e780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154361856 unmapped: 35143680 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154361856 unmapped: 35143680 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 437 ms_handle_reset con 0x55a66eb6fc00 session 0x55a66ef81e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2831605 data_alloc: 218103808 data_used: 7041024
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 437 ms_handle_reset con 0x55a66f13b400 session 0x55a66d4c21e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154378240 unmapped: 35127296 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154378240 unmapped: 35127296 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 437 heartbeat osd_stat(store_statfs(0x4f6da4000/0x0/0x4ffc00000, data 0x2b324c0/0x2d8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154378240 unmapped: 35127296 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154378240 unmapped: 35127296 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154378240 unmapped: 35127296 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2829910 data_alloc: 218103808 data_used: 7036928
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154378240 unmapped: 35127296 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.375078201s of 13.237793922s, submitted: 116
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154386432 unmapped: 35119104 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 438 heartbeat osd_stat(store_statfs(0x4f6da0000/0x0/0x4ffc00000, data 0x2b33f23/0x2d8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154386432 unmapped: 35119104 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 438 ms_handle_reset con 0x55a66fcac800 session 0x55a66f501c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 438 ms_handle_reset con 0x55a66e9c0000 session 0x55a66e8da960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154411008 unmapped: 35094528 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155467776 unmapped: 34037760 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 438 ms_handle_reset con 0x55a66eb62800 session 0x55a66f36e960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3068926 data_alloc: 218103808 data_used: 7041024
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 438 ms_handle_reset con 0x55a66eb6fc00 session 0x55a66f501e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155467776 unmapped: 34037760 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155467776 unmapped: 34037760 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155467776 unmapped: 34037760 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 438 heartbeat osd_stat(store_statfs(0x4f4be0000/0x0/0x4ffc00000, data 0x4cf2ec0/0x4f4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 438 heartbeat osd_stat(store_statfs(0x4f4be0000/0x0/0x4ffc00000, data 0x4cf2ec0/0x4f4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155467776 unmapped: 34037760 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155467776 unmapped: 34037760 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3069904 data_alloc: 218103808 data_used: 7041024
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155467776 unmapped: 34037760 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 438 heartbeat osd_stat(store_statfs(0x4f4be0000/0x0/0x4ffc00000, data 0x4cf2ec0/0x4f4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155467776 unmapped: 34037760 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155467776 unmapped: 34037760 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155467776 unmapped: 34037760 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 438 heartbeat osd_stat(store_statfs(0x4f4be0000/0x0/0x4ffc00000, data 0x4cf2ec0/0x4f4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155467776 unmapped: 34037760 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3069904 data_alloc: 218103808 data_used: 7041024
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155467776 unmapped: 34037760 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155467776 unmapped: 34037760 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 438 heartbeat osd_stat(store_statfs(0x4f4be0000/0x0/0x4ffc00000, data 0x4cf2ec0/0x4f4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155467776 unmapped: 34037760 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 438 ms_handle_reset con 0x55a66eb65400 session 0x55a66f8f0d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.319009781s of 17.003423691s, submitted: 37
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 438 ms_handle_reset con 0x55a66da4f000 session 0x55a66e76a3c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 438 ms_handle_reset con 0x55a66da4f000 session 0x55a66f500b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155467776 unmapped: 34037760 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155467776 unmapped: 34037760 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3068251 data_alloc: 218103808 data_used: 7041024
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155467776 unmapped: 34037760 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 438 ms_handle_reset con 0x55a66fcc9400 session 0x55a66f4d81e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 438 heartbeat osd_stat(store_statfs(0x4f4be4000/0x0/0x4ffc00000, data 0x4cf2eb0/0x4f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 438 ms_handle_reset con 0x55a66c7b9000 session 0x55a66f4d8b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155492352 unmapped: 34013184 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 438 ms_handle_reset con 0x55a670f94800 session 0x55a66f36e3c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 438 ms_handle_reset con 0x55a66c7b8c00 session 0x55a66f36ef00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 438 handle_osd_map epochs [439,439], i have 438, src has [1,439]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155500544 unmapped: 34004992 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155500544 unmapped: 34004992 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 439 ms_handle_reset con 0x55a66eb7f400 session 0x55a66d910000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 439 ms_handle_reset con 0x55a66d8ef800 session 0x55a66d8a2000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155508736 unmapped: 33996800 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3078341 data_alloc: 234881024 data_used: 10276864
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 439 ms_handle_reset con 0x55a66c7b8c00 session 0x55a66d910b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155836416 unmapped: 33669120 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155836416 unmapped: 33669120 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 439 heartbeat osd_stat(store_statfs(0x4f4da1000/0x0/0x4ffc00000, data 0x4b34a81/0x4d8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 439 ms_handle_reset con 0x55a66ea2f000 session 0x55a66f8f14a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 155836416 unmapped: 33669120 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 439 ms_handle_reset con 0x55a66eb7e400 session 0x55a66f8f1860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153985024 unmapped: 35520512 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153985024 unmapped: 35520512 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2914511 data_alloc: 218103808 data_used: 5074944
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 439 handle_osd_map epochs [439,440], i have 439, src has [1,440]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.972742081s of 12.087361336s, submitted: 81
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153985024 unmapped: 35520512 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153985024 unmapped: 35520512 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 153985024 unmapped: 35520512 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 440 heartbeat osd_stat(store_statfs(0x4f5e8c000/0x0/0x4ffc00000, data 0x3a48482/0x3ca1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 441 ms_handle_reset con 0x55a670860c00 session 0x55a66f503680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154017792 unmapped: 35487744 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 154017792 unmapped: 35487744 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 441 heartbeat osd_stat(store_statfs(0x4f5e8b000/0x0/0x4ffc00000, data 0x3a4a025/0x3ca2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2923256 data_alloc: 218103808 data_used: 5320704
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 156811264 unmapped: 32694272 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 167739392 unmapped: 21766144 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168345600 unmapped: 21159936 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168968192 unmapped: 20537344 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 441 heartbeat osd_stat(store_statfs(0x4f5977000/0x0/0x4ffc00000, data 0x3a4f025/0x3ca7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 161120256 unmapped: 28385280 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2994832 data_alloc: 218103808 data_used: 5890048
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162242560 unmapped: 27262976 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.688762665s of 11.291813850s, submitted: 113
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162242560 unmapped: 27262976 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162242560 unmapped: 27262976 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 442 ms_handle_reset con 0x55a66fcc9800 session 0x55a66f5005a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 442 ms_handle_reset con 0x55a6707be400 session 0x55a66d8a23c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162242560 unmapped: 27262976 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 442 ms_handle_reset con 0x55a66fcc5400 session 0x55a66f3345a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162111488 unmapped: 27394048 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2998038 data_alloc: 218103808 data_used: 5898240
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 442 heartbeat osd_stat(store_statfs(0x4f557c000/0x0/0x4ffc00000, data 0x4357a88/0x45b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162111488 unmapped: 27394048 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162111488 unmapped: 27394048 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 442 ms_handle_reset con 0x55a66fca3800 session 0x55a66da40960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162111488 unmapped: 27394048 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162111488 unmapped: 27394048 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162111488 unmapped: 27394048 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2998060 data_alloc: 218103808 data_used: 5902336
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 442 heartbeat osd_stat(store_statfs(0x4f557c000/0x0/0x4ffc00000, data 0x4357aea/0x45b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162111488 unmapped: 27394048 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.007664204s of 10.108162880s, submitted: 39
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162111488 unmapped: 27394048 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 442 ms_handle_reset con 0x55a66fcc5400 session 0x55a66d90cb40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162111488 unmapped: 27394048 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 442 heartbeat osd_stat(store_statfs(0x4f557d000/0x0/0x4ffc00000, data 0x4357a88/0x45b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162119680 unmapped: 27385856 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162119680 unmapped: 27385856 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2998316 data_alloc: 218103808 data_used: 5898240
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 163176448 unmapped: 26329088 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162652160 unmapped: 26853376 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 442 heartbeat osd_stat(store_statfs(0x4f547a000/0x0/0x4ffc00000, data 0x445aa88/0x46b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162136064 unmapped: 27369472 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 442 ms_handle_reset con 0x55a66c04fc00 session 0x55a66e76a5a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162144256 unmapped: 27361280 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162144256 unmapped: 27361280 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3011213 data_alloc: 218103808 data_used: 5898240
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 442 ms_handle_reset con 0x55a66eb69000 session 0x55a66f36fc20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162177024 unmapped: 27328512 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 442 ms_handle_reset con 0x55a66ea2f400 session 0x55a66bfe3e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 442 heartbeat osd_stat(store_statfs(0x4f547a000/0x0/0x4ffc00000, data 0x445aa88/0x46b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162177024 unmapped: 27328512 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162177024 unmapped: 27328512 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.149212837s of 12.160306931s, submitted: 28
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162177024 unmapped: 27328512 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 442 ms_handle_reset con 0x55a66fca8000 session 0x55a66f503c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 442 heartbeat osd_stat(store_statfs(0x4f547a000/0x0/0x4ffc00000, data 0x445aa88/0x46b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162177024 unmapped: 27328512 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3010685 data_alloc: 218103808 data_used: 5898240
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162177024 unmapped: 27328512 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162177024 unmapped: 27328512 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 442 heartbeat osd_stat(store_statfs(0x4f547a000/0x0/0x4ffc00000, data 0x445aa88/0x46b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162177024 unmapped: 27328512 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 442 heartbeat osd_stat(store_statfs(0x4f547a000/0x0/0x4ffc00000, data 0x445aa88/0x46b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162177024 unmapped: 27328512 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162177024 unmapped: 27328512 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3010685 data_alloc: 218103808 data_used: 5898240
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162177024 unmapped: 27328512 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 442 heartbeat osd_stat(store_statfs(0x4f547a000/0x0/0x4ffc00000, data 0x445aa88/0x46b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162177024 unmapped: 27328512 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162177024 unmapped: 27328512 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 442 ms_handle_reset con 0x55a66fcc3000 session 0x55a66f4d9a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 442 handle_osd_map epochs [443,443], i have 442, src has [1,443]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 ms_handle_reset con 0x55a66da4e800 session 0x55a66e7005a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 ms_handle_reset con 0x55a66d2bd800 session 0x55a66cd42f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162177024 unmapped: 27328512 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 ms_handle_reset con 0x55a66ffa0400 session 0x55a66f6dfc20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.353398323s of 10.637358665s, submitted: 7
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 ms_handle_reset con 0x55a66eb6d000 session 0x55a66d4c3860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 ms_handle_reset con 0x55a66f13a400 session 0x55a66f502000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162488320 unmapped: 27017216 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019613 data_alloc: 218103808 data_used: 5906432
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162504704 unmapped: 27000832 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f5451000/0x0/0x4ffc00000, data 0x4480615/0x46dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162512896 unmapped: 26992640 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162512896 unmapped: 26992640 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162512896 unmapped: 26992640 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162512896 unmapped: 26992640 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3027105 data_alloc: 218103808 data_used: 6959104
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162512896 unmapped: 26992640 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f5451000/0x0/0x4ffc00000, data 0x4480615/0x46dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162512896 unmapped: 26992640 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162512896 unmapped: 26992640 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162512896 unmapped: 26992640 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162512896 unmapped: 26992640 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3027105 data_alloc: 218103808 data_used: 6959104
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f5451000/0x0/0x4ffc00000, data 0x4480615/0x46dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 162512896 unmapped: 26992640 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.024955750s of 12.330533028s, submitted: 3
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 ms_handle_reset con 0x55a66ffa0000 session 0x55a66f8f1a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 163143680 unmapped: 26361856 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 161677312 unmapped: 27828224 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 160260096 unmapped: 29245440 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f4d58000/0x0/0x4ffc00000, data 0x4b79638/0x4dd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 160260096 unmapped: 29245440 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3091154 data_alloc: 218103808 data_used: 7299072
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f4d3b000/0x0/0x4ffc00000, data 0x4b96638/0x4df3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 160325632 unmapped: 29179904 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 160325632 unmapped: 29179904 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 160342016 unmapped: 29163520 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f4d2d000/0x0/0x4ffc00000, data 0x4ba4638/0x4e01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 160342016 unmapped: 29163520 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 160342016 unmapped: 29163520 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3102830 data_alloc: 218103808 data_used: 7532544
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 160342016 unmapped: 29163520 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 160342016 unmapped: 29163520 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f4d2d000/0x0/0x4ffc00000, data 0x4ba4638/0x4e01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.065809250s of 10.815029144s, submitted: 54
Oct  1 13:15:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct  1 13:15:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/842155524' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 ms_handle_reset con 0x55a66d2bd800 session 0x55a66ea0e780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 ms_handle_reset con 0x55a66da4e800 session 0x55a66d90c960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 160342016 unmapped: 29163520 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 ms_handle_reset con 0x55a66eb68000 session 0x55a66cc72960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f4d4e000/0x0/0x4ffc00000, data 0x4b84628/0x4de0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 160342016 unmapped: 29163520 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 160342016 unmapped: 29163520 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3094769 data_alloc: 218103808 data_used: 7421952
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 160342016 unmapped: 29163520 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 160342016 unmapped: 29163520 heap: 189505536 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f4d4e000/0x0/0x4ffc00000, data 0x4b84628/0x4de0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 163463168 unmapped: 30244864 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 163512320 unmapped: 30195712 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 163512320 unmapped: 30195712 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3191473 data_alloc: 234881024 data_used: 13832192
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f454e000/0x0/0x4ffc00000, data 0x5384628/0x55e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f454e000/0x0/0x4ffc00000, data 0x5384628/0x55e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 163512320 unmapped: 30195712 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 163512320 unmapped: 30195712 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 163512320 unmapped: 30195712 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 167051264 unmapped: 26656768 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 ms_handle_reset con 0x55a66da54c00 session 0x55a66ea0e960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f454e000/0x0/0x4ffc00000, data 0x5384628/0x55e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 167051264 unmapped: 26656768 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3201713 data_alloc: 234881024 data_used: 18026496
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f454e000/0x0/0x4ffc00000, data 0x5384628/0x55e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 ms_handle_reset con 0x55a66fca7000 session 0x55a66ea0fe00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 167051264 unmapped: 26656768 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 ms_handle_reset con 0x55a66fca2c00 session 0x55a66cd43680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.012187958s of 14.135933876s, submitted: 17
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 ms_handle_reset con 0x55a67118a000 session 0x55a66f8f10e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 167051264 unmapped: 26656768 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 167051264 unmapped: 26656768 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f454d000/0x0/0x4ffc00000, data 0x538464b/0x55e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 167051264 unmapped: 26656768 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 167051264 unmapped: 26656768 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3203802 data_alloc: 234881024 data_used: 18075648
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 167051264 unmapped: 26656768 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 167051264 unmapped: 26656768 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 167051264 unmapped: 26656768 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 167051264 unmapped: 26656768 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f454d000/0x0/0x4ffc00000, data 0x538464b/0x55e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3203802 data_alloc: 234881024 data_used: 18075648
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 167051264 unmapped: 26656768 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 167051264 unmapped: 26656768 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f454d000/0x0/0x4ffc00000, data 0x538464b/0x55e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 167051264 unmapped: 26656768 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 167051264 unmapped: 26656768 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.074834824s of 12.323685646s, submitted: 6
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 167116800 unmapped: 26591232 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3209126 data_alloc: 234881024 data_used: 18706432
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 167419904 unmapped: 26288128 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 167419904 unmapped: 26288128 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 167419904 unmapped: 26288128 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f454d000/0x0/0x4ffc00000, data 0x538464b/0x55e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 167419904 unmapped: 26288128 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f454d000/0x0/0x4ffc00000, data 0x538464b/0x55e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 167419904 unmapped: 26288128 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 ms_handle_reset con 0x55a66eb6dc00 session 0x55a66f507a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f454d000/0x0/0x4ffc00000, data 0x538464b/0x55e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3209286 data_alloc: 234881024 data_used: 18710528
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 167419904 unmapped: 26288128 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 ms_handle_reset con 0x55a66fca8400 session 0x55a66d5b2f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 ms_handle_reset con 0x55a66e9be000 session 0x55a66ef80d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 167419904 unmapped: 26288128 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 ms_handle_reset con 0x55a66c04fc00 session 0x55a66f301e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 ms_handle_reset con 0x55a66c83f400 session 0x55a66f8f0780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 ms_handle_reset con 0x55a66d8ef000 session 0x55a66e8da780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 25681920 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 25681920 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f4571000/0x0/0x4ffc00000, data 0x5360628/0x55bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 25681920 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3209154 data_alloc: 234881024 data_used: 19763200
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 25681920 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 ms_handle_reset con 0x55a66ca63800 session 0x55a66e990b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.302001953s of 12.343158722s, submitted: 17
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 25681920 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 ms_handle_reset con 0x55a66eb6e000 session 0x55a66e781c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 25681920 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f4d72000/0x0/0x4ffc00000, data 0x4b60628/0x4dbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 25681920 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 25681920 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 ms_handle_reset con 0x55a66fcad800 session 0x55a66f4d9680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 ms_handle_reset con 0x55a66c04fc00 session 0x55a66ea0fa40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3140354 data_alloc: 234881024 data_used: 18374656
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 25681920 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f4d72000/0x0/0x4ffc00000, data 0x4b60628/0x4dbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f4d72000/0x0/0x4ffc00000, data 0x4b60628/0x4dbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 25681920 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f4d72000/0x0/0x4ffc00000, data 0x4b60628/0x4dbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 25681920 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 25681920 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f4d72000/0x0/0x4ffc00000, data 0x4b60628/0x4dbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 25681920 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3140354 data_alloc: 234881024 data_used: 18374656
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 25681920 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 25681920 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f4d72000/0x0/0x4ffc00000, data 0x4b60628/0x4dbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 25681920 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 25681920 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 25681920 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3140354 data_alloc: 234881024 data_used: 18374656
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 25681920 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 25681920 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.696039200s of 15.706896782s, submitted: 4
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f4d72000/0x0/0x4ffc00000, data 0x4b60628/0x4dbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168886272 unmapped: 24821760 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168886272 unmapped: 24821760 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 handle_osd_map epochs [443,444], i have 443, src has [1,444]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 443 handle_osd_map epochs [444,444], i have 444, src has [1,444]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f4d72000/0x0/0x4ffc00000, data 0x4b60628/0x4dbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 ms_handle_reset con 0x55a66fcac000 session 0x55a66ea0eb40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168935424 unmapped: 24772608 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f4d6e000/0x0/0x4ffc00000, data 0x4b621a5/0x4dbf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3169770 data_alloc: 234881024 data_used: 19890176
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168935424 unmapped: 24772608 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f4d6d000/0x0/0x4ffc00000, data 0x4d8e1a5/0x4dc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168935424 unmapped: 24772608 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168943616 unmapped: 24764416 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f4d6d000/0x0/0x4ffc00000, data 0x4d8e1a5/0x4dc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f4d6d000/0x0/0x4ffc00000, data 0x4d8e207/0x4dc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168943616 unmapped: 24764416 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 ms_handle_reset con 0x55a66ee97400 session 0x55a66cca14a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 ms_handle_reset con 0x55a66fcc2c00 session 0x55a66e990d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168943616 unmapped: 24764416 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f4d6d000/0x0/0x4ffc00000, data 0x4d8e207/0x4dc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3171970 data_alloc: 234881024 data_used: 19890176
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168943616 unmapped: 24764416 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168943616 unmapped: 24764416 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f4d6d000/0x0/0x4ffc00000, data 0x4d8e207/0x4dc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168943616 unmapped: 24764416 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168943616 unmapped: 24764416 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f4d6d000/0x0/0x4ffc00000, data 0x4d8e207/0x4dc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168943616 unmapped: 24764416 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3171970 data_alloc: 234881024 data_used: 19890176
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168943616 unmapped: 24764416 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168943616 unmapped: 24764416 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168943616 unmapped: 24764416 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168943616 unmapped: 24764416 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f4d6d000/0x0/0x4ffc00000, data 0x4d8e207/0x4dc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 ms_handle_reset con 0x55a66eb71800 session 0x55a66eec34a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168943616 unmapped: 24764416 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 ms_handle_reset con 0x55a66f972400 session 0x55a66f8f10e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3171970 data_alloc: 234881024 data_used: 19890176
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168943616 unmapped: 24764416 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 ms_handle_reset con 0x55a66fca9400 session 0x55a66cd43680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.227674484s of 18.907342911s, submitted: 16
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 ms_handle_reset con 0x55a670861c00 session 0x55a66ea0fe00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168960000 unmapped: 24748032 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f4d6b000/0x0/0x4ffc00000, data 0x4d8e23a/0x4dc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168960000 unmapped: 24748032 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168976384 unmapped: 24731648 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168976384 unmapped: 24731648 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3175593 data_alloc: 234881024 data_used: 19890176
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168976384 unmapped: 24731648 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168976384 unmapped: 24731648 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f4d6b000/0x0/0x4ffc00000, data 0x4d8e23a/0x4dc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168976384 unmapped: 24731648 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168976384 unmapped: 24731648 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168976384 unmapped: 24731648 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3175593 data_alloc: 234881024 data_used: 19890176
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168976384 unmapped: 24731648 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168976384 unmapped: 24731648 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168976384 unmapped: 24731648 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f4d6b000/0x0/0x4ffc00000, data 0x4d8e23a/0x4dc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.746119499s of 12.036828041s, submitted: 6
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f4d6b000/0x0/0x4ffc00000, data 0x4d8e23a/0x4dc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 173703168 unmapped: 20004864 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 173637632 unmapped: 20070400 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216543 data_alloc: 234881024 data_used: 19898368
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 173637632 unmapped: 20070400 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f49a0000/0x0/0x4ffc00000, data 0x515923a/0x518e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 173637632 unmapped: 20070400 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 173703168 unmapped: 20004864 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 173703168 unmapped: 20004864 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f49a0000/0x0/0x4ffc00000, data 0x515923a/0x518e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170442752 unmapped: 23265280 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3209343 data_alloc: 234881024 data_used: 20041728
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170442752 unmapped: 23265280 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f49a0000/0x0/0x4ffc00000, data 0x515923a/0x518e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170442752 unmapped: 23265280 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170442752 unmapped: 23265280 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170442752 unmapped: 23265280 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f49a0000/0x0/0x4ffc00000, data 0x515923a/0x518e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170442752 unmapped: 23265280 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.676115036s of 11.802116394s, submitted: 18
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3208959 data_alloc: 234881024 data_used: 20045824
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 23224320 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 23224320 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 23224320 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 23224320 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f49a0000/0x0/0x4ffc00000, data 0x515923a/0x518e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,2,0,2])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 23224320 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3209343 data_alloc: 234881024 data_used: 20041728
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 ms_handle_reset con 0x55a66c83fc00 session 0x55a66f4d83c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 23224320 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 23224320 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 23224320 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f49a0000/0x0/0x4ffc00000, data 0x515923a/0x518e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 23224320 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f49a0000/0x0/0x4ffc00000, data 0x515923a/0x518e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f49a0000/0x0/0x4ffc00000, data 0x515923a/0x518e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 23224320 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3208783 data_alloc: 234881024 data_used: 20045824
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 23224320 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 23224320 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 23224320 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 23224320 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 23224320 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 ms_handle_reset con 0x55a66c83f000 session 0x55a66d90c960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.940848351s of 14.980152130s, submitted: 10
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 ms_handle_reset con 0x55a66c7b8c00 session 0x55a66dbfe960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f49a0000/0x0/0x4ffc00000, data 0x515923a/0x518e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3208935 data_alloc: 234881024 data_used: 20066304
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 23224320 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 23224320 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 ms_handle_reset con 0x55a66fcaf000 session 0x55a66f502000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 23224320 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 23224320 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 23224320 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f49a1000/0x0/0x4ffc00000, data 0x5159207/0x518c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3207514 data_alloc: 234881024 data_used: 20062208
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170491904 unmapped: 23216128 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 ms_handle_reset con 0x55a6707bf800 session 0x55a66f503c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170500096 unmapped: 23207936 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 ms_handle_reset con 0x55a66ca62400 session 0x55a66da40960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170508288 unmapped: 23199744 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 ms_handle_reset con 0x55a66c7b8c00 session 0x55a66f8f0b40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 handle_osd_map epochs [444,445], i have 444, src has [1,445]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 444 handle_osd_map epochs [445,445], i have 445, src has [1,445]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 445 ms_handle_reset con 0x55a66c83f000 session 0x55a66f36f2c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170549248 unmapped: 23158784 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 445 heartbeat osd_stat(store_statfs(0x4f4d6a000/0x0/0x4ffc00000, data 0x4d8fd76/0x4dc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170549248 unmapped: 23158784 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3169650 data_alloc: 234881024 data_used: 20041728
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170549248 unmapped: 23158784 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170549248 unmapped: 23158784 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.353056908s of 13.024686813s, submitted: 70
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170557440 unmapped: 23150592 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 445 ms_handle_reset con 0x55a670f95400 session 0x55a66e781860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170557440 unmapped: 23150592 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 445 heartbeat osd_stat(store_statfs(0x4f4d6c000/0x0/0x4ffc00000, data 0x4b63d76/0x4dc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170557440 unmapped: 23150592 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3168546 data_alloc: 234881024 data_used: 20041728
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170557440 unmapped: 23150592 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170557440 unmapped: 23150592 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 445 handle_osd_map epochs [446,446], i have 445, src has [1,446]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 446 ms_handle_reset con 0x55a67118a000 session 0x55a66d911e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f4d68000/0x0/0x4ffc00000, data 0x4b657b6/0x4dc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170573824 unmapped: 23134208 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170573824 unmapped: 23134208 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170573824 unmapped: 23134208 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3172024 data_alloc: 234881024 data_used: 20045824
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170573824 unmapped: 23134208 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170573824 unmapped: 23134208 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170573824 unmapped: 23134208 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f4d68000/0x0/0x4ffc00000, data 0x4b657b6/0x4dc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170573824 unmapped: 23134208 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.976563454s of 11.161118507s, submitted: 28
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 446 ms_handle_reset con 0x55a66fca7c00 session 0x55a66cca05a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170582016 unmapped: 23126016 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074804 data_alloc: 234881024 data_used: 16363520
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 446 ms_handle_reset con 0x55a66c7b8c00 session 0x55a66f40e1e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f5571000/0x0/0x4ffc00000, data 0x435e7b6/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f5571000/0x0/0x4ffc00000, data 0x435e7b6/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f5571000/0x0/0x4ffc00000, data 0x435e7b6/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074804 data_alloc: 234881024 data_used: 16363520
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f5571000/0x0/0x4ffc00000, data 0x435e7b6/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f5571000/0x0/0x4ffc00000, data 0x435e7b6/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f5571000/0x0/0x4ffc00000, data 0x435e7b6/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f5571000/0x0/0x4ffc00000, data 0x435e7b6/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074804 data_alloc: 234881024 data_used: 16363520
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f5571000/0x0/0x4ffc00000, data 0x435e7b6/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074804 data_alloc: 234881024 data_used: 16363520
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f5571000/0x0/0x4ffc00000, data 0x435e7b6/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f5571000/0x0/0x4ffc00000, data 0x435e7b6/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074804 data_alloc: 234881024 data_used: 16363520
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f5571000/0x0/0x4ffc00000, data 0x435e7b6/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.836351395s of 26.706718445s, submitted: 26
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 446 ms_handle_reset con 0x55a66fcc6400 session 0x55a66ef81a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074804 data_alloc: 234881024 data_used: 16363520
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f5571000/0x0/0x4ffc00000, data 0x435e7b6/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 446 ms_handle_reset con 0x55a66fcad800 session 0x55a66cc743c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 168148992 unmapped: 25559040 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f5571000/0x0/0x4ffc00000, data 0x435e7b6/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 446 handle_osd_map epochs [447,447], i have 446, src has [1,447]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 446 handle_osd_map epochs [447,447], i have 447, src has [1,447]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 447 ms_handle_reset con 0x55a66f135800 session 0x55a66dbcd2c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f5571000/0x0/0x4ffc00000, data 0x435e7b6/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 169213952 unmapped: 24494080 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 448 ms_handle_reset con 0x55a66fca5c00 session 0x55a66e7810e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3082624 data_alloc: 234881024 data_used: 16371712
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 169222144 unmapped: 24485888 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 169222144 unmapped: 24485888 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 169222144 unmapped: 24485888 heap: 193708032 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f5569000/0x0/0x4ffc00000, data 0x4361eb0/0x45c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,14])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 232218624 unmapped: 28663808 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f1d6b000/0x0/0x4ffc00000, data 0x7b61eb0/0x7dc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 173498368 unmapped: 87384064 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.941870689s of 10.092802048s, submitted: 24
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3708608 data_alloc: 234881024 data_used: 16371712
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 181895168 unmapped: 78987264 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 173531136 unmapped: 87351296 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 448 heartbeat osd_stat(store_statfs(0x4ec55b000/0x0/0x4ffc00000, data 0xcf61eb0/0xd1c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 169353216 unmapped: 91529216 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 169353216 unmapped: 91529216 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174653440 unmapped: 86228992 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4297856 data_alloc: 234881024 data_used: 16371712
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 90398720 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 448 heartbeat osd_stat(store_statfs(0x4ea15b000/0x0/0x4ffc00000, data 0xf361eb0/0xf5c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1,3])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 187383808 unmapped: 73498624 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 183230464 unmapped: 77651968 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 187539456 unmapped: 73342976 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170803200 unmapped: 90079232 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 2.213202477s of 10.008814812s, submitted: 41
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4897344 data_alloc: 234881024 data_used: 16371712
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 179298304 unmapped: 81584128 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 448 heartbeat osd_stat(store_statfs(0x4e555b000/0x0/0x4ffc00000, data 0x13f61eb0/0x141c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 170999808 unmapped: 89882624 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 171278336 unmapped: 89604096 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 171548672 unmapped: 89333760 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 448 ms_handle_reset con 0x55a66f972c00 session 0x55a66d4c21e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 171835392 unmapped: 89047040 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5690256 data_alloc: 234881024 data_used: 16371712
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 171835392 unmapped: 89047040 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 448 heartbeat osd_stat(store_statfs(0x4ddd5b000/0x0/0x4ffc00000, data 0x1b761eb0/0x1b9c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 448 handle_osd_map epochs [448,449], i have 448, src has [1,449]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 171851776 unmapped: 89030656 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 449 heartbeat osd_stat(store_statfs(0x4ddd57000/0x0/0x4ffc00000, data 0x1b763a81/0x1b9c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 171851776 unmapped: 89030656 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 171851776 unmapped: 89030656 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 171851776 unmapped: 89030656 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 449 ms_handle_reset con 0x55a66c7b8800 session 0x55a66f882000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5697092 data_alloc: 234881024 data_used: 16379904
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 171851776 unmapped: 89030656 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.466395378s of 11.090543747s, submitted: 47
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 449 ms_handle_reset con 0x55a66eb6f400 session 0x55a66f882f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 171859968 unmapped: 89022464 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 449 handle_osd_map epochs [450,450], i have 449, src has [1,450]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 450 ms_handle_reset con 0x55a6707bf400 session 0x55a66f883e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 171925504 unmapped: 88956928 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 450 heartbeat osd_stat(store_statfs(0x4ddd53000/0x0/0x4ffc00000, data 0x1b76562a/0x1b9ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 450 handle_osd_map epochs [451,451], i have 450, src has [1,451]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 451 ms_handle_reset con 0x55a66eb6d800 session 0x55a66bfe2f00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 451 ms_handle_reset con 0x55a66d8efc00 session 0x55a66dbcde00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 171966464 unmapped: 88915968 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 171966464 unmapped: 88915968 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5705230 data_alloc: 234881024 data_used: 16392192
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 171950080 unmapped: 88932352 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 451 handle_osd_map epochs [452,452], i have 451, src has [1,452]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 453 ms_handle_reset con 0x55a66dbce400 session 0x55a66f1052c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 171991040 unmapped: 88891392 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 454 ms_handle_reset con 0x55a66eb6d800 session 0x55a66d911860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 172023808 unmapped: 88858624 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 205791232 unmapped: 55091200 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 454 heartbeat osd_stat(store_statfs(0x4ddd44000/0x0/0x4ffc00000, data 0x1b76c39e/0x1b9d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,8])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 454 heartbeat osd_stat(store_statfs(0x4ddd44000/0x0/0x4ffc00000, data 0x1b76c39e/0x1b9d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [0,0,0,0,0,0,0,3,0,0,8])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 218497024 unmapped: 42385408 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6165912 data_alloc: 234881024 data_used: 16400384
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 180920320 unmapped: 79962112 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.086527348s of 10.098424911s, submitted: 85
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 172924928 unmapped: 87957504 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 177389568 unmapped: 83492864 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 173293568 unmapped: 87588864 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 173367296 unmapped: 87515136 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 454 heartbeat osd_stat(store_statfs(0x4d3947000/0x0/0x4ffc00000, data 0x25b6c39e/0x25dd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7053448 data_alloc: 234881024 data_used: 16400384
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 173662208 unmapped: 87220224 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 177922048 unmapped: 82960384 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 454 heartbeat osd_stat(store_statfs(0x4d1547000/0x0/0x4ffc00000, data 0x27f6c39e/0x281d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 194920448 unmapped: 65961984 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 182509568 unmapped: 78372864 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 178511872 unmapped: 82370560 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7726600 data_alloc: 234881024 data_used: 16400384
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 178692096 unmapped: 82190336 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.343927860s of 10.009737015s, submitted: 48
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 178839552 unmapped: 82042880 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 454 heartbeat osd_stat(store_statfs(0x4cb147000/0x0/0x4ffc00000, data 0x2e36c39e/0x2e5d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174645248 unmapped: 86237184 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174645248 unmapped: 86237184 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 178937856 unmapped: 81944576 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7902456 data_alloc: 234881024 data_used: 16400384
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 86138880 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174759936 unmapped: 86122496 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 454 heartbeat osd_stat(store_statfs(0x4ca547000/0x0/0x4ffc00000, data 0x2ef6c39e/0x2f1d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [0,0,0,0,0,0,0,2])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 454 ms_handle_reset con 0x55a670860400 session 0x55a66f503680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174768128 unmapped: 86114304 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 454 heartbeat osd_stat(store_statfs(0x4ca547000/0x0/0x4ffc00000, data 0x2ef6c39e/0x2f1d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174768128 unmapped: 86114304 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174768128 unmapped: 86114304 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7902233 data_alloc: 234881024 data_used: 16400384
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 454 heartbeat osd_stat(store_statfs(0x4ca547000/0x0/0x4ffc00000, data 0x2ef6c39e/0x2f1d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 454 handle_osd_map epochs [455,455], i have 454, src has [1,455]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174768128 unmapped: 86114304 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 454 handle_osd_map epochs [455,455], i have 455, src has [1,455]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 455 heartbeat osd_stat(store_statfs(0x4ca543000/0x0/0x4ffc00000, data 0x2ef6df6f/0x2f1da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174784512 unmapped: 86097920 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174784512 unmapped: 86097920 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 455 heartbeat osd_stat(store_statfs(0x4ca543000/0x0/0x4ffc00000, data 0x2ef6df6f/0x2f1da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174792704 unmapped: 86089728 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.369784355s of 12.429459572s, submitted: 18
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 455 heartbeat osd_stat(store_statfs(0x4ca543000/0x0/0x4ffc00000, data 0x2ef6df6f/0x2f1da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174800896 unmapped: 86081536 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 455 handle_osd_map epochs [455,456], i have 455, src has [1,456]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7908667 data_alloc: 234881024 data_used: 16408576
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174833664 unmapped: 86048768 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174833664 unmapped: 86048768 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174833664 unmapped: 86048768 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 456 heartbeat osd_stat(store_statfs(0x4ca541000/0x0/0x4ffc00000, data 0x2ef6fade/0x2f1dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174833664 unmapped: 86048768 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174833664 unmapped: 86048768 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7908667 data_alloc: 234881024 data_used: 16408576
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174833664 unmapped: 86048768 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 456 handle_osd_map epochs [457,457], i have 456, src has [1,457]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174850048 unmapped: 86032384 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 457 ms_handle_reset con 0x55a66ea30800 session 0x55a66ef81a40
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174866432 unmapped: 86016000 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 457 heartbeat osd_stat(store_statfs(0x4ca53e000/0x0/0x4ffc00000, data 0x2ef7155d/0x2f1df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174866432 unmapped: 86016000 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 457 heartbeat osd_stat(store_statfs(0x4ca53e000/0x0/0x4ffc00000, data 0x2ef7155d/0x2f1df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174866432 unmapped: 86016000 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7911641 data_alloc: 234881024 data_used: 16408576
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174866432 unmapped: 86016000 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 457 ms_handle_reset con 0x55a66ffa0800 session 0x55a66cca05a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 457 handle_osd_map epochs [458,458], i have 457, src has [1,458]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.071290970s of 12.499498367s, submitted: 26
Oct  1 13:15:25 np0005464891 ceph-mon[74303]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174891008 unmapped: 85991424 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 458 ms_handle_reset con 0x55a66fca5000 session 0x55a66e781860
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174899200 unmapped: 85983232 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 458 ms_handle_reset con 0x55a66da55c00 session 0x55a66f36f2c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174907392 unmapped: 85975040 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-mon[74303]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/962726690' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174907392 unmapped: 85975040 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 458 heartbeat osd_stat(store_statfs(0x4ca53c000/0x0/0x4ffc00000, data 0x2ef72fc0/0x2f1e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7915209 data_alloc: 234881024 data_used: 16408576
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174915584 unmapped: 85966848 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 458 heartbeat osd_stat(store_statfs(0x4ca53c000/0x0/0x4ffc00000, data 0x2ef72fc0/0x2f1e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 458 ms_handle_reset con 0x55a66fcaf000 session 0x55a66da40960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174923776 unmapped: 85958656 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174956544 unmapped: 85925888 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 458 heartbeat osd_stat(store_statfs(0x4ca53c000/0x0/0x4ffc00000, data 0x2ef72fc0/0x2f1e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [0,0,0,0,0,1])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 174964736 unmapped: 85917696 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 458 ms_handle_reset con 0x55a66eaf8c00 session 0x55a66dbfe960
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 184819712 unmapped: 76062720 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 458 ms_handle_reset con 0x55a66da55c00 session 0x55a66f4d9680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 458 ms_handle_reset con 0x55a66eaf8c00 session 0x55a66eec0000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 458 heartbeat osd_stat(store_statfs(0x4c9ae8000/0x0/0x4ffc00000, data 0x2f9c5fe9/0x2fc36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8001303 data_alloc: 234881024 data_used: 16408576
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 175693824 unmapped: 85188608 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.252787590s of 10.202415466s, submitted: 69
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 458 ms_handle_reset con 0x55a66fca8000 session 0x55a66dbfe5a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 458 handle_osd_map epochs [458,459], i have 458, src has [1,459]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 175702016 unmapped: 85180416 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 459 ms_handle_reset con 0x55a670f94800 session 0x55a66eec2780
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 176308224 unmapped: 84574208 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 459 handle_osd_map epochs [460,460], i have 459, src has [1,460]
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 ms_handle_reset con 0x55a66eb6c400 session 0x55a66d5b25a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 176316416 unmapped: 84566016 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 ms_handle_reset con 0x55a670f94c00 session 0x55a66efab0e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 176316416 unmapped: 84566016 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8126855 data_alloc: 234881024 data_used: 16416768
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 176398336 unmapped: 84484096 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 heartbeat osd_stat(store_statfs(0x4c8d34000/0x0/0x4ffc00000, data 0x3077378e/0x309e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 176398336 unmapped: 84484096 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 176398336 unmapped: 84484096 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 176398336 unmapped: 84484096 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 176398336 unmapped: 84484096 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8126855 data_alloc: 234881024 data_used: 16416768
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 176406528 unmapped: 84475904 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 heartbeat osd_stat(store_statfs(0x4c8d34000/0x0/0x4ffc00000, data 0x3077378e/0x309e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 176406528 unmapped: 84475904 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 heartbeat osd_stat(store_statfs(0x4c8d34000/0x0/0x4ffc00000, data 0x3077378e/0x309e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 176406528 unmapped: 84475904 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 176406528 unmapped: 84475904 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 ms_handle_reset con 0x55a66fca7400 session 0x55a66efabe00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 ms_handle_reset con 0x55a670f94800 session 0x55a66efaa3c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 176406528 unmapped: 84475904 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 ms_handle_reset con 0x55a66fcc7c00 session 0x55a66efaa000
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 ms_handle_reset con 0x55a66ffa0400 session 0x55a66efaa5a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.732727051s of 13.546743393s, submitted: 32
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 ms_handle_reset con 0x55a66fca7400 session 0x55a66efab680
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 ms_handle_reset con 0x55a66fcc7c00 session 0x55a66f3343c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 ms_handle_reset con 0x55a66ffa0400 session 0x55a66f335e00
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 ms_handle_reset con 0x55a670f94800 session 0x55a66f335c20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 ms_handle_reset con 0x55a66da54800 session 0x55a66f3350e0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8189796 data_alloc: 234881024 data_used: 16416768
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 177504256 unmapped: 83378176 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 177504256 unmapped: 83378176 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 177504256 unmapped: 83378176 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 heartbeat osd_stat(store_statfs(0x4c85fa000/0x0/0x4ffc00000, data 0x30eae79e/0x31124000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 177504256 unmapped: 83378176 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 ms_handle_reset con 0x55a66da54800 session 0x55a66f6df2c0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 177504256 unmapped: 83378176 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 ms_handle_reset con 0x55a66fca7400 session 0x55a66cc705a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 ms_handle_reset con 0x55a66fcc7c00 session 0x55a66f36f4a0
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 heartbeat osd_stat(store_statfs(0x4c85fa000/0x0/0x4ffc00000, data 0x30eae79e/0x31124000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8191948 data_alloc: 234881024 data_used: 16416768
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 177504256 unmapped: 83378176 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 heartbeat osd_stat(store_statfs(0x4c85f9000/0x0/0x4ffc00000, data 0x30eae7c1/0x31125000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 ms_handle_reset con 0x55a66e9c1800 session 0x55a66ce22d20
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 177512448 unmapped: 83369984 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 177528832 unmapped: 83353600 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 177438720 unmapped: 83443712 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 179560448 unmapped: 81321984 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8237062 data_alloc: 234881024 data_used: 22597632
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 179560448 unmapped: 81321984 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 heartbeat osd_stat(store_statfs(0x4c85f9000/0x0/0x4ffc00000, data 0x30eae7c1/0x31125000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 179560448 unmapped: 81321984 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 179560448 unmapped: 81321984 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 179560448 unmapped: 81321984 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 179560448 unmapped: 81321984 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8237062 data_alloc: 234881024 data_used: 22597632
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 179560448 unmapped: 81321984 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 179560448 unmapped: 81321984 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 heartbeat osd_stat(store_statfs(0x4c85f9000/0x0/0x4ffc00000, data 0x30eae7c1/0x31125000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 179560448 unmapped: 81321984 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 179568640 unmapped: 81313792 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 179625984 unmapped: 81256448 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.908378601s of 20.183137894s, submitted: 41
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8317450 data_alloc: 234881024 data_used: 22683648
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 183730176 unmapped: 77152256 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 heartbeat osd_stat(store_statfs(0x4c7f17000/0x0/0x4ffc00000, data 0x316e77c1/0x31807000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,3])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 185843712 unmapped: 75038720 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 187424768 unmapped: 73457664 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 187465728 unmapped: 73416704 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 187572224 unmapped: 73310208 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 heartbeat osd_stat(store_statfs(0x4c7ba2000/0x0/0x4ffc00000, data 0x31b707c1/0x31b78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8356334 data_alloc: 234881024 data_used: 23797760
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 187695104 unmapped: 73187328 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 187727872 unmapped: 73154560 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 187736064 unmapped: 73146368 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 187736064 unmapped: 73146368 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: osd.1 460 heartbeat osd_stat(store_statfs(0x4c7b9c000/0x0/0x4ffc00000, data 0x31b797c1/0x31b81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,9])
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 187842560 unmapped: 73039872 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 2.605467796s of 10.086105347s, submitted: 139
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: bluestore.MempoolThread(0x55a66b2e7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8364086 data_alloc: 234881024 data_used: 23801856
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 187850752 unmapped: 73031680 heap: 260882432 old mem: 2845415832 new mem: 2845415832
Oct  1 13:15:25 np0005464891 ceph-osd[88747]: prioritycache tune_memory target: 4294967296 mapped: 187850752 unmapped: 73031680 heap: 260882432 old mem: 2845415832 new mem: 2845415832
